Who is responsible, and how, for released installation packages and 3rd party packs? (e.g. on https://yum.postgresql.org/9.6/redhat/rhel-7.3-x86_64/)

2018-08-01 Thread Alexandru Lazarev
Hi PG Community,

In my company I found that the PG installation on our deployed OS images is taken
from here: https://yum.postgresql.org/9.6/redhat/rhel-7.3-x86_64/

We are using PG 9.6.5 or 9.6.7 + pgpool + plv8 + others

Some of the RPMs for CentOS are taken from that URL (PG installation, plv8).

My question is:
Who is building these RPMs and uploading them to that URL? What are the
criteria for building one RPM or another, and for their versions?
Why I am asking:
I saw on that URL there are PG 9.6.8 and 9.6.9 - are only the latest
2 build releases maintained?

plv8 - there are versions 2.0.0-1 and 2.1.0, while the latest plv8 is already
2.3.7 and the latest in the 2.1.x line is 2.1.3, containing major fixes.


Thanks,
AlexL


Re: Pg_rewind cannot load history wal

2018-08-01 Thread Abhinav Mehta
Whenever you do a switch-over, Postgres creates a new WAL timeline, which 
simplifies managing the PITR process.

During the switch-over (promoting B as master) some delta records were written 
to A; that's where this timeline issue comes from.

Since A has those delta records, it can't replicate from B, and hence you are 
getting that error: your old master A can't simply become a slave of B 
without being rewound first.
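For context, the timeline .history files that matter here are tiny text files in pg_wal/; an illustrative one (values modeled on the error messages quoted below, not an actual file from this setup) might look like:

```
$ cat pg_wal/00000002.history
1	0/57639D0	no recovery target specified
```

Each line gives the parent timeline ID, the WAL switch point, and a reason; the standby has to be able to fetch this file (via restore_command or streaming) to follow the new timeline.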
— A

> On 02-Aug-2018, at 2:39 AM, Richard Schmidt  
> wrote:
> 
> We have been struggling to get pg_rewind to work.  In desperation we have 
> been trying to make the use case as simple as possible :)
>  
> We have two database servers running Postgres 10 on two different machines in 
> the normal Primary/Standby configuration.
> Both machines write their WAL archive logs to the same shared drive (called 
> /ice_dev/wal_archive).
> The configuration has the following settings:
>archive_mode = always
>   archive_command = 'test ! -f /ice-dev/wal_archive/%f && cp %p 
> /ice-dev/wal_archive/%f'
>   full_page_writes = on
>   wal_log_hints = on
> Checksums are enabled
>  
> Our procedure that runs on machine A and B is as follows:
>  
> Build new databases on A and B, and configure A as Primary and B as Standby 
> databases.
> Make some changes to A (the primary) and check that they are replicated 
> to B (the standby)
> Promote B to be the new primary
> Switch off A (the original primary)
> Add the replication slot to B (the new primary) for A (soon to be standby)
> Add a recovery.conf to A (soon to be standby). File contains 
> recovery_target_timeline = 'latest' and restore_command = 'cp 
> /ice-dev/wal_archive/%f "%p"
> Run pg_rewind on A – this appears to work as it returns the message ‘source 
> and target cluster are on the same timeline no rewind required’;
> Start up server A (now a slave)
>  
> At this point A is in a read only mode but not replicating. Its logs contain 
> the following repeating message
>  
> 2018-08-01 20:30:58 UTC [7257]: [1] user=,db=,app=,client= FATAL:  could not 
> start WAL streaming: ERROR:  requested starting point 0/600 on timeline 1 
> is not in this server's history
> DETAIL:  This server's history forked from timeline 1 at 0/57639D0.
> cp: cannot stat ‘/ice-dev/wal_archive/0002.history’: No such file or 
> directory
> cp: cannot stat ‘/ice-dev/wal_archive/0003.history’: No such file or 
> directory
> cp: cannot stat ‘/ice-dev/wal_archive/0002.history’: No such file or 
> directory
> 2018-08-01 20:30:58 UTC [6840]: [48] user=,db=,app=,client= LOG:  new 
> timeline 2 forked off current database system timeline 1 before current 
> recovery point 0/698
> cp: cannot stat ‘/ice-dev/wal_archive/00010006’: No such file 
> or directory
>  
> We can see the 0002.history file in B's wal directory... but it never 
> appears in the wal_archive directory - not even if we issue a checkpoint or 
> even restart the server.
> 0003.history does not appear to exist on either of the machines.
>  
> Any ideas what we are doing wrong?
> Thanks. Richard
>  
>  
>  
>  
>  
>  
> 
> This email and any attachments may contain confidential information. If you 
> are not the intended recipient, your use or communication of the information 
> is strictly prohibited. If you have received this message in error please 
> notify MetService immediately.
> 



Problem building libpq

2018-08-01 Thread Igor Korot
Hi,
I recently updated my compiler, going from MSVC 2010 to MSVC 2017.
Now I'm trying to rebuild libpq with it.

[code]
NMAKE : fatal error U1064: MAKEFILE not found and no target specified
Stop.

C:\Program Files (x86)\Microsoft Visual Studio\2017\Community>cd c:\Users\Igor

c:\Users\Igor>nmake

Microsoft (R) Program Maintenance Utility Version 14.14.26433.0
Copyright (C) Microsoft Corporation.  All rights reserved.

NMAKE : fatal error U1064: MAKEFILE not found and no target specified
Stop.

c:\Users\Igor>cd OneDrive\Documents\dbhandler_app\libpg

c:\Users\Igor\OneDrive\Documents\dbhandler_app\libpg>nmake /f win32.mak DEBUG=1

Microsoft (R) Program Maintenance Utility Version 14.14.26433.0
Copyright (C) Microsoft Corporation.  All rights reserved.

cd include
NMAKE : fatal error U1077: 'cd' : return code '0x1'
Stop.
[/code]

What am I missing?

Thank you.



Pg_rewind cannot load history wal

2018-08-01 Thread Richard Schmidt
We have been struggling to get pg_rewind to work.  In desperation we have been 
trying to make the use case as simple as possible :)

We have two database servers running Postgres 10 on two different machines in 
the normal Primary/Standby configuration.
Both machines write their WAL archive logs to the same shared drive (called 
/ice_dev/wal_archive).
The configuration has the following settings:
   archive_mode = always
  archive_command = 'test ! -f /ice-dev/wal_archive/%f && cp %p 
/ice-dev/wal_archive/%f'
  full_page_writes = on
  wal_log_hints = on
Checksums are enabled

Our procedure that runs on machine A and B is as follows:


  1.  Build new databases on A and B, and configure A as Primary and B as 
Standby databases.
  2.  Make some changes to A (the primary) and check that they are 
replicated to B (the standby)
  3.  Promote B to be the new primary
  4.  Switch off A (the original primary)
  5.  Add the replication slot to B (the new primary) for A (soon to be standby)
  6.  Add a recovery.conf to A (soon to be standby). File contains 
recovery_target_timeline = 'latest' and restore_command = 'cp 
/ice-dev/wal_archive/%f "%p"
  7.  Run pg_rewind on A - this appears to work as it returns the message 
'source and target cluster are on the same timeline no rewind required';
  8.  Start up server A (now a slave)
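For reference, the restore_command in step 6 above appears to be missing its closing quote as pasted; a well-formed recovery.conf for this setup might look like the following sketch (the conninfo host and slot name are placeholders, not taken from the thread):

```
# hypothetical recovery.conf for server A as the new standby
standby_mode = 'on'
primary_conninfo = 'host=server-b port=5432'    # placeholder connection string
primary_slot_name = 'node_a_slot'               # slot added in step 5 (name assumed)
recovery_target_timeline = 'latest'
restore_command = 'cp /ice-dev/wal_archive/%f "%p"'
```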

At this point A is in a read only mode but not replicating. Its logs contain 
the following repeating message

2018-08-01 20:30:58 UTC [7257]: [1] user=,db=,app=,client= FATAL:  could not 
start WAL streaming: ERROR:  requested starting point 0/600 on timeline 1 
is not in this server's history
DETAIL:  This server's history forked from timeline 1 at 0/57639D0.
cp: cannot stat '/ice-dev/wal_archive/0002.history': No such file or 
directory
cp: cannot stat '/ice-dev/wal_archive/0003.history': No such file or 
directory
cp: cannot stat '/ice-dev/wal_archive/0002.history': No such file or 
directory
2018-08-01 20:30:58 UTC [6840]: [48] user=,db=,app=,client= LOG:  new timeline 
2 forked off current database system timeline 1 before current recovery point 
0/698
cp: cannot stat '/ice-dev/wal_archive/00010006': No such file 
or directory

We can see the 0002.history file in B's wal directory... but it never 
appears in the wal_archive directory - not even if we issue a checkpoint or even 
restart the server.
0003.history does not appear to exist on either of the machines.

Any ideas what we are doing wrong?
Thanks. Richard










Re: Template0 datfrozenxid age is 160million and progressing

2018-08-01 Thread Stephen Eilert
160 million is a very low number. I manage production databases which
reach this value in a day easily. As others said, 200 million is the
default threshold for the anti-wraparound vacuums. I wouldn't worry,
especially for template0.
That said, there is nothing preventing you from temporarily changing
datallowconn, running VACUUM (which should be very quick) and then
changing it back. But you should not have to.  I do that in our
production database, but only because it suffers from a bad schema
design and we ended up with thousands of tables, which is too much for
the autovacuum workers to cope with alone, so they need a manual
"boost". I still don't disable autovacuum.
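The temporary toggle described above could be sketched as follows (hedged: ALLOW_CONNECTIONS requires PostgreSQL 9.5+ and superuser rights):

```sql
-- allow connections to template0 just long enough to freeze it
ALTER DATABASE template0 WITH ALLOW_CONNECTIONS true;
-- then, connected to template0, run:
--   VACUUM FREEZE;
ALTER DATABASE template0 WITH ALLOW_CONNECTIONS false;
```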
I don't really understand the bit about autovacuum changing query
performance. In which scenario would it be preferable to have outdated
analyzer statistics? This would be like running a system with garbage
collection disabled because GC can increase the amount of free memory.
That's the whole point.
— Stephen



Re: Adding terminal title support for psqlrc

2018-08-01 Thread Adrian Klaver

On 08/01/2018 10:55 AM, ik wrote:

hi,

Thank you, it does not affect the terminal title.

As Adrian answered me, it is not supported :/


I was wrong:

https://petereisentraut.blogspot.com/2009/09/how-to-set-xterm-titles-in-psql.html


To get this to work in Konsole I had to edit the profile to have Tab 
title format use %w (Window title set by shell).
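Following the approach in the blog post above, a minimal ~/.psqlrc sketch might be (the exact escape handling can vary by terminal, and Konsole needs the profile tweak just mentioned):

```
-- put the user and database into the xterm title via an OSC 0 sequence,
-- wrapped in %[ ... %] so psql treats it as zero-width
\set PROMPT1 '%[%033]0;psql: %n@%/%007%]%n@%/%R%# '
```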







On Wed, Aug 1, 2018 at 4:47 PM, Tom Lane wrote:


ik <ido...@gmail.com> writes:
> Is there a way to add terminal title information for psqlrc that can
> display what schema I am in, what operation is executing, and stuff
> like that?

Most terminal programs recognize escape sequences to set the window
title.  So you could set values for PROMPTn that cause updates in
the title.  There's an example in


https://www.postgresql.org/docs/devel/static/app-psql.html#APP-PSQL-PROMPTING



although it's just talking about a color change.

                         regards, tom lane





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Adding terminal title support for psqlrc

2018-08-01 Thread ik
hi,

Thank you, it does not affect the terminal title.

As Adrian answered me, it is not supported :/



On Wed, Aug 1, 2018 at 4:47 PM, Tom Lane  wrote:

> ik  writes:
> > Is there a way to add terminal title information for psqlrc that can
> > display what schema I am in, what operation is executing, and stuff
> > like that?
>
> Most terminal programs recognize escape sequences to set the window
> title.  So you could set values for PROMPTn that cause updates in
> the title.  There's an example in
>
> https://www.postgresql.org/docs/devel/static/app-psql.
> html#APP-PSQL-PROMPTING
>
> although it's just talking about a color change.
>
> regards, tom lane
>


Re: Template0 datfrozenxid age is 160million and progressing

2018-08-01 Thread Vik Fearing
On 01/08/18 18:36, Andres Freund wrote:
> It's really annoying to
> get a hard-to-kill anti-wraparound autovacuum on an insert-only table,
> where it's the only thing that'll trigger the autovacuum.

Somebody should do something about that.
-- 
Vik Fearing  +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: Template0 datfrozenxid age is 160million and progressing

2018-08-01 Thread Andres Freund
On 2018-08-01 12:20:15 -0400, Alvaro Herrera wrote:
> On 2018-Aug-01, Andres Freund wrote:
> 
> > On 2018-08-01 12:07:16 -0400, Tom Lane wrote:
> > > Andres Freund  writes:
> > > > On 2018-08-01 10:24:24 -0400, Tom Lane wrote:
> > > >> IMO, the action you need to take is enabling autovacuum.  We've
> > > >> seen many many people go down the path you are taking, and it's
> > > >> generally led to no good in the end.  Manual vacuuming tends
> > > >> to miss stuff, and it cannot react adequately to activity spikes.
> > > 
> > > > But it shouldn't matter here, autovacuum will start regardless, no?
> > > 
> > > Sure, once it decides that emergency anti-wraparound vacuuming is
> > > necessary.  I really doubt the OP wants that to happen; it's the
> > > exact opposite of non-intrusive.
> > 
> > That's solely what would trigger it were autovacuum enabled, too? I've
> > complained about "emergency anti-wraparound" being anything but
> > emergency (they're largely unavoidable unless you manually script it),
> > but they're what happen once autovacuum_freeze_max_age is reached, and
> > that's the only trigger for vacuuming old relations independent of other
> > activity?
> 
> With a small database like template0, it doesn't matter.  The vacuuming
> is going to be over before OP realizes it has happened anyway.
> Certainly having it happen on a normal-sized table can become
> problematic, but presumably OP has taken steps to avoid it when
> disabling autovacuum (which is why only template0 is getting into
> trouble.)

Right.


> I think emergency vacuum should behave differently (not scan indexes,
> just apply HOT page prune and clear old XIDs/multixacts), which would
> make it much faster, but that's a separate line of thought (and of
> development).

What I'd love is for freeze_max_age triggered vacuums *not* to be
emergency vacuums. They should just be normal ones. There should be a
separate GUC that triggers the emergency bit. It's really annoying to
get a hard-to-kill anti-wraparound autovacuum on an insert-only table,
where it's the only thing that'll trigger the autovacuum.

Greetings,

Andres Freund



Re: Template0 datfrozenxid age is 160million and progressing

2018-08-01 Thread Alvaro Herrera
On 2018-Aug-01, Andres Freund wrote:

> On 2018-08-01 12:07:16 -0400, Tom Lane wrote:
> > Andres Freund  writes:
> > > On 2018-08-01 10:24:24 -0400, Tom Lane wrote:
> > >> IMO, the action you need to take is enabling autovacuum.  We've
> > >> seen many many people go down the path you are taking, and it's
> > >> generally led to no good in the end.  Manual vacuuming tends
> > >> to miss stuff, and it cannot react adequately to activity spikes.
> > 
> > > But it shouldn't matter here, autovacuum will start regardless, no?
> > 
> > Sure, once it decides that emergency anti-wraparound vacuuming is
> > necessary.  I really doubt the OP wants that to happen; it's the
> > exact opposite of non-intrusive.
> 
> That's solely what would trigger it were autovacuum enabled, too? I've
> complained about "emergency anti-wraparound" being anything but
> emergency (they're largely unavoidable unless you manually script it),
> but they're what happen once autovacuum_freeze_max_age is reached, and
> that's the only trigger for vacuuming old relations independent of other
> activity?

With a small database like template0, it doesn't matter.  The vacuuming
is going to be over before OP realizes it has happened anyway.
Certainly having it happen on a normal-sized table can become
problematic, but presumably OP has taken steps to avoid it when
disabling autovacuum (which is why only template0 is getting into
trouble.)

I think emergency vacuum should behave differently (not scan indexes,
just apply HOT page prune and clear old XIDs/multixacts), which would
make it much faster, but that's a separate line of thought (and of
development).

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Template0 datfrozenxid age is 160million and progressing

2018-08-01 Thread Andres Freund
On 2018-08-01 12:07:16 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > On 2018-08-01 10:24:24 -0400, Tom Lane wrote:
> >> IMO, the action you need to take is enabling autovacuum.  We've
> >> seen many many people go down the path you are taking, and it's
> >> generally led to no good in the end.  Manual vacuuming tends
> >> to miss stuff, and it cannot react adequately to activity spikes.
> 
> > But it shouldn't matter here, autovacuum will start regardless, no?
> 
> Sure, once it decides that emergency anti-wraparound vacuuming is
> necessary.  I really doubt the OP wants that to happen; it's the
> exact opposite of non-intrusive.

That's solely what would trigger it were autovacuum enabled, too? I've
complained about "emergency anti-wraparound" being anything but
emergency (they're largely unavoidable unless you manually script it),
but they're what happen once autovacuum_freeze_max_age is reached, and
that's the only trigger for vacuuming old relations independent of other
activity?

Greetings,

Andres Freund



Re: Template0 datfrozenxid age is 160million and progressing

2018-08-01 Thread Tom Lane
Andres Freund  writes:
> On 2018-08-01 10:24:24 -0400, Tom Lane wrote:
>> IMO, the action you need to take is enabling autovacuum.  We've
>> seen many many people go down the path you are taking, and it's
>> generally led to no good in the end.  Manual vacuuming tends
>> to miss stuff, and it cannot react adequately to activity spikes.

> But it shouldn't matter here, autovacuum will start regardless, no?

Sure, once it decides that emergency anti-wraparound vacuuming is
necessary.  I really doubt the OP wants that to happen; it's the
exact opposite of non-intrusive.

regards, tom lane



Re: Template0 datfrozenxid age is 160million and progressing

2018-08-01 Thread Andres Freund
Hi,

On 2018-08-01 10:24:24 -0400, Tom Lane wrote:
> Rijo Roy  writes:
> > Thanks for the advice. Autovacuum is disabled in the environment and I 
> > cannot take a call on enabling it back. I can only run manual vacuum on the 
> > database as the dev team fears autoanalyze changing the query performance. 
> > Do you still think we don't need to take any actions. 
> 
> IMO, the action you need to take is enabling autovacuum.  We've
> seen many many people go down the path you are taking, and it's
> generally led to no good in the end.  Manual vacuuming tends
> to miss stuff, and it cannot react adequately to activity spikes.

But it shouldn't matter here, autovacuum will start regardless, no?

Greetings,

Andres Freund



Re: ALTER ROLE SET search_path produced by pg_dumpall gives : ERROR: syntax error at or near "$" .

2018-08-01 Thread Tom Lane
Achilleas Mantzios  writes:
> On 30/07/2018 17:00, Tom Lane wrote:
>> It looks like your hack was to work around the bug that was fixed
>> properly in 742869946.  You should be able to undo that now ...

> On pgsql 10.4 , I reverted (undid) this patch/hack :

> -   GUC_LIST_INPUT | GUC_LIST_QUOTE
> +   GUC_LIST_INPUT | GUC_REPORT

What I meant was you should be able to undo the removal of GUC_LIST_QUOTE.
The other change is independent, and it seems to be about working around
a bug in pgbouncer.  You ought to discuss that with the pgbouncer folk.

> I tested with :
> GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_REPORT
> as well, but then I get the re-quote problem I was telling about :
> At first connection the search path is set correctly : 
> bdynacom,epaybdynacom,epay,"$user", public
> but after server disconnection and re-connection the search path becomes : 
> "bdynacom,epaybdynacom,epay,""$user"", public"
> which is wrong.

This seems like it might be a bug in pgbouncer as well; or at least
an oversight, in that it's not expecting any GUC_REPORT parameters
to need GUC_LIST_QUOTE treatment.

regards, tom lane



Re: Weird behaviour of the planner

2018-08-01 Thread Guillaume Lelarge
2018-08-01 16:59 GMT+02:00 David Rowley :

> On 2 August 2018 at 02:48, Guillaume Lelarge 
> wrote:
> > EXPLAIN (ANALYZE) SELECT DISTINCT * FROM gleu2;
> >
> >QUERY PLAN
> > 
> -
> >  Unique  (cost=12005.97..12049.20 rows=1 width=1430) (actual
> > time=20055.294..20323.348 rows=1 loops=1)
> >->  Sort  (cost=12005.97..12006.30 rows=132 width=1430) (actual
> > time=20055.290..20105.738 rows=6 loops=1)
> >  Sort Key: (... 130 columns ...)
> >  Sort Method: external sort  Disk: 84464kB
> >  ->  Seq Scan on gleu2  (cost=0.00..12001.32 rows=132 width=1430)
> > (actual time=0.109..114.142 rows=6 loops=1)
> >  Planning time: 10.012 ms
> >  Execution time: 20337.854 ms
> > (7 rows)
> >
> > That looks quite good. The total cost is 12049, so I expect this plan to
> > have the smaller cost as it's the chosen plan. Now, I'm disabling Sort,
> and
> > here is what I get:
> >
> > SET enable_sort TO off;
> > EXPLAIN (ANALYZE) SELECT DISTINCT * FROM gleu2;
> >
> >QUERY PLAN
> > 
> -
> >  HashAggregate  (cost=12044.22..12044.23 rows=1 width=1430) (actual
> > time=508.342..508.343 rows=1 loops=1)
> >Group Key: (... 130 columns ...)
> >->  Seq Scan on gleu2  (cost=0.00..12001.32 rows=132 width=1430)
> (actual
> > time=0.036..57.088 rows=6 loops=1)
> >  Planning time: 1.335 ms
> >  Execution time: 508.529 ms
> > (5 rows)
>
> When the planner adds a new path it compares the cost not exactly, but
> with a 'fuzz factor'.  It's very likely that the hashagg path did not
> make it as it was not fuzzily any cheaper than the unique path. By
> default, this fuzz factor is 1%.
>
>
That may definitely be it. There's not much of a difference in the total
costs.

It seems in your case the costs don't quite match reality which is
> quite likely due to the poor row estimates on "gleu2".  Has that table
> been analyzed recently? or is there some reason that auto-vacuum is
> not getting to it?
>
>
This is a small test case of a much bigger query joining a large number of
tables, materialized views, views (calling functions), etc. The actual plan
contains 84 nodes (32 scans, lots of joins, and a few other nodes). The
mis-estimate mirrors what the big query gives me.

There's a bit more reading of what I'm talking about in
> https://github.com/postgres/postgres/blob/master/src/
> backend/optimizer/util/pathnode.c#L141
>
>
I'm gonna read that. Thank you.


-- 
Guillaume.


Re: Weird behaviour of the planner

2018-08-01 Thread David Rowley
On 2 August 2018 at 02:48, Guillaume Lelarge  wrote:
> EXPLAIN (ANALYZE) SELECT DISTINCT * FROM gleu2;
>
>QUERY PLAN
> -
>  Unique  (cost=12005.97..12049.20 rows=1 width=1430) (actual
> time=20055.294..20323.348 rows=1 loops=1)
>->  Sort  (cost=12005.97..12006.30 rows=132 width=1430) (actual
> time=20055.290..20105.738 rows=6 loops=1)
>  Sort Key: (... 130 columns ...)
>  Sort Method: external sort  Disk: 84464kB
>  ->  Seq Scan on gleu2  (cost=0.00..12001.32 rows=132 width=1430)
> (actual time=0.109..114.142 rows=6 loops=1)
>  Planning time: 10.012 ms
>  Execution time: 20337.854 ms
> (7 rows)
>
> That looks quite good. The total cost is 12049, so I expect this plan to
> have the smaller cost as it's the chosen plan. Now, I'm disabling Sort, and
> here is what I get:
>
> SET enable_sort TO off;
> EXPLAIN (ANALYZE) SELECT DISTINCT * FROM gleu2;
>
>QUERY PLAN
> -
>  HashAggregate  (cost=12044.22..12044.23 rows=1 width=1430) (actual
> time=508.342..508.343 rows=1 loops=1)
>Group Key: (... 130 columns ...)
>->  Seq Scan on gleu2  (cost=0.00..12001.32 rows=132 width=1430) (actual
> time=0.036..57.088 rows=6 loops=1)
>  Planning time: 1.335 ms
>  Execution time: 508.529 ms
> (5 rows)

When the planner adds a new path it compares the cost not exactly, but
with a 'fuzz factor'.  It's very likely that the hashagg path did not
make it as it was not fuzzily any cheaper than the unique path. By
default, this fuzz factor is 1%.

It seems in your case the costs don't quite match reality which is
quite likely due to the poor row estimates on "gleu2".  Has that table
been analyzed recently? or is there some reason that auto-vacuum is
not getting to it?

There's a bit more reading of what I'm talking about in
https://github.com/postgres/postgres/blob/master/src/backend/optimizer/util/pathnode.c#L141
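A toy model of that fuzzy comparison, using the costs from this thread (this is an illustration only, not PostgreSQL's actual compare_path_costs_fuzzily()):

```python
# Toy model of the planner's fuzzy path-cost comparison (fuzz factor ~1%).
# Illustration only -- not PostgreSQL's actual compare_path_costs_fuzzily().
STD_FUZZ_FACTOR = 1.01

def fuzzily_cheaper(new_cost, old_cost, fuzz=STD_FUZZ_FACTOR):
    """A new path displaces an existing one only if it wins by more than the fuzz."""
    return new_cost * fuzz < old_cost

# HashAggregate (12044.23) vs Unique (12049.20): within 1%, so the new path
# is not considered cheaper and the already-kept path survives.
print(fuzzily_cheaper(12044.23, 12049.20))  # False
print(fuzzily_cheaper(10000.0, 12049.20))   # True
```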

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: ALTER ROLE SET search_path produced by pg_dumpall gives : ERROR: syntax error at or near "$" .

2018-08-01 Thread Achilleas Mantzios

On 30/07/2018 17:00, Tom Lane wrote:

Achilleas Mantzios  writes:

On 30/07/2018 16:51, Tom Lane wrote:

Hm, that's not happening for me:

You (and Adrian) are right. This is due to our own tweaking (which I had 
forgotten). Sorry for the false alarm.

It looks like your hack was to work around the bug that was fixed
properly in 742869946.  You should be able to undo that now ...

Hello Tom,
On pgsql 10.4 , I reverted (undid) this patch/hack :

root@smadev:/usr/local/src/postgresql-10.4# diff -u 
./src/backend/utils/misc/guc.c ./src/backend/utils/misc/guc.c.hack
--- ./src/backend/utils/misc/guc.c  2018-08-01 16:22:30.901629000 +0300
+++ ./src/backend/utils/misc/guc.c.hack 2018-08-01 15:45:15.893348000 +0300
@@ -3266,7 +3266,7 @@
    {"search_path", PGC_USERSET, CLIENT_CONN_STATEMENT,
    gettext_noop("Sets the schema search order for names that 
are not schema-qualified."),
    NULL,
-   GUC_LIST_INPUT | GUC_LIST_QUOTE
+   GUC_LIST_INPUT | GUC_REPORT
    },
    _search_path,
    "\"$user\", public",

And (with stock 10.4, i.e. without GUC_REPORT) I tested with pgbouncer again in transaction 
mode, and still the pgbouncer client does not get the search path from the server and does 
not store it after the first server connection; therefore, after the server disconnects, the 
second time the client connects it does not set the search_path.

I tested with :
GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_REPORT
as well, but then I get the re-quote problem I was telling about :
At first connection the search path is set correctly : 
bdynacom,epaybdynacom,epay,"$user", public
but after server disconnection and re-connection the search path becomes : 
"bdynacom,epaybdynacom,epay,""$user"", public"
which is wrong.
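The symptom above can be reproduced with a toy model of GUC list quoting (an illustration, not pgbouncer's actual code):

```python
# Toy reproduction of the re-quoting symptom: quoting an already-correct GUC
# list value a second time doubles the embedded quotes. Not pgbouncer's code.
def quote_guc_element(value):
    # SQL-style quoting: wrap in double quotes, doubling any embedded ones
    return '"' + value.replace('"', '""') + '"'

first = 'bdynacom,epaybdynacom,epay,"$user", public'   # correct search_path
second = quote_guc_element(first)                      # re-quoted on reconnect
print(second)  # "bdynacom,epaybdynacom,epay,""$user"", public"
```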

So, in order to retain all the benefits of transaction mode in pgbouncer, I had 
to re-apply the first patch/hack, at the expense of the nuisance with pg_dumpall 
and search paths.




regards, tom lane



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Template0 datfrozenxid age is 160million and progressing

2018-08-01 Thread David Rowley
On 2 August 2018 at 02:31, Rijo Roy  wrote:
> I agree.. But unfortunately it is the business call and we cannot alter it..
> I am planning to convince them by keeping autovacuum_analyze_threshold to a
> high value so that auto analyse will not kick in very often leaving the
> autovacuum to do its job..
> Please advise.

If autovacuum is disabled, it'll still kick in for any anti-wraparound
work that needs to be performed.

This is also mentioned in the docs:

"Note that even when this parameter is disabled, the system will
launch autovacuum processes if necessary to prevent transaction ID
wraparound. See Section 24.1.5 for more information."

https://www.postgresql.org/docs/10/static/runtime-config-autovacuum.html
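If the goal is only to make auto-analyze rarer while leaving autovacuum enabled, the per-table override proposed upthread might be sketched as (table name and threshold are placeholders):

```sql
ALTER TABLE some_big_table SET (autovacuum_analyze_threshold = 1000000);
```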

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Weird behaviour of the planner

2018-08-01 Thread Guillaume Lelarge
Hi,

While working on a slow query of one of our customers, I found this
behaviour which seems quite weird to me. Here it is:

(no changes on any parameter)
EXPLAIN (ANALYZE) SELECT DISTINCT * FROM gleu2;

   QUERY PLAN
-
 Unique  (cost=12005.97..12049.20 rows=1 width=1430) (actual
time=20055.294..20323.348 rows=1 loops=1)
   ->  Sort  (cost=12005.97..12006.30 rows=132 width=1430) (actual
time=20055.290..20105.738 rows=6 loops=1)
 Sort Key: (... 130 columns ...)
 Sort Method: external sort  Disk: 84464kB
 ->  Seq Scan on gleu2  (cost=0.00..12001.32 rows=132 width=1430)
(actual time=0.109..114.142 rows=6 loops=1)
 Planning time: 10.012 ms
 Execution time: 20337.854 ms
(7 rows)

That looks quite good. The total cost is 12049, so I expect this plan to
have the smaller cost as it's the chosen plan. Now, I'm disabling Sort,
and here is what I get:

SET enable_sort TO off;
EXPLAIN (ANALYZE) SELECT DISTINCT * FROM gleu2;

   QUERY PLAN
-
 HashAggregate  (cost=12044.22..12044.23 rows=1 width=1430) (actual
time=508.342..508.343 rows=1 loops=1)
   Group Key: (... 130 columns ...)
   ->  Seq Scan on gleu2  (cost=0.00..12001.32 rows=132 width=1430) (actual
time=0.036..57.088 rows=6 loops=1)
 Planning time: 1.335 ms
 Execution time: 508.529 ms
(5 rows)

I now have a Hash Aggregate. Sounds good, but it has a smaller total
cost than the former plan. How is it possible that the planner chooses a
plan with a higher cost? (And a much, much higher duration, but it
obviously can't know that.)

Still working on my customer's issue, I found that it worked great before
commit 3fc6e2d7f5b652b417fa6937c34de2438d60fa9f (Make the upper part of the
planner work by generating and comparing Paths.) and not after. That kind
of patch is way out of my league to be honest :) But before that commit,
the Hash Aggregate is chosen right out of the box. And after that commit,
the Sort is chosen even with its higher cost.

Oh, BTW, yeah, I know. The row estimate is quite off... I'm not asking how
to make my query go faster, I'm just wondering why PostgreSQL chooses a
higher-cost plan :)

Thanks.


-- 
Guillaume.


Re: Settings for fast restores

2018-08-01 Thread Ron

On 08/01/2018 09:11 AM, Vick Khera wrote:
On Wed, Aug 1, 2018 at 2:03 AM, Ron wrote:


Hi,

http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html


shared_buffers = 1/2 of what you'd usually set
maintenance_work_mem = 1GB-2GB
wal_level = minimal
full_page_writes = off
wal_buffers = 64MB
checkpoint_segments = 256 or higher
max_wal_senders = 0
wal_keep_segments = 0

How many of these 4-year-old setting recommendations are still valid
for 9.6?


They all look still valid to me. I personally also set fsync=off since I 
can always start over if the machine crashes and corrupts the data.


Right.  I didn't mention these, because they seem version-agnostic:

fsync = off
synchronous_commit = off
archive_mode = off
autovacuum = off
all activity logging settings disabled
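Alongside those server-side settings, restore-side parallelism usually matters as much; a hedged invocation for a custom-format dump (job count and file name are assumptions):

```
pg_restore --jobs=8 --dbname=mydb /path/to/dump.custom
```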

--
Angular momentum makes the world go 'round.


Re: Template0 datfrozenxid age is 160million and progressing

2018-08-01 Thread Rijo Roy
I agree, but unfortunately it is a business call and we cannot alter it. I 
am planning to convince them by setting autovacuum_analyze_threshold to a high 
value so that auto-analyze will not kick in very often, leaving autovacuum 
to do its job. Please advise. 
Thanks, Rijo Roy 
Sent from Yahoo Mail on Android 
 
On Wed, 1 Aug 2018 at 7:54 pm, Tom Lane wrote:

Rijo Roy  writes:
> Thanks for the advice. Autovacuum is disabled in the environment and I cannot 
> take a call on enabling it back. I can only run manual vacuum on the database 
> as the dev team fears autoanalyze changing the query performance. Do you 
> still think we don't need to take any actions. 

IMO, the action you need to take is enabling autovacuum.  We've
seen many many people go down the path you are taking, and it's
generally led to no good in the end.  Manual vacuuming tends
to miss stuff, and it cannot react adequately to activity spikes.

            regards, tom lane  


Re: Template0 datfrozenxid age is 160million and progressing

2018-08-01 Thread Tom Lane
Rijo Roy  writes:
> Thanks for the advice. Autovacuum is disabled in the environment and I cannot 
> take a call on enabling it back. I can only run manual vacuum on the database 
> as the dev team fears autoanalyze changing the query performance. Do you 
> still think we don't need to take any actions. 

IMO, the action you need to take is enabling autovacuum.  We've
seen many many people go down the path you are taking, and it's
generally led to no good in the end.  Manual vacuuming tends
to miss stuff, and it cannot react adequately to activity spikes.

regards, tom lane



Re: Settings for fast restores

2018-08-01 Thread Vick Khera
On Wed, Aug 1, 2018 at 2:03 AM, Ron  wrote:

> Hi,
>
> http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html
>
> shared_buffers = 1/2 of what you'd usually set
> maintenance_work_mem = 1GB-2GB
> wal_level = minimal
> full_page_writes = off
> wal_buffers = 64MB
> checkpoint_segments = 256 or higher
> max_wal_senders = 0
> wal_keep_segments = 0
>
> How many of these 4-year-old setting recommendations are still valid for
> 9.6?
>
>

They all look still valid to me. I personally also set fsync=off since I
can always start over if the machine crashes and corrupts the data.
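For what it's worth, the biggest single win on restore time is usually a
parallel restore of a custom-format dump. A sketch of the whole procedure,
where the paths, database name, and job count are hypothetical (note that
wal_level = minimal on 9.6 also requires max_wal_senders = 0):

```shell
# Temporary overrides in postgresql.conf during the restore window:
#   maintenance_work_mem = 2GB
#   wal_level = minimal
#   max_wal_senders = 0
#   fsync = off        # only if you can re-run the whole restore after a crash
# Then restore a custom-format dump with several parallel jobs:
pg_restore --jobs=8 --dbname=mydb /path/to/mydb.dump
```

Remember to revert the overrides (and reload or restart as needed) once the
restore completes.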


Re: Template0 datfrozenxid age is 160million and progressing

2018-08-01 Thread Tom Lane
Rijo Roy  writes:
> I am observing a steady increase in age(datfrozenxid) of the template0 database 
> in my PostgreSQL 10 running on RHEL 6.9. Currently it is at 166846989, and I 
> am not able to vacuum it because datallowconn is false. I thought of setting 
> it to true and performing a vacuum freeze on it. Before doing that, I 
> wanted to check whether I should be worried about something here, or can I just 
> go ahead with my plan. 

I would leave well enough alone.  Unless you've changed the default
values of autovacuum parameters, that behavior is perfectly normal
and not worrisome.  autovacuum should kick in and do something
about it at 200 million xacts (autovacuum_freeze_max_age).

regards, tom lane
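One standard way to keep an eye on this is a quick catalog query (nothing
beyond stock PostgreSQL is assumed here):

```sql
-- Transaction-ID age per database; autovacuum forces a freeze pass
-- once this approaches autovacuum_freeze_max_age (200 million by default).
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```

As long as the largest xid_age stays below autovacuum_freeze_max_age and then
drops back after autovacuum runs, the behavior described above is normal.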



Re: Adding terminal title support for psqlrc

2018-08-01 Thread Tom Lane
ik  writes:
> Is there a way to add terminal title information for psqlrc that cn display
> what schema I am, operation that is execute and stuff like that?

Most terminal programs recognize escape sequences to set the window
title.  So you could set values for PROMPTn that cause updates in
the title.  There's an example in

https://www.postgresql.org/docs/devel/static/app-psql.html#APP-PSQL-PROMPTING

although it's just talking about a color change.

regards, tom lane
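As a concrete sketch for an xterm-compatible terminal (the exact title text is
up to you), the non-printing escapes go inside %[ ... %] so psql does not
miscount the prompt width:

```text
-- in ~/.psqlrc: %033 is ESC and %007 is BEL, which together form the
-- xterm "set window title" sequence; %n and %/ are user and database.
\set PROMPT1 '%[%033]0;psql %n@%/%007%]%n@%/%R%# '
```

With this in place, the terminal title tracks the connected user and database
every time the prompt is redrawn.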



Re: Adding terminal title support for psqlrc

2018-08-01 Thread Adrian Klaver

On 08/01/2018 01:52 AM, ik wrote:

Hello,

Is there a way to add terminal title information for psqlrc that can 
display what schema I am in, the operation being executed, and so on?


Are you referring to the title bar for something like an Konsole?

If so then no.

If not then what title are you referring to?



Thank you




--
Adrian Klaver
adrian.kla...@aklaver.com



Template0 datfrozenxid age is 160million and progressing

2018-08-01 Thread Rijo Roy
Hello Everyone, 
I am observing a steady increase in age(datfrozenxid) of the template0 database in 
my PostgreSQL 10 running on RHEL 6.9. Currently it is at 166846989, and I am 
not able to vacuum it because datallowconn is false. I thought of setting it to 
true and performing a vacuum freeze on it. Before doing that, I wanted to 
check whether I should be worried about something here, or can I just go ahead 
with my plan. 
Thanks, Rijo Roy 

Sent from Yahoo Mail on Android

Adding terminal title support for psqlrc

2018-08-01 Thread ik
Hello,

Is there a way to add terminal title information for psqlrc that can display
what schema I am in, the operation being executed, and so on?

Thank you


Re: Question on postgresql.conf

2018-08-01 Thread Steven Lembark
On Tue, 31 Jul 2018 00:02:47 -0500 (PET)
Alvaro Aguayo Garcia-Rada  wrote:

> Hi.
> 
> As far as I know, it's not currently possible. Maybe by recompiling, but
> that would not be the best for a production environment. I suppose this
> is this way because postgres is designed to expect a certain folder
> structure for its data folder, and configuration files are
> considered part of it.
> 
> As you may see in the documentation:
> 
> https://www.postgresql.org/docs/9.6/static/app-postgres.html
> 
> You can change the data folder (-D command line option), but, no
> matter what the data folder is, it MUST have a postgresql.conf, as
> well as pg_hba.conf.
> 
> However, some distros have taken a similar approach to what you may be
> looking to do. I don't remember which one, but there's a distro whose
> standard postgresql server packages store configuration files
> at /etc/postgresql, while storing data at /var/lib/postgresql. This
> is done by simply making a symlink, like "ln
> -s /mnt/shared/postgres/server1.conf /var/lib/postgres/data/postgresql.conf".
> The same applies to other configuration files.

Normally the config files live on the server. If you want to 
hack the basename I'd suggest using symlinks for each machine.
Set up whatever .conf. files you like, then symlink them 
into /etc/postgresql (or whatever you use on Windows):

e.g., 

#!/usr/bin/env bash

# Abort unless HOST_CONF_DIR is set and names a readable directory.
[ -n "$HOST_CONF_DIR" ] || exit 1;
[ -d "$HOST_CONF_DIR" ] || exit 2;
[ -r "$HOST_CONF_DIR" ] || exit 3;

suffix=".$(hostname)";

cd /etc/postgresql;

# Link each per-host file into place, stripping the ".hostname" suffix.
for i in "$HOST_CONF_DIR"/*;
do
    ln -fsv "$i" ./$(basename "$i" "$suffix");
done

echo 'Config dir:';
ls -al $PWD;

exit 0;

Result: You have the standard paths where PG expects them, and 
ls -al (or readlink) will tell you which host they were generated
for.

-- 
Steven Lembark   1505 National Ave
Workhorse Computing Rockford, IL 61103
lemb...@wrkhors.com+1 888 359 3508



Settings for fast restores

2018-08-01 Thread Ron

Hi,

http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html

shared_buffers = 1/2 of what you'd usually set
maintenance_work_mem = 1GB-2GB
wal_level = minimal
full_page_writes = off
wal_buffers = 64MB
checkpoint_segments = 256 or higher
max_wal_senders = 0
wal_keep_segments = 0

How many of these 4-year-old setting recommendations are still valid for 9.6?

Thanks

--
Angular momentum makes the world go 'round.