Re: Highly academic: local etcd & Patroni Cluster for testing on a single host

2020-02-25 Thread Paul Förster
Hi Ian,

> On 26. Feb, 2020, at 01:38, Ian Barwick  wrote:
> 
> Assuming the standby/replica is created using pg_basebackup, you can use the
> -T/--tablespace-mapping option to remap the tablespace directories.

no, with Patroni, replicas are always initiated by Patroni. Patroni copies the 
whole PGDATA including everything (postgresql.conf, etc.) in it to the replica 
site. When launching Patroni for the first time, all you need is its yaml 
configuration file and an empty PGDATA. It will then copy the whole master's 
PGDATA as is, launch the replica database cluster, and start replication.

Even if Patroni uses pg_basebackup internally (which I assume it does), there 
is no way to pass parameters to it.

Then you can stop the Patroni process on the replica site which in turn takes 
the replica database cluster down, make some configuration changes and launch 
it again. You can, of course, only change things which don't get 
replicated all over again or are managed by Patroni itself. This is how I set 
up individual archive destinations for each replication member, because the 
initial archive destination is of course replicated, and thus identical, when 
Patroni builds the replica database cluster.
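
For illustration, one way such a per-node override could be expressed is a
fragment like the following in each node's own Patroni YAML (assuming
Patroni's postgresql.parameters section; the archive path is illustrative):

```yaml
# node 'b' patroni.yml -- node-local settings, not replicated from 'a'
postgresql:
  parameters:
    archive_mode: "on"
    # each node archives to its own directory
    archive_command: "cp %p /data/pg01b/archive/%f"
```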

Tablespace mapping just creates the links to the directories in 
${PGDATA}/pg_tblspc to a different location. And since pg_basebackup isn't 
used, there is no way to do that. But I can do that by hand. That is not the 
problem.

The problem is that PostgreSQL keeps the tablespace location inside the 
database and not in some config file. If the latter were the case, then I 
could just as well set it individually per node, as I can with the archive 
destination.

So, the tablespace location is always /data/pg01a/ts, even on the replica site 
where it should be /data/pg01b/ts. Hence, on my local cluster, the replica site 
'b' always uses the tablespace directory (and thus the files) of the master 'a'.

Cheers,
Paul



Re: Backup & Restore

2020-02-25 Thread Thomas Kellerer
sivapostg...@yahoo.com wrote on 25.02.2020 at 02:55:
> Can u suggest a good backup solution for a windows installation ?
> Looks like the suggested two [ pgbarman, pgbackrest ] works only in
> Linux.
pg_probackup provides Windows binaries: 
https://github.com/postgrespro/pg_probackup/






Re: Backup & Restore

2020-02-25 Thread sivapostg...@yahoo.com
 We do have plans to move to Linux in the future, after the successful 
implementation of at least 4 or 5 projects. Till then we want to keep Windows. 
We were (are) using SQL Server (also), and this is our first project with 
Postgres. With our manpower, we find it tough to switch two things (database & 
OS) at a time.
We'll be using either pg_basebackup or pg_dump, as suitable, till we find a 
good backup solution.

On Tuesday, 25 February, 2020, 07:24:00 pm IST, Stephen Frost 
 wrote:  
 
 Greetings,

* sivapostg...@yahoo.com (sivapostg...@yahoo.com) wrote:
>  HiCan u suggest a good backup solution for a windows installation ?  Looks 
>like the suggested two [ pgbarman, pgbackrest ] works only in Linux.

While it's certainly something we'd like to do, we haven't ported
pgbackrest to Windows yet.  That said, it's now entirely written in
reasonably portable C and so it shouldn't be too much effort to port it.

Until that's done though, and I can't say exactly when that port will
happen, your best option is probably pg_basebackup.
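
For example, a base backup from a Windows host could be taken along these
lines (host name, user, and target directory are placeholders):

```shell
pg_basebackup -h db.example.com -U backup_user ^
  -D "C:\pg_backups\base" -Ft -z -Xs -P
```

-Ft -z writes compressed tar files, -Xs streams the WAL needed for a
consistent restore, and -P shows progress.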

Of course, I'd strongly recommend you consider running PG on Linux
instead, particularly for a production environment.

Thanks,

Stephen  

Re: Backup & Restore

2020-02-25 Thread Adrian Ho
On 24/2/20 4:18 pm, Dor Ben Dov wrote:
>
> Hi All,
>
>  
>
> What is your backup and restore solution in production when working
> with Postgres ?
>
> (+ if you can say few words why you picked this X solution instead of
> others)
>
This is the THIRD time you've asked the same question with minimal
rephrasing, and without clarifying details:

https://www.postgresql.org/message-id/AM0PR06MB4817A7035134FD88B2C1D033CC550%40AM0PR06MB4817.eurprd06.prod.outlook.com

https://www.postgresql.org/message-id/AM0PR06MB4817DD9C64FE02410478DF65CC280%40AM0PR06MB4817.eurprd06.prod.outlook.com

If you weren't satisfied with the answers you got the first two times,
kindly be clear about what you're really looking for. Thanks much!

-- 
Best Regards,
Adrian



Re: a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]

2020-02-25 Thread Pavel Stehule
On Tue, 25. 2. 2020 at 22:14, Tom Lane wrote:

> Paul Jungwirth  writes:
> > Not that this is necessarily fatal, but you'd need to avoid parsing
> > trouble with the other EXCEPT, e.g.
> > SELECT 1 EXCEPT SELECT 1;
>
> Yeah, it doesn't sound like much consideration has been given to
> that ambiguity, but it's a big problem if you want to use a syntax
> like this.
>
> > Google Big Query was mentioned upthread. I see they require parens, e.g.
> > SELECT ... EXCEPT (...). I don't think that actually fixes the ambiguity
> > though.
>
> Indeed it doesn't, because you can parenthesize an EXCEPT's sub-queries:
>
> regression=# select 1 except (select 2);
>  ?column?
> --
> 1
> (1 row)
>
> In principle, once you got to the SELECT keyword you could tell things
> apart, but I'm afraid that might be too late for a Bison-based parser.
>
> > So it seems they require at least one `*` in the SELECT target list. In
> > fact the `*` must be the very last thing. Personally I think it should
> > be as general as possible and work even without a `*` (let alone caring
> > about its position).
>
> I wonder if they aren't thinking of the EXCEPT as annotating the '*'
> rather than the whole SELECT list.  That seems potentially more flexible,
> not less so.  Consider
>
> SELECT t1.* EXCEPT (foo, bar), t2.* EXCEPT (baz) ... FROM t1, t2, ...
>
> This doesn't have any problem with ambiguity if t2 has a "foo" column,
> or if t1 has a "baz" column; which indeed would be cases where this
> sort of ability would be pretty useful, since otherwise you end up
> with painful-to-rename duplicate output column names.  And certainly
> there is no particular need for this construct if you didn't write
> a "*".
>

This proposal looks good.

Pavel


> regards, tom lane
>
>
>


Re: LDAP with TLS is taking more time in Postgresql 11.5

2020-02-25 Thread Thomas Munro
On Wed, Feb 26, 2020 at 7:37 AM Adrian Klaver  wrote:
> On 2/25/20 10:23 AM, Mani Sankar wrote:
> > Hi Adrian,
> >
> > Both the machines are in same network and both are pointing towards the
> > same LDAP server
>
> I don't see any errors in the Postgres logs.
>
> You probably should take a look at the LDAP server logs to see if there
> is anything there.
>
> You could also turn up the logging detail in Postgres to see if it
> reveals anything.

A couple more ideas:

If you take PostgreSQL out of the picture and run the equivalent LDAP
queries with the ldapsearch command line tool, do you see the same
difference in response time?  If so, I'd trace that with strace etc
with timings to see where the time is spent -- for example, is it
simply waiting for a response from the LDAP (AD?) server?   If not,
I'd try tracing the PostgreSQL process and looking at the system calls
(strace -tt -T for high res times and elapsed times), perhaps using
PostgreSQL's pre_auth_delay setting to get time to attach strace.
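
Concretely, the comparison could look something like this (server, bind DN,
base DN, and filter are placeholders; -ZZ requests STARTTLS to match
ldaptls=1):

```shell
# time the equivalent lookup outside PostgreSQL
time ldapsearch -H ldap://ldap.example.com:3268 -ZZ \
    -D "ADS\someuser" -W -b "dc=example,dc=com" "(sAMAccountName=someuser)"

# then trace a connecting backend; pre_auth_delay buys time to attach
strace -tt -T -p <backend_pid>
```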

A wild stab in the dark: if it's slow from one computer and not from
another, perhaps the problem has something to do with a variation in
reverse DNS lookup speed on the LDAP server side when it's verifying
the certificate.  Or something like that.




Re: Highly academic: local etcd & Patroni Cluster for testing on a single host

2020-02-25 Thread Ian Barwick

On 2020/02/26 0:41, Paul Förster wrote:

> Hi,
> 
> I have set up an etcd & Patroni cluster on a single machine for testing 
> purposes as follows:
> 
> /data/pg01a/db as data directory for the first "node"
> /data/pg01b/db as data directory for the second "node"
> 
> I have set up Patroni to make each PostgreSQL database cluster archive to its 
> own destination, so they won't interfere with each other. All is well and 
> working fine so far. Failover/Switchover works, they are syncing properly, etc.
> 
> Now:
> 
> Inside /data/pg01a and /data/pg01b I also have a directory ts, i.e. 
> /data/pg01a/ts and /data/pg01b/ts, to simulate different filesystems.
> 
> If I do a 'create tablespace' and specify its location, I must of course 
> specify either /data/pg01a/ts or /data/pg01b/ts.
> 
> Files with tables get created as usual, but since it's a replication (sync, by 
> the way), the other tablespace directory will not hold any files; instead, 
> the replica uses the same files as the master.
> 
> Is there any way around that, i.e. make the replica use its own files? I found 
> a way to make each archive destination individual despite Patroni, but I can't 
> seem to find a way to do something similar for the tablespaces. Even correcting 
> the symlinks in ${PGDATA}/pg_tblspc manually (and then restarting) does not work.
> 
> The explanation is simple: the location is stored inside the database cluster 
> information and is by definition the same across all nodes. So, if 'a' is 
> master and I do "create tablespace test location '/data/pg01a/ts'", it creates 
> files there, with the replica 'b' using the *same* destination.
> 
> Still, is there a way to make each node store that information individually? I 
> know this is highly academic, but I only have a single host to do etcd & 
> Patroni cluster experiments on.


Assuming the standby/replica is created using pg_basebackup, you can use the
-T/--tablespace-mapping option to remap the tablespace directories.
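
For example, using the paths from the setup described above (connection
parameters are placeholders):

```shell
pg_basebackup -h localhost -p 5432 -U replicator \
    -D /data/pg01b/db \
    -T /data/pg01a/ts=/data/pg01b/ts \
    -X stream -R
```

This recreates the symlink in pg_tblspc on the standby pointing at
/data/pg01b/ts instead of the primary's directory.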


Regards

Ian Barwick

--
Ian Barwick   https://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]

2020-02-25 Thread Rob Sargent




> On Feb 25, 2020, at 2:14 PM, Tom Lane  wrote:
> 
> Paul Jungwirth  writes:
>> Not that this is necessarily fatal, but you'd need to avoid parsing 
>> trouble with the other EXCEPT, e.g.
>> SELECT 1 EXCEPT SELECT 1;
> 
> Yeah, it doesn't sound like much consideration has been given to
> that ambiguity, but it's a big problem if you want to use a syntax
> like this.
> 
>> Google Big Query was mentioned upthread. I see they require parens, e.g. 
>> SELECT ... EXCEPT (...). I don't think that actually fixes the ambiguity 
>> though.
> 
> Indeed it doesn't, because you can parenthesize an EXCEPT's sub-queries:
> 
> regression=# select 1 except (select 2);
> ?column? 
> --
>1
> (1 row)
> 
> In principle, once you got to the SELECT keyword you could tell things
> apart, but I'm afraid that might be too late for a Bison-based parser.
> 
>> So it seems they require at least one `*` in the SELECT target list. In 
>> fact the `*` must be the very last thing. Personally I think it should 
>> be as general as possible and work even without a `*` (let alone caring 
>> about its position).
> 
> I wonder if they aren't thinking of the EXCEPT as annotating the '*'
> rather than the whole SELECT list.  That seems potentially more flexible,
> not less so.  Consider
> 
> SELECT t1.* EXCEPT (foo, bar), t2.* EXCEPT (baz) ... FROM t1, t2, ...
> 
> This doesn't have any problem with ambiguity if t2 has a "foo" column,
> or if t1 has a "baz" column; which indeed would be cases where this
> sort of ability would be pretty useful, since otherwise you end up
> with painful-to-rename duplicate output column names.  And certainly
> there is no particular need for this construct if you didn't write
> a "*".
> 
>regards, tom lane
> 

OMIT rather than EXCEPT?

> 




Re: a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]

2020-02-25 Thread Tom Lane
Paul Jungwirth  writes:
> Not that this is necessarily fatal, but you'd need to avoid parsing 
> trouble with the other EXCEPT, e.g.
> SELECT 1 EXCEPT SELECT 1;

Yeah, it doesn't sound like much consideration has been given to
that ambiguity, but it's a big problem if you want to use a syntax
like this.

> Google Big Query was mentioned upthread. I see they require parens, e.g. 
> SELECT ... EXCEPT (...). I don't think that actually fixes the ambiguity 
> though.

Indeed it doesn't, because you can parenthesize an EXCEPT's sub-queries:

regression=# select 1 except (select 2);
 ?column? 
--
1
(1 row)

In principle, once you got to the SELECT keyword you could tell things
apart, but I'm afraid that might be too late for a Bison-based parser.

> So it seems they require at least one `*` in the SELECT target list. In 
> fact the `*` must be the very last thing. Personally I think it should 
> be as general as possible and work even without a `*` (let alone caring 
> about its position).

I wonder if they aren't thinking of the EXCEPT as annotating the '*'
rather than the whole SELECT list.  That seems potentially more flexible,
not less so.  Consider

SELECT t1.* EXCEPT (foo, bar), t2.* EXCEPT (baz) ... FROM t1, t2, ...

This doesn't have any problem with ambiguity if t2 has a "foo" column,
or if t1 has a "baz" column; which indeed would be cases where this
sort of ability would be pretty useful, since otherwise you end up
with painful-to-rename duplicate output column names.  And certainly
there is no particular need for this construct if you didn't write
a "*".

regards, tom lane




Re: a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]

2020-02-25 Thread Paul Jungwirth

On 2/25/20 11:46 AM, Stanislav Motycka wrote:



> On 25. 2. 2020 at 20:28, Paul A Jungwirth wrote:
>> I take the proposal to mean this:
>>
>> SELECT listOfColumns [EXCEPT listOfColumns] FROM ...
> Exactly, simply exclude unneeded columns from the base clause "SELECT", 
> nothing more ..


Not that this is necessarily fatal, but you'd need to avoid parsing 
trouble with the other EXCEPT, e.g.


SELECT 1 EXCEPT SELECT 1;

Believe it or not these are valid SQL:

SELECT;
SELECT EXCEPT SELECT;

This fails today but only because of the different number of columns:

SELECT 1 AS SELECT EXCEPT SELECT;

So the parser understands it as selectQuery EXCEPT selectQuery, but you 
can see how it could also be parsable as this new structure. So the 
parser would have to decide which is meant (if that's even possible at 
that early stage).


I guess as soon as you exclude two columns it is unambiguous though 
because of this comma: SELECT ... EXCEPT SELECT,  And anyway I think 
for such a pathological case you could just tell people to add double 
quotes.


Google Big Query was mentioned upthread. I see they require parens, e.g. 
SELECT ... EXCEPT (...). I don't think that actually fixes the ambiguity 
though. Also a few other notes (after very brief testing):


SELECT * EXCEPT (foo) FROM t;   -- works
SELECT * EXCEPT (foo, bar) FROM t;  -- works
SELECT t.* EXCEPT (foo) FROM t; -- works
SELECT * EXCEPT foo FROM t; -- fails
SELECT foo, bar EXCEPT (foo) FROM t;-- fails
SELECT t1.foo, t2.* EXCEPT (foo) FROM t1 JOIN t2 ON ...; -- works
SELECT t2.*, t1.foo EXCEPT (foo) FROM t1 JOIN t2 ON ...; -- fails!

So it seems they require at least one `*` in the SELECT target list. In 
fact the `*` must be the very last thing. Personally I think it should 
be as general as possible and work even without a `*` (let alone caring 
about its position).


Regards,

--
Paul  ~{:-)
p...@illuminatedcomputing.com




Re: Connections dropping while using Postgres backend DB with Ejabberd

2020-02-25 Thread Dipanjan Ganguly
Hi Justin,
I have already checked running Postgres processes and strangely never
counted more than 20.

 I'll check as you recommend on how ejabberd to postgresql connectivity
works. May be the answer lies there. Will get back if I find something.

Thanks for giving some direction to my thoughts.

Good talk. 

BR,
Dipanjan


On Wed 26 Feb, 2020 1:05 am Justin,  wrote:

> Hi Dipanjan
>
> If the connections are not being closed and left open ,  you should see
> 50,000 processes running on the server because postgresql creates/forks a
> new process for each connection
>
> Just having that many processes running will  exhaust resources,  I would
> confirm that the process are still running.
> you can use the command
>
> ps aux |wc -l
>
> to get a count on the number of processes
> Beyond just opening the connection are there any actions such as Select *
> from sometable being fired off to measure performance?
>
> Attempting to open and leave 50K connections open should exhaust the
> server resources long before reaching 50K
>
> Something is off here I would be looking into how this test actually
> works, how the connections are opened, and commands it sends to Postgresql
>
>
>
> On Tue, Feb 25, 2020 at 2:12 PM Dipanjan Ganguly 
> wrote:
>
>> Hi Justin,
>>
>> Thanks for your insight.
>>
>> I agree with you completely, but as mentioned in my previous email, the
>> fact that Postgres server resource utilization is less *"( Avg load 1,
>> Highest Cpu utilization 26%, lowest freemem  9000)*" and it recovers at
>> a certain point, then consistently reaches close to 50k, is what is confusing
>> me..
>>
>> Legends from the Tsung report:
>> users
>> Number of simultaneous users (it's session has started, but not yet
>> finished).connectednumber of users with an opened TCP/UDP connection
>> (example: for HTTP, during a think time, the TCP connection can be closed
>> by the server, and it won't be reopened until the thinktime has expired)
>> I have also used pgcluu to monitor the events. Sharing the stats 
>> below..*Memory
>> information*
>>
>>- 15.29 GB Total memory
>>- 8.79 GB Free memory
>>- 31.70 MB Buffers
>>- 5.63 GB Cached
>>- 953.12 MB Total swap
>>- 953.12 MB Free swap
>>- 13.30 MB Page Tables
>>- 3.19 GB Shared memory
>>
>> Any thoughts??!!  🤔🤔
>>
>> Thanks,
>> Dipanjan
>>
>>
>> On Tue, Feb 25, 2020 at 10:31 PM Justin  wrote:
>>
>>> Hi Dipanjan
>>>
>>> Please do not post to all the postgresql mailing list lets keep this on
>>> one list at a time,  Keep this on general list
>>>
>>> Am i reading this correctly 10,000 to 50,000 open connections.
>>> Postgresql really is not meant to serve that many open connections.
>>> Due to design of Postgresql  each client connection can use up to the
>>> work_mem of 256MB plus  additional for parallel processes.  Memory will be
>>> exhausted long before 50,000 connections is reached
>>>
>>> I'm not surprised Postgresql and the server is showing issues long
>>> before 10K connections is reached.  The OS is probably throwing everything
>>> to the swap file and see connections dropped or time out.
>>>
>>> Should be using a connection pooler  to service this kind of load so the
>>> Postgresql does not exhaust resources just from the open connections.
>>> https://www.pgbouncer.org/
>>>
>>>
>>> On Tue, Feb 25, 2020 at 11:29 AM Dipanjan Ganguly 
>>> wrote:
>>>
 Greetings,

 I was trying to use postgresql database as a backend with Ejabberd XMPP
 server for load test (Using TSUNG).

 Noticed, while using Mnesia the  “simultaneous users and open TCP/UDP
 connections”  graph in Tsung report is showing consistency, but while using
 Postgres, we see drop in connections during 100 to 500 seconds of runtime,
 and then recovering and staying consistent.

 I have been trying to figure out what the issue could be without any
 success. I am kind of a noob in this technology, and hoping for some help
 from the good people from the community to understand the problem and how
 to fix this. Below are some details..

 · Postgres server utilization is low ( Avg load 1, Highest Cpu
 utilization 26%, lowest freemem  9000)



 Tsung  graph:
 [image: image.png]
Graph 1: Postgres 12 Backend
 [image: image.png]

   Graph 2: Mnesia backend


 · Ejabberd Server: Ubuntu 16.04, 16 GB ram, 4 core CPU.

 · Postgres on remote server: same config

 · Errors encountered during the same time:
  error_connect_etimedout (same outcome for other 2 tests)

 · *Tsung Load:  *512 Bytes message size, user arrival rate
 50/s, 80k registered users.

 · Postgres server utilization is low ( Avg load 1, Highest Cpu
 utilization 26%, lowest freemem  9000)

 · Same tsung.xml and userlist used for the tests in Mnesia and Postgres.

Re: a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]

2020-02-25 Thread Alvaro Herrera
On 2020-Feb-25, Stanislav Motyčka wrote:

> Sometimes (for tables with many columns) it would be better and easier
> to write "SELECT" statement with clause "EXCEPT":
> "SELECT * [EXCEPT col1 [,col2]] FROM ..."

I think an important initial question is how do other database systems
implement this functionality, if they do, and what syntax do they offer.
From there we can move on to the conversation of where the ISO SQL
committee is going about this.

I think it's good to extend the standard to some extent, but it would
not do to have it extended in a direction that ends up contrary to what
they pursue in the future.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]

2020-02-25 Thread Stanislav Motycka



On 25. 2. 2020 at 20:28, Paul A Jungwirth wrote:

> I take the proposal to mean this:
>
> SELECT listOfColumns [EXCEPT listOfColumns] FROM ...

Exactly, simply exclude unneeded columns from the base clause "SELECT", 
nothing more ..




Re: a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]

2020-02-25 Thread Guyren Howe


> On Feb 25, 2020, at 11:28 , Paul A Jungwirth  
> wrote:
> 
> On Tue, Feb 25, 2020 at 2:18 AM Stanislav Motyčka
> mailto:stanislav.moty...@gmail.com>> wrote:
>> Sometimes (for tables with many columns) it would be better and easier to 
>> write "SELECT" statement with clause "EXCEPT":
>> "SELECT * [EXCEPT col1 [,col2]] FROM …"

The single biggest benefit is that I could write many more views that don’t 
need to be changed when I change the underlying table.





Re: Connections dropping while using Postgres backend DB with Ejabberd

2020-02-25 Thread Justin
Hi Dipanjan

If the connections are not being closed and left open, you should see
50,000 processes running on the server, because PostgreSQL creates/forks a
new process for each connection.

Just having that many processes running will exhaust resources, so I would
confirm that the processes are still running.
You can use the command

ps aux | wc -l

to get a count of the number of processes.
Beyond just opening the connections, are there any actions, such as a SELECT *
FROM sometable, being fired off to measure performance?

Attempting to open and leave 50K connections open should exhaust the server
resources long before reaching 50K

Something is off here; I would look into how this test actually works,
how the connections are opened, and what commands it sends to PostgreSQL.
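
A connection pooler in front of PostgreSQL, as suggested earlier in the
thread, could be sketched like this (database name, addresses, and pool
sizes are assumptions):

```ini
; pgbouncer.ini -- many client connections, few server connections
[databases]
ejabberd = host=127.0.0.1 port=5432 dbname=ejabberd

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20      ; real backend connections
max_client_conn = 50000     ; clients multiplexed onto the pool
```

Ejabberd would then point its SQL configuration at port 6432 instead of 5432.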



On Tue, Feb 25, 2020 at 2:12 PM Dipanjan Ganguly 
wrote:

> Hi Justin,
>
> Thanks for your insight.
>
> I agree with you completely, but as mentioned in my previous email, the
> fact that Postgres server resource utilization is less *"( Avg load 1,
> Highest Cpu utilization 26%, lowest freemem  9000)*" and it recovers at a
> certain point, then consistently reaches close to 50k, is what is confusing
> me..
>
> Legends from the Tsung report:
> users
> Number of simultaneous users (it's session has started, but not yet
> finished).connectednumber of users with an opened TCP/UDP connection
> (example: for HTTP, during a think time, the TCP connection can be closed
> by the server, and it won't be reopened until the thinktime has expired)
> I have also used pgcluu to monitor the events. Sharing the stats 
> below..*Memory
> information*
>
>- 15.29 GB Total memory
>- 8.79 GB Free memory
>- 31.70 MB Buffers
>- 5.63 GB Cached
>- 953.12 MB Total swap
>- 953.12 MB Free swap
>- 13.30 MB Page Tables
>- 3.19 GB Shared memory
>
> Any thoughts??!!  🤔🤔
>
> Thanks,
> Dipanjan
>
>
> On Tue, Feb 25, 2020 at 10:31 PM Justin  wrote:
>
>> Hi Dipanjan
>>
>> Please do not post to all the postgresql mailing list lets keep this on
>> one list at a time,  Keep this on general list
>>
>> Am i reading this correctly 10,000 to 50,000 open connections.
>> Postgresql really is not meant to serve that many open connections.
>> Due to design of Postgresql  each client connection can use up to the
>> work_mem of 256MB plus  additional for parallel processes.  Memory will be
>> exhausted long before 50,000 connections is reached
>>
>> I'm not surprised Postgresql and the server is showing issues long before
>> 10K connections is reached.  The OS is probably throwing everything to the
>> swap file and see connections dropped or time out.
>>
>> Should be using a connection pooler  to service this kind of load so the
>> Postgresql does not exhaust resources just from the open connections.
>> https://www.pgbouncer.org/
>>
>>
>> On Tue, Feb 25, 2020 at 11:29 AM Dipanjan Ganguly 
>> wrote:
>>
>>> Greetings,
>>>
>>> I was trying to use postgresql database as a backend with Ejabberd XMPP
>>> server for load test (Using TSUNG).
>>>
>>> Noticed, while using Mnesia the  “simultaneous users and open TCP/UDP
>>> connections”  graph in Tsung report is showing consistency, but while using
>>> Postgres, we see drop in connections during 100 to 500 seconds of runtime,
>>> and then recovering and staying consistent.
>>>
>>> I have been trying to figure out what the issue could be without any
>>> success. I am kind of a noob in this technology, and hoping for some help
>>> from the good people from the community to understand the problem and how
>>> to fix this. Below are some details..
>>>
>>> · Postgres server utilization is low ( Avg load 1, Highest Cpu
>>> utilization 26%, lowest freemem  9000)
>>>
>>>
>>>
>>> Tsung  graph:
>>> [image: image.png]
>>>Graph 1: Postgres 12 Backend
>>> [image: image.png]
>>>
>>>   Graph 2: Mnesia backend
>>>
>>>
>>> · Ejabberd Server: Ubuntu 16.04, 16 GB ram, 4 core CPU.
>>>
>>> · Postgres on remote server: same config
>>>
>>> · Errors encountered during the same time:
>>>  error_connect_etimedout (same outcome for other 2 tests)
>>>
>>> · *Tsung Load:  *512 Bytes message size, user arrival rate
>>> 50/s, 80k registered users.
>>>
>>> · Postgres server utilization is low ( Avg load 1, Highest Cpu
>>> utilization 26%, lowest freemem  9000)
>>>
>>> · Same tsung.xml and userlist used for the tests in Mnesia and
>>> Postgres.
>>>
>>> *Postgres Configuration used:*
>>> shared_buffers = 4GB
>>> effective_cache_size = 12GB
>>> maintenance_work_mem = 1GB
>>> checkpoint_completion_target = 0.9
>>> wal_buffers = 16MB
>>> default_statistics_target = 100
>>> random_page_cost = 4
>>> effective_io_concurrency = 2
>>> work_mem = 256MB
>>> min_wal_size = 1GB
>>> max_wal_size = 2GB
>>> max_worker_processes = 4
>>> max_parallel_workers_per_gather = 2
>>> max_parallel_workers = 4
>>> max_parallel_maintenance_workers = 2
>>> max_connections=5

Re: a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]

2020-02-25 Thread Paul A Jungwirth
On Tue, Feb 25, 2020 at 2:18 AM Stanislav Motyčka
 wrote:
> Sometimes (for tables with many columns) it would be better and easier to 
> write "SELECT" statement with clause "EXCEPT":
> "SELECT * [EXCEPT col1 [,col2]] FROM ..."

I've wanted this feature lots of times and would be delighted to see
it in Postgres.

On Tue, Feb 25, 2020 at 6:51 AM Miles Elam  wrote:
> Do you mean
>   "select everything from tablex except for tablex.col1, and also select 
> tablex.col2 and tabley.col1"
> or
>   "select everything from tablex except for tablex.col1 AND tablex.col2, and 
> also select tabley.col1"
> ?

I take the proposal to mean this:

SELECT listOfColumns [EXCEPT listOfColumns] FROM ...

not this:

SELECT listOfColumns [EXCEPT (listOfColumns) [listOfColumns [EXCEPT
(listOfColumns)]]]... FROM ...

So there is always a single EXCEPT clause (if any) and it comes after
the entire SELECT clause. Then there is no ambiguity. Also this
approach makes the feature easy to understand and use. I don't see any
benefit to letting people interleave selected & excepted columns.
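
Under that reading, a query against a wide table would look like this (table
and column names are invented; this syntax is a proposal and not valid in
PostgreSQL today):

```sql
-- single trailing EXCEPT clause over the whole target list
SELECT * EXCEPT created_at, internal_notes
FROM orders
WHERE status = 'open';
```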

Regards,
Paul




Re: Connections dropping while using Postgres backend DB with Ejabberd

2020-02-25 Thread Dipanjan Ganguly
Thanks Michael for the recommendation and clarification.

Will try with 32 MB on my next run.

BR,
Dipanjan

On Tue, Feb 25, 2020 at 10:51 PM Michael Lewis  wrote:

> work_mem can be used many times per connection given it is per sort, hash,
> or other operations and as mentioned that can be multiplied if the query is
> handled with parallel workers. I am guessing the server has 16GB memory
> total given shared_buffers and effective_cache_size, and a more reasonable
> work_mem setting might be on the order of 32-64MB.
>
> Depending on the type of work being done and how quickly the application
> releases the db connection once it is done, max connections might be on the
> order of 4-20x the number of cores I would expect. If more simultaneous
> users need to be serviced, a connection pooler like pgbouncer or pgpool
> will allow those connections to be re-used quickly.
>
> These numbers are generalizations based on my experience. Others with more
> experience may have different configurations to recommend.
>
>>
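
The quoted sizing advice maps to a small postgresql.conf fragment (values
follow the figures above for a 16 GB, 4-core host; they are illustrative,
not prescriptive):

```ini
# postgresql.conf sketch
shared_buffers = 4GB            # as already configured
effective_cache_size = 12GB     # as already configured
work_mem = 32MB                 # per sort/hash node, not per connection
max_connections = 80            # ~4-20x cores; pool the rest
```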


Re: LDAP with TLS is taking more time in Postgresql 11.5

2020-02-25 Thread Adrian Klaver

On 2/25/20 10:23 AM, Mani Sankar wrote:

> Hi Adrian,
>
> Both the machines are in the same network and both are pointing towards the 
> same LDAP server


I don't see any errors in the Postgres logs.

You probably should take a look at the LDAP server logs to see if there 
is anything there.


You could also turn up the logging detail in Postgres to see if it 
reveals anything.




> Regards,
> Mani.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: LDAP with TLS is taking more time in Postgresql 11.5

2020-02-25 Thread Mani Sankar
Hi Adrian,

Both the machines are in the same network and both are pointing towards the
same LDAP server

Regards,
Mani.

On Tue, 25 Feb, 2020, 11:48 pm Adrian Klaver, 
wrote:

> On 2/25/20 10:08 AM, Mani Sankar wrote:
> > Hi Adrian,
> >
> > Should I want to try this configuration?
>
> I thought you were already using this configuration?
>
> Are the 9.4 and 11.5 instances are on the same machine and/or network?
>
> In other words is ldapserver=XXX pointing at the same thing?
>
>
> >
> > Regards,
> > Mani.
> >
> > On Tue, 25 Feb, 2020, 9:24 pm Adrian Klaver,  > > wrote:
> >
> > On 2/24/20 9:07 PM, Mani Sankar wrote:
> > Please reply to list also.
> > Ccing list.
> >  > Hi Adrian,
> >  >
> >  > Thanks for replying. Below are the requested details.
> >  >
> >  >  Configuration in 9.4 PG Version
> >  >
> >  > local all all ldap ldapserver=XX ldapport=3268
> >  > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >  >
> >  > host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXX
> >  > ldapport=3268 ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >  >
> >  > host all someuser ::1/128 ldap ldapserver=XXX
> > ldapport=3268
> >  > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >  >
> >  > host all all 0.0.0.0/0   ldap
> >  > ldapserver=XXX ldapport=3268 ldapprefix="ADS\"
> > ldapsuffix=""
> >  > ldaptls=1
> >  >
> >  > host all all ::1/128 ldap ldapserver=XXX ldapport=3268
> >  > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >  >
> >  > host replication someuser 0.0.0.0/0 
> >  ldap
> >  > ldapserver=XXX ldapport=3268 ldapprefix="ADS\"
> > ldapsuffix=""
> >  > ldaptls=1
> >  >
> >  > host replication someuser 0.0.0.0/0 
> >  ldap
> >  > ldapserver=XXX ldapport=3268 ldapprefix="ADS\"
> > ldapsuffix=""
> >  > ldaptls=1
> >  >
> >  >  Configuration in 11.5 Version.
> >  >
> >  > local all all ldap ldapserver=XXX ldapport=3268
> >  > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >  >
> >  > host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXX
> >  > ldapport=3268 ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >  >
> >  > host all someuser ::1/128 ldap ldapserver=XXX
> > ldapport=3268
> >  > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >  >
> >  > host all all 0.0.0.0/0   ldap
> >  > ldapserver=XXX ldapport=3268 ldapprefix="ADS\"
> > ldapsuffix=""
> >  > ldaptls=1
> >  >
> >  > host all all ::1/128 ldap ldapserver=XXX ldapport=3268
> >  > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >  >
> >  > host replication someuser 0.0.0.0/0 
> >  ldap
> >  > ldapserver=XXX ldapport=3268 ldapprefix="ADS\"
> > ldapsuffix=""
> >  > ldaptls=1
> >  >
> >  > host replication someuser 0.0.0.0/0 
> >  ldap
> >  > ldapserver=XXX ldapport=3268 ldapprefix="ADS\"
> > ldapsuffix=""
> >  > ldaptls=1
> >  >
> >  > host replication someuser 0.0.0.0/0 
> >  ldap
> >  > ldapserver=XXX ldapport=3268 ldapprefix="ADS\"
> > ldapsuffix=""
> >  > ldaptls=1
> >  >
> >  > host    replication    replicator    X/22    md5
> >  >
> >  > host    replication    replicator    1/22    md5
> >  >
> >  > Linux Version: Red Hat Enterprise Linux Server release 6.10
> > (Santiago)
> >  >
> >  > Server Installation is Source code installation. Custom build for
> > our
> >  > environment.
> >  >
> >  > Authentication logs from PG 11.5:
> >  >
> >  > 2020-02-24 00:00:15 MST [25089]:
> >  >
> >
>  
> application=[unknown],host=xx.xx.xxx.xx(55742),user=[unknown],db=[unknown],state=0
> >
> >  > LOG:  connection received: host=xx.xx.xxx.xx port=55742
> >  >
> >  > 2020-02-24 00:00:16 MST [25090]:
> >  >
> >
>  
> application=[unknown],host=xx.xx.xxx.xx(55748),user=[unknown],db=[unknown],state=0
> >
> >  > LOG:  connection received: host=xx.xx.xxx.xx port=55748
> >  >
> >  > 2020-02-24 00:00:16 MST [25092]:
> >  >
> >
>  
> application=[unknown],host=xx.xx.xxx.xx(55765),user=[unknown],db=[unknown],state=0
> >
> >  > LOG:  connection received: host=xx.xx.xxx.xx port=55765
> >  >
> >  > 2020-02-24 00:00:16 MST [25093]:
> >  >
> >
>  
> application=[unknown],host=xx.xx.xxx.xx(55770),user=[unknown],db=[unknown],state=0
> >
> >  > LOG:  connection 

Re: LDAP with TLS is taking more time in Postgresql 11.5

2020-02-25 Thread Adrian Klaver

On 2/25/20 10:08 AM, Mani Sankar wrote:

Hi Adrian,

Should I want to try this configuration?


I thought you were already using this configuration?

Are the 9.4 and 11.5 instances on the same machine and/or network?

In other words is ldapserver=XXX pointing at the same thing?




Regards,
Mani.

On Tue, 25 Feb, 2020, 9:24 pm Adrian Klaver, > wrote:


On 2/24/20 9:07 PM, Mani Sankar wrote:
Please reply to list also.
Ccing list.
 > Hi Adrian,
 >
 > Thanks for replying. Below are the requested details.
 >
 >  Configuration in 9.4 PG Version
 >
 > local all all ldap ldapserver=XX ldapport=3268
 > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
 >
 > host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXX
 > ldapport=3268 ldapprefix="ADS\" ldapsuffix="" ldaptls=1
 >
 > host all someuser ::1/128 ldap ldapserver=XXX
ldapport=3268
 > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
 >
 > host all all 0.0.0.0/0   ldap
 > ldapserver=XXX ldapport=3268 ldapprefix="ADS\"
ldapsuffix=""
 > ldaptls=1
 >
 > host all all ::1/128 ldap ldapserver=XXX ldapport=3268
 > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
 >
 > host replication someuser 0.0.0.0/0 
 ldap
 > ldapserver=XXX ldapport=3268 ldapprefix="ADS\"
ldapsuffix=""
 > ldaptls=1
 >
 > host replication someuser 0.0.0.0/0 
 ldap
 > ldapserver=XXX ldapport=3268 ldapprefix="ADS\"
ldapsuffix=""
 > ldaptls=1
 >
 >  Configuration in 11.5 Version.
 >
 > local all all ldap ldapserver=XXX ldapport=3268
 > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
 >
 > host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXX
 > ldapport=3268 ldapprefix="ADS\" ldapsuffix="" ldaptls=1
 >
 > host all someuser ::1/128 ldap ldapserver=XXX
ldapport=3268
 > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
 >
 > host all all 0.0.0.0/0   ldap
 > ldapserver=XXX ldapport=3268 ldapprefix="ADS\"
ldapsuffix=""
 > ldaptls=1
 >
 > host all all ::1/128 ldap ldapserver=XXX ldapport=3268
 > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
 >
 > host replication someuser 0.0.0.0/0 
 ldap
 > ldapserver=XXX ldapport=3268 ldapprefix="ADS\"
ldapsuffix=""
 > ldaptls=1
 >
 > host replication someuser 0.0.0.0/0 
 ldap
 > ldapserver=XXX ldapport=3268 ldapprefix="ADS\"
ldapsuffix=""
 > ldaptls=1
 >
 > host replication someuser 0.0.0.0/0 
 ldap
 > ldapserver=XXX ldapport=3268 ldapprefix="ADS\"
ldapsuffix=""
 > ldaptls=1
 >
 > host    replication replicator  X/22    md5
 >
 > host    replication replicator  1/22    md5
 >
 > Linux Version: Red Hat Enterprise Linux Server release 6.10
(Santiago)
 >
 > Server Installation is Source code installation. Custom build for
our
 > environment.
 >
 > Authentication logs from PG 11.5:
 >
 > 2020-02-24 00:00:15 MST [25089]:
 >

application=[unknown],host=xx.xx.xxx.xx(55742),user=[unknown],db=[unknown],state=0

 > LOG:  connection received: host=xx.xx.xxx.xx port=55742
 >
 > 2020-02-24 00:00:16 MST [25090]:
 >

application=[unknown],host=xx.xx.xxx.xx(55748),user=[unknown],db=[unknown],state=0

 > LOG:  connection received: host=xx.xx.xxx.xx port=55748
 >
 > 2020-02-24 00:00:16 MST [25092]:
 >

application=[unknown],host=xx.xx.xxx.xx(55765),user=[unknown],db=[unknown],state=0

 > LOG:  connection received: host=xx.xx.xxx.xx port=55765
 >
 > 2020-02-24 00:00:16 MST [25093]:
 >

application=[unknown],host=xx.xx.xxx.xx(55770),user=[unknown],db=[unknown],state=0

 > LOG:  connection received: host=xx.xx.xxx.xx port=55770
 >
 > 2020-02-24 00:00:17 MST [25090]:
 >

application=[unknown],host=xx.xx.xxx.xx(55748),user=Someuser,db=test_db,state=0

 > LOG:  connection authorized: user=Someuser database=test_db
 >
 > 2020-02-24 00:00:17 MST [25089]:
 >

application=[unknown],host=xx.xx.xxx.xx(55742),user=Someuser,db=test_db,state=0

 > LOG:  connection authorized: user=Someuser database=test_db
 >
 > 2020-02-24 00:00:17 MST [25092]:
 >

application=[unknown],host=xx.xx.xxx.xx(55765),user=Someuser,db=test_db,state=0

 > LOG:  connection authorized: user=Someuser database=test_db
  

Re: LDAP with TLS is taking more time in Postgresql 11.5

2020-02-25 Thread Mani Sankar
Hi Adrian,

Should I want to try this configuration?

Regards,
Mani.

On Tue, 25 Feb, 2020, 9:24 pm Adrian Klaver, 
wrote:

> On 2/24/20 9:07 PM, Mani Sankar wrote:
> Please reply to list also.
> Ccing list.
> > Hi Adrian,
> >
> > Thanks for replying. Below are the requested details.
> >
> >  Configuration in 9.4 PG Version
> >
> > local all all ldap ldapserver=XX ldapport=3268
> > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >
> > host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXX
> > ldapport=3268 ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >
> > host all someuser ::1/128 ldap ldapserver=XXX ldapport=3268
> > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >
> > host all all 0.0.0.0/0  ldap
> > ldapserver=XXX ldapport=3268 ldapprefix="ADS\" ldapsuffix=""
> > ldaptls=1
> >
> > host all all ::1/128 ldap ldapserver=XXX ldapport=3268
> > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >
> > host replication someuser 0.0.0.0/0  ldap
> > ldapserver=XXX ldapport=3268 ldapprefix="ADS\" ldapsuffix=""
> > ldaptls=1
> >
> > host replication someuser 0.0.0.0/0  ldap
> > ldapserver=XXX ldapport=3268 ldapprefix="ADS\" ldapsuffix=""
> > ldaptls=1
> >
> >  Configuration in 11.5 Version.
> >
> > local all all ldap ldapserver=XXX ldapport=3268
> > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >
> > host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXX
> > ldapport=3268 ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >
> > host all someuser ::1/128 ldap ldapserver=XXX ldapport=3268
> > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >
> > host all all 0.0.0.0/0  ldap
> > ldapserver=XXX ldapport=3268 ldapprefix="ADS\" ldapsuffix=""
> > ldaptls=1
> >
> > host all all ::1/128 ldap ldapserver=XXX ldapport=3268
> > ldapprefix="ADS\" ldapsuffix="" ldaptls=1
> >
> > host replication someuser 0.0.0.0/0  ldap
> > ldapserver=XXX ldapport=3268 ldapprefix="ADS\" ldapsuffix=""
> > ldaptls=1
> >
> > host replication someuser 0.0.0.0/0  ldap
> > ldapserver=XXX ldapport=3268 ldapprefix="ADS\" ldapsuffix=""
> > ldaptls=1
> >
> > host replication someuser 0.0.0.0/0  ldap
> > ldapserver=XXX ldapport=3268 ldapprefix="ADS\" ldapsuffix=""
> > ldaptls=1
> >
> > host    replication    replicator    X/22    md5
> >
> > host    replication    replicator    1/22    md5
> >
> > Linux Version: Red Hat Enterprise Linux Server release 6.10 (Santiago)
> >
> > Server Installation is Source code installation. Custom build for our
> > environment.
> >
> > Authentication logs from PG 11.5:
> >
> > 2020-02-24 00:00:15 MST [25089]:
> >
> application=[unknown],host=xx.xx.xxx.xx(55742),user=[unknown],db=[unknown],state=0
>
> > LOG:  connection received: host=xx.xx.xxx.xx port=55742
> >
> > 2020-02-24 00:00:16 MST [25090]:
> >
> application=[unknown],host=xx.xx.xxx.xx(55748),user=[unknown],db=[unknown],state=0
>
> > LOG:  connection received: host=xx.xx.xxx.xx port=55748
> >
> > 2020-02-24 00:00:16 MST [25092]:
> >
> application=[unknown],host=xx.xx.xxx.xx(55765),user=[unknown],db=[unknown],state=0
>
> > LOG:  connection received: host=xx.xx.xxx.xx port=55765
> >
> > 2020-02-24 00:00:16 MST [25093]:
> >
> application=[unknown],host=xx.xx.xxx.xx(55770),user=[unknown],db=[unknown],state=0
>
> > LOG:  connection received: host=xx.xx.xxx.xx port=55770
> >
> > 2020-02-24 00:00:17 MST [25090]:
> >
> application=[unknown],host=xx.xx.xxx.xx(55748),user=Someuser,db=test_db,state=0
>
> > LOG:  connection authorized: user=Someuser database=test_db
> >
> > 2020-02-24 00:00:17 MST [25089]:
> >
> application=[unknown],host=xx.xx.xxx.xx(55742),user=Someuser,db=test_db,state=0
>
> > LOG:  connection authorized: user=Someuser database=test_db
> >
> > 2020-02-24 00:00:17 MST [25092]:
> >
> application=[unknown],host=xx.xx.xxx.xx(55765),user=Someuser,db=test_db,state=0
>
> > LOG:  connection authorized: user=Someuser database=test_db
> >
> > 2020-02-24 00:00:17 MST [25093]:
> >
> application=[unknown],host=xx.xx.xxx.xx(55770),user=Someuser,db=test_db,state=0
>
> > LOG:  connection authorized: user=Someuser database=test_db
> >
> > Authentication logs from PG 9.4:
> >
> > 2020-02-17 22:40:01 MST [127575]:
> >
> application=[unknown],host=xx.xx.xx.xx(39451),user=[unknown],db=[unknown]
> LOG:
> > connection received: host=xx.xx.xx.xx port=39451
> >
> > 2020-02-17 22:40:01 MST [127575]:
> > application=[unknown],host=xx.xx.xx.xx(39451),user=Someuser,db=test_db
> > LOG:  connection authorized: user=Someuser database=test_db
> >
> > 2020-02-24 21:57:44 MST [117472]:
> >
> application=[unknown],host=xx.xx.xx.xx(58500),user=[unknown],db=[unknown]
> LOG:
> > connection received: host=xx.xx.xx.xx port=58500
> >
> > 

Re: Connections dropping while using Postgres backend DB with Ejabberd

2020-02-25 Thread Michael Lewis
work_mem can be used many times per connection, since it applies per sort, hash,
or other operation, and as mentioned that can be multiplied if the query is
handled by parallel workers. I am guessing the server has 16GB memory
total given shared_buffers and effective_cache_size, and a more reasonable
work_mem setting might be on the order of 32-64MB.

Depending on the type of work being done and how quickly the application
releases the db connection once it is done, I would expect max connections on
the order of 4-20x the number of cores. If more simultaneous users need to be
serviced, a connection pooler like pgbouncer or pgpool will allow those
connections to be re-used quickly.

These numbers are generalizations based on my experience. Others with more
experience may have different configurations to recommend.
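To make the arithmetic behind this concrete, here is a rough worst-case sketch (the operation and worker counts are illustrative assumptions, not measurements; real consumption depends on plan shape and how many sort/hash nodes actually run concurrently):

```shell
# Rough worst-case estimate of memory consumed by work_mem alone.
# ops_per_query and parallel_workers are illustrative assumptions:
# each sort/hash node may use up to work_mem, and each parallel
# worker can do so as well.
connections=100
work_mem_mb=256
ops_per_query=2
parallel_workers=2
echo $(( connections * work_mem_mb * ops_per_query * (1 + parallel_workers) ))
# prints 153600 (MB), far beyond a 16GB server; at work_mem=48MB the same
# formula gives 28800 MB, which is why a smaller setting plus pooling helps.
```

The point is not the exact numbers but that work_mem multiplies with connections and per-query operations.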



Re: Trigger

2020-02-25 Thread Adrian Klaver

On 2/25/20 9:08 AM, Alban Hertroys wrote:



On 25 Feb 2020, at 17:53, Adrian Klaver  wrote:

On 2/25/20 12:01 AM, Sonam Sharma wrote:

I have a trigger, like many other triggers, that fires after
update and checks a field of the OLD set. For some reason this trigger throws
this error:
ERROR: record "old" has no field "ivo_sts_cd" CONTEXT: SQL statement



if exc_count = 0 then
UPDATE pps.T8071_CAI_IVO_HDR SET IVO_STS_CD = 1 where 
T616_VBU_NBR=old.T616_VBU_NBR and T617_FNC_TYP_CD=old.T617_FNC_TYP_CD and
T8071_CAI_IVO_ID=old.T8071_CAI_IVO_ID and T8071_ADD_DM= old. T8071_ADD_DM and 
old.ivo_sts_cd != 10 and old.ivo_sts_cd != 3;


Realized I went through the above too quickly. I do not see a SET, nor am I 
clear what table you are trying to UPDATE.


I’m pretty sure that if the OP were to format their query in a more readable 
and consistent way, they would spot their error pretty quickly. It’s a simple 
typo.


Yeah, throwing it at:

http://sqlformat.darold.net/

returned:

UPDATE
pps.T8071_CAI_IVO_HDR
SET
IVO_STS_CD = 1
WHERE
T616_VBU_NBR = old.T616_VBU_NBR
AND T617_FNC_TYP_CD = old.T617_FNC_TYP_CD
AND T8071_CAI_IVO_ID = old.T8071_CAI_IVO_ID
AND T8071_ADD_DM = old. T8071_ADD_DM
AND old.ivo_sts_cd != 10
AND old.ivo_sts_cd != 3;

Found the SET:)



Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Trigger

2020-02-25 Thread Alban Hertroys


> On 25 Feb 2020, at 17:53, Adrian Klaver  wrote:
> 
> On 2/25/20 12:01 AM, Sonam Sharma wrote:
>> I have a trigger, like many other triggers, that fires after
>> update and checks a field of the OLD set. For some reason this trigger throws
>> this error:
>> ERROR: record "old" has no field "ivo_sts_cd" CONTEXT: SQL statement
> 
>> if exc_count = 0 then
>> UPDATE pps.T8071_CAI_IVO_HDR SET IVO_STS_CD = 1 where 
>> T616_VBU_NBR=old.T616_VBU_NBR and T617_FNC_TYP_CD=old.T617_FNC_TYP_CD and
>> T8071_CAI_IVO_ID=old.T8071_CAI_IVO_ID and T8071_ADD_DM= old. T8071_ADD_DM 
>> and old.ivo_sts_cd != 10 and old.ivo_sts_cd != 3;
> 
> Realized I went through the above too quickly. I do not see a SET, nor am I 
> clear what table you are trying to UPDATE.

I’m pretty sure that if the OP were to format their query in a more readable 
and consistent way, they would spot their error pretty quickly. It’s a simple 
typo.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.





Re: Connections dropping while using Postgres backend DB with Ejabberd

2020-02-25 Thread Justin
Hi Dipanjan

Please do not post to all the PostgreSQL mailing lists; let's keep this on one
list at a time. Keep this on the general list.

Am I reading this correctly: 10,000 to 50,000 open connections?
PostgreSQL really is not meant to serve that many open connections.
Due to the design of PostgreSQL, each client connection can use up to the
work_mem of 256MB plus additional memory for parallel processes. Memory will be
exhausted long before 50,000 connections are reached.

I'm not surprised PostgreSQL and the server are showing issues long before
10K connections is reached. The OS is probably throwing everything to the
swap file, causing connections to drop or time out.

You should be using a connection pooler to service this kind of load so that
PostgreSQL does not exhaust resources just from the open connections.
https://www.pgbouncer.org/
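As a minimal sketch of such a pooler setup (the database name, addresses, and pool sizes below are placeholder assumptions, not values from this thread), a transaction-pooling pgbouncer configuration might look like:

```ini
; pgbouncer.ini -- illustrative sketch only
[databases]
ejabberd = host=127.0.0.1 port=5432 dbname=ejabberd

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling lets many clients share a small set of server connections
pool_mode = transaction
max_client_conn = 10000
default_pool_size = 20
```

Clients then connect to port 6432 instead of 5432, and PostgreSQL itself only ever sees on the order of default_pool_size connections per database/user pair.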


On Tue, Feb 25, 2020 at 11:29 AM Dipanjan Ganguly 
wrote:

> Greetings,
>
> I was trying to use postgresql database as a backend with Ejabberd XMPP
> server for load test (Using TSUNG).
>
> Noticed, while using Mnesia the  “simultaneous users and open TCP/UDP
> connections”  graph in Tsung report is showing consistency, but while using
> Postgres, we see drop in connections during 100 to 500 seconds of runtime,
> and then recovering and staying consistent.
>
> I have been trying to figure out what the issue could be without any
> success. I am kind of a noob in this technology, and hoping for some help
> from the good people from the community to understand the problem and how
> to fix this. Below are some details..
>
> · Postgres server utilization is low ( Avg load 1, Highest Cpu
> utilization 26%, lowest freemem  9000)
>
>
>
> Tsung  graph:
> [image: Graph 1: Postgres 12 Backend]
> [image: Graph 2: Mnesia backend]
>
>
> · Ejabberd Server: Ubuntu 16.04, 16 GB ram, 4 core CPU.
>
> · Postgres on remote server: same config
>
> · Errors encountered during the same time:
>  error_connect_etimedout (same outcome for other 2 tests)
>
> · *Tsung Load:  *512 Bytes message size, user arrival rate 50/s,
> 80k registered users.
>
> · Postgres server utilization is low ( Avg load 1, Highest Cpu
> utilization 26%, lowest freemem  9000)
>
> · Same tsung.xm and userlist used for the tests in Mnesia and
> Postgres.
>
> *Postgres Configuration used:*
> shared_buffers = 4GB
> effective_cache_size = 12GB
> maintenance_work_mem = 1GB
> checkpoint_completion_target = 0.9
> wal_buffers = 16MB
> default_statistics_target = 100
> random_page_cost = 4
> effective_io_concurrency = 2
> work_mem = 256MB
> min_wal_size = 1GB
> max_wal_size = 2GB
> max_worker_processes = 4
> max_parallel_workers_per_gather = 2
> max_parallel_workers = 4
> max_parallel_maintenance_workers = 2
> max_connections=5
>
>
> Kindly help understanding this behavior.  Some advice on how to fix this
> will be a big help .
>
>
>
> Thanks,
>
> Dipanjan
>


Re: Trigger

2020-02-25 Thread Adrian Klaver

On 2/25/20 12:01 AM, Sonam Sharma wrote:

I have a trigger, like many other triggers, that fires after

update and checks a field of the OLD set. For some reason this trigger
throws this error:


ERROR: record "old" has no field "ivo_sts_cd" CONTEXT: SQL statement



if exc_count = 0 then

UPDATE pps.T8071_CAI_IVO_HDR SET IVO_STS_CD = 1 where 
T616_VBU_NBR=old.T616_VBU_NBR and T617_FNC_TYP_CD=old.T617_FNC_TYP_CD and


T8071_CAI_IVO_ID=old.T8071_CAI_IVO_ID and T8071_ADD_DM= old. 
T8071_ADD_DM and old.ivo_sts_cd != 10 and old.ivo_sts_cd != 3;


Realized I went through the above too quickly. I do not see a SET, nor am 
I clear what table you are trying to UPDATE.




end if;

RETURN NEW;

END

$function$;


Can someone please help where I am missing ..


Thanks,

Sonam




--
Adrian Klaver
adrian.kla...@aklaver.com




Connections dropping while using Postgres backend DB with Ejabberd

2020-02-25 Thread Dipanjan Ganguly
Greetings,

I was trying to use postgresql database as a backend with Ejabberd XMPP
server for load test (Using TSUNG).

I noticed that while using Mnesia, the “simultaneous users and open TCP/UDP
connections” graph in the Tsung report stays consistent, but while using
Postgres, we see a drop in connections during 100 to 500 seconds of runtime,
which then recovers and stays consistent.

I have been trying to figure out what the issue could be without any
success. I am kind of a noob in this technology, and hoping for some help
from the good people from the community to understand the problem and how
to fix this. Below are some details..

· Postgres server utilization is low ( Avg load 1, Highest Cpu
utilization 26%, lowest freemem  9000)



Tsung  graph:
[image: Graph 1: Postgres 12 Backend]
[image: Graph 2: Mnesia backend]


· Ejabberd Server: Ubuntu 16.04, 16 GB ram, 4 core CPU.

· Postgres on remote server: same config

· Errors encountered during the same time:  error_connect_etimedout
(same outcome for other 2 tests)

· *Tsung Load:  *512 Bytes message size, user arrival rate 50/s,
80k registered users.

· Postgres server utilization is low ( Avg load 1, Highest Cpu
utilization 26%, lowest freemem  9000)

· Same tsung.xm and userlist used for the tests in Mnesia and
Postgres.

*Postgres Configuration used:*
shared_buffers = 4GB
effective_cache_size = 12GB
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 4
effective_io_concurrency = 2
work_mem = 256MB
min_wal_size = 1GB
max_wal_size = 2GB
max_worker_processes = 4
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_parallel_maintenance_workers = 2
max_connections=5


Kindly help understanding this behavior.  Some advice on how to fix this
will be a big help .



Thanks,

Dipanjan


Re: Trigger

2020-02-25 Thread Adrian Klaver

On 2/25/20 12:01 AM, Sonam Sharma wrote:

I have a trigger, like many other triggers, that fires after

update and checks a field of the OLD set. For some reason this trigger
throws this error:


ERROR: record "old" has no field "ivo_sts_cd" CONTEXT: SQL statement

"UPDATE dbo.T8071_CAI_IVO_HDR SET IVO_STS_CD = 1 where 
T616_VBU_NBR=old.T616_VBU_NBR


and T617_FNC_TYP_CD=old.T617_FNC_TYP_CD and 
T8071_CAI_IVO_ID=old.T8071_CAI_IVO_ID and


T8071_ADD_DM= old. T8071_ADD_DM and old.ivo_sts_cd != 10 and 
old.ivo_sts_cd != 3" PL/pgSQL


function dbo.t8096_upd_func02() line 9 at SQL statement SQL state: 42703

this is the trigger :

t8096_upd_trig02 AFTER UPDATE ON dbo.t8096_cai_ivo_exc FOR EACH ROW WHEN 
(old.exc_rsd_idc IS DISTINCT FROM new.exc_rsd_idc) EXECUTE PROCEDURE 
dbo.t8096_upd_func02()


below is the function :

CREATE OR REPLACE FUNCTION pps.t8096_upd_func02()

RETURNS trigger

LANGUAGE plpgsql

AS $function$

DECLARE

exc_count int;

BEGIN

select count(*) into exc_count from pps.T8096_CAI_IVO_EXC WHERE 
T616_VBU_NBR=old.T616_VBU_NBR


and T617_FNC_TYP_CD=old.T617_FNC_TYP_CD and 
T8071_CAI_IVO_ID=old.T8071_CAI_IVO_ID and T8071_ADD_DM= old.T8071_ADD_DM 
and EXC_RSd_IDC = 'N'


AND T8095_EXC_TYP_CD NOT IN(8,9,10,11) ;

if exc_count = 0 then

UPDATE pps.T8071_CAI_IVO_HDR SET IVO_STS_CD = 1 where 
T616_VBU_NBR=old.T616_VBU_NBR and T617_FNC_TYP_CD=old.T617_FNC_TYP_CD and


T8071_CAI_IVO_ID=old.T8071_CAI_IVO_ID and T8071_ADD_DM= old. 
T8071_ADD_DM and old.ivo_sts_cd != 10 and old.ivo_sts_cd != 3;


end if;

RETURN NEW;

END

$function$;


Can someone please help where I am missing ..


Does the table dbo.t8096_cai_ivo_exc have the field ivo_sts_cd?




Thanks,

Sonam




--
Adrian Klaver
adrian.kla...@aklaver.com
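For what it's worth, one possible direction for the fix, assuming the answer to the question above is no (i.e. ivo_sts_cd exists on the header table but not on the t8096_cai_ivo_exc row the trigger fires on), is to move the status conditions off OLD and onto the table being updated. A hedged sketch using the names from the thread:

```sql
-- Sketch of one possible fix (assumes ivo_sts_cd lives on T8071_CAI_IVO_HDR,
-- not on the t8096_cai_ivo_exc row the trigger fires on; column names as in
-- the thread). The status checks reference the target table instead of OLD:
UPDATE pps.T8071_CAI_IVO_HDR h
   SET IVO_STS_CD = 1
 WHERE h.T616_VBU_NBR     = old.T616_VBU_NBR
   AND h.T617_FNC_TYP_CD  = old.T617_FNC_TYP_CD
   AND h.T8071_CAI_IVO_ID = old.T8071_CAI_IVO_ID
   AND h.T8071_ADD_DM     = old.T8071_ADD_DM
   AND h.IVO_STS_CD NOT IN (10, 3);
```

Whether that matches the intended business logic is for the OP to confirm; the key point is that OLD only has the columns of the table the trigger is attached to.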




Re: LDAP with TLS is taking more time in Postgresql 11.5

2020-02-25 Thread Adrian Klaver

On 2/24/20 9:07 PM, Mani Sankar wrote:
Please reply to list also.
Ccing list.

Hi Adrian,

Thanks for replying. Below are the requested details.

 Configuration in 9.4 PG Version

local all all ldap ldapserver=XX ldapport=3268 
ldapprefix="ADS\" ldapsuffix="" ldaptls=1


host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXX 
ldapport=3268 ldapprefix="ADS\" ldapsuffix="" ldaptls=1


host all someuser ::1/128 ldap ldapserver=XXX ldapport=3268 
ldapprefix="ADS\" ldapsuffix="" ldaptls=1


host all all 0.0.0.0/0  ldap 
ldapserver=XXX ldapport=3268 ldapprefix="ADS\" ldapsuffix="" 
ldaptls=1


host all all ::1/128 ldap ldapserver=XXX ldapport=3268 
ldapprefix="ADS\" ldapsuffix="" ldaptls=1


host replication someuser 0.0.0.0/0  ldap 
ldapserver=XXX ldapport=3268 ldapprefix="ADS\" ldapsuffix="" 
ldaptls=1


host replication someuser 0.0.0.0/0  ldap 
ldapserver=XXX ldapport=3268 ldapprefix="ADS\" ldapsuffix="" 
ldaptls=1


 Configuration in 11.5 Version.

local all all ldap ldapserver=XXX ldapport=3268 
ldapprefix="ADS\" ldapsuffix="" ldaptls=1


host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXX 
ldapport=3268 ldapprefix="ADS\" ldapsuffix="" ldaptls=1


host all someuser ::1/128 ldap ldapserver=XXX ldapport=3268 
ldapprefix="ADS\" ldapsuffix="" ldaptls=1


host all all 0.0.0.0/0  ldap 
ldapserver=XXX ldapport=3268 ldapprefix="ADS\" ldapsuffix="" 
ldaptls=1


host all all ::1/128 ldap ldapserver=XXX ldapport=3268 
ldapprefix="ADS\" ldapsuffix="" ldaptls=1


host replication someuser 0.0.0.0/0  ldap 
ldapserver=XXX ldapport=3268 ldapprefix="ADS\" ldapsuffix="" 
ldaptls=1


host replication someuser 0.0.0.0/0  ldap 
ldapserver=XXX ldapport=3268 ldapprefix="ADS\" ldapsuffix="" 
ldaptls=1


host replication someuser 0.0.0.0/0  ldap 
ldapserver=XXX ldapport=3268 ldapprefix="ADS\" ldapsuffix="" 
ldaptls=1


host    replication replicator  X/22    md5

host    replication replicator  1/22    md5

Linux Version: Red Hat Enterprise Linux Server release 6.10 (Santiago)

Server Installation is Source code installation. Custom build for our 
environment.


Authentication logs from PG 11.5:

2020-02-24 00:00:15 MST [25089]: 
application=[unknown],host=xx.xx.xxx.xx(55742),user=[unknown],db=[unknown],state=0 
LOG:  connection received: host=xx.xx.xxx.xx port=55742


2020-02-24 00:00:16 MST [25090]: 
application=[unknown],host=xx.xx.xxx.xx(55748),user=[unknown],db=[unknown],state=0 
LOG:  connection received: host=xx.xx.xxx.xx port=55748


2020-02-24 00:00:16 MST [25092]: 
application=[unknown],host=xx.xx.xxx.xx(55765),user=[unknown],db=[unknown],state=0 
LOG:  connection received: host=xx.xx.xxx.xx port=55765


2020-02-24 00:00:16 MST [25093]: 
application=[unknown],host=xx.xx.xxx.xx(55770),user=[unknown],db=[unknown],state=0 
LOG:  connection received: host=xx.xx.xxx.xx port=55770


2020-02-24 00:00:17 MST [25090]: 
application=[unknown],host=xx.xx.xxx.xx(55748),user=Someuser,db=test_db,state=0 
LOG:  connection authorized: user=Someuser database=test_db


2020-02-24 00:00:17 MST [25089]: 
application=[unknown],host=xx.xx.xxx.xx(55742),user=Someuser,db=test_db,state=0 
LOG:  connection authorized: user=Someuser database=test_db


2020-02-24 00:00:17 MST [25092]: 
application=[unknown],host=xx.xx.xxx.xx(55765),user=Someuser,db=test_db,state=0 
LOG:  connection authorized: user=Someuser database=test_db


2020-02-24 00:00:17 MST [25093]: 
application=[unknown],host=xx.xx.xxx.xx(55770),user=Someuser,db=test_db,state=0 
LOG:  connection authorized: user=Someuser database=test_db


Authentication logs from PG 9.4:

2020-02-17 22:40:01 MST [127575]: 
application=[unknown],host=xx.xx.xx.xx(39451),user=[unknown],db=[unknown] LOG:  
connection received: host=xx.xx.xx.xx port=39451


2020-02-17 22:40:01 MST [127575]: 
application=[unknown],host=xx.xx.xx.xx(39451),user=Someuser,db=test_db 
LOG:  connection authorized: user=Someuser database=test_db


2020-02-24 21:57:44 MST [117472]: 
application=[unknown],host=xx.xx.xx.xx(58500),user=[unknown],db=[unknown] LOG:  
connection received: host=xx.xx.xx.xx port=58500


2020-02-24 21:57:44 MST [117472]: 
application=[unknown],host=xx.xx.xx.xx(58500),user=Someuser,db=test_db 
LOG:  connection authorized: user=Someuser database=test_db


2020-02-24 21:58:27 MST [117620]: 
application=[unknown],host=xx.xx.xx.xx(58520),user=[unknown],db=[unknown] LOG:  
connection received: host=xx.xx.xx.xx port=58520


2020-02-24 21:58:27 MST [117620]: 
application=[unknown],host=xx.xx.xx.xx(58520),user=Someuser,db=test_db 
LOG:  connection authorized: user=Someuser database=test_db


2020-02-24 21:58:31 MST 

Highly academic: local etcd & Patroni Cluster for testing on a single host

2020-02-25 Thread Paul Förster
Hi,

I have set up an etcd & Patroni cluster on a single machine for testing 
purposes as follows:

/data/pg01a/db as data directory for the first "node"
/data/pg01b/db as data directory for the second "node"

I have set up Patroni to make each PostgreSQL database cluster archive to its 
own destination, so they won't interfere with each other. All is well and 
working fine so far. Failover/Switchover works, they are syncing properly, etc.

Now:

Inside /data/pg01a and /data/pg01b I also have a directory ts, i.e. 
/data/pg01a/ts and /data/pg01b/ts to simulate different filesystems.

If I do a 'create tablespace' and specify its location, I must of course 
specify either /data/pg01a/ts or /data/pg01b/ts.

Files with tables get created as usual, but since it's a replica (sync, by
the way), the other tablespace directory will not hold any files; instead,
the replica uses the same files as the master.

Is there any way around that, i.e. make the replica use its own files? I found 
a way to make each archive destination individual despite Patroni, but I can't 
seem to find a way to do something similar to the tablespaces. Even correcting 
the symlinks in ${PGDATA}/pg_tblspc manually (and then restart) does not work.

The explanation is simple: the location is stored inside the database cluster
information and is by definition the same across all nodes. So, if 'a' is
master and I do "create tablespace test location '/data/pg01a/ts'", it creates
files there, with the replica 'b' using the *same* destination.

Still, is there a way to make each node store that information individually? I 
know, this is highly academic but I only have a single host to do etcd & 
Patroni cluster experiments on.

Cheers,
Paul
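For illustration, the symlink step being described can be simulated outside PostgreSQL (all paths below are stand-ins created under a temp directory, and the tablespace OID is made up; as noted in the thread, relinking alone does not change the location stored inside the database):

```shell
#!/bin/sh
# Simulate relinking a tablespace symlink from the 'a' to the 'b' directory.
# Paths and the OID 16385 are illustrative; this demonstrates only the
# filesystem step, which does NOT update what PostgreSQL stores internally.
set -e
base=$(mktemp -d)
mkdir -p "$base/data/pg01a/ts" "$base/data/pg01b/ts" "$base/pgdata/pg_tblspc"

# What 'create tablespace ... location /data/pg01a/ts' leaves behind:
ln -s "$base/data/pg01a/ts" "$base/pgdata/pg_tblspc/16385"

# Manual relink on the replica (instance stopped first):
rm "$base/pgdata/pg_tblspc/16385"
ln -s "$base/data/pg01b/ts" "$base/pgdata/pg_tblspc/16385"
readlink "$base/pgdata/pg_tblspc/16385"   # now points at the pg01b directory
```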



Re: a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]

2020-02-25 Thread Miles Elam
On Tue, Feb 25, 2020 at 6:41 AM Josef Šimánek 
wrote:

>
> út 25. 2. 2020 v 15:35 odesílatel Miles Elam 
> napsal:
>
>> How do you see this syntax working in a JOIN query?
>>
>> SELECT x.* EXCEPT x.col1, x.col2, y.col1
>> FROM tablex AS x
>>   LEFT JOIN tabley AS y;
>>
>> The column(s) you want to exclude become ambiguous.
>>
>
> Can you explain how those column(s) are ambiguous in your example? I would
> expect to select everything from table x (as SELECT x.* should do) except
> x.col1 and x.col2. Nothing is selected from table y, thus y.col1 is not
> relevant here (the question is whether this is a problem or not: raise, or ignore?).
>

Do you mean
  "select everything from tablex except for tablex.col1, and also select
tablex.col2 and tabley.col1"
or
  "select everything from tablex except for tablex.col1 AND tablex.col2,
and also select tabley.col1"
?

It's entirely possible to specify a column twice. It's quite common for me
to see what fields I need from a table by doing a "SELECT * ... LIMIT 1"
and then "SELECT col1, * ... LIMIT 1" as I refine the query, eventually
eliminating the wildcard when I'm done. (When I'm using an IDE that doesn't
support SQL table/column autocomplete.)

EXCEPT would need to be scoped as to which columns it's meant to be
excluding without ambiguity. Just reading from the column list until you
hit another table's columns or a function strikes me as far too loose.


Re: a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]

2020-02-25 Thread Josef Šimánek
út 25. 2. 2020 v 15:35 odesílatel Miles Elam 
napsal:

> How do you see this syntax working in a JOIN query?
>
> SELECT x.* EXCEPT x.col1, x.col2, y.col1
> FROM tablex AS x
>   LEFT JOIN tabley AS y;
>
> The column(s) you want to exclude become ambiguous.
>

Can you explain how those column(s) are ambiguous in your example? I would
expect to select everything from table x (as SELECT x.* should do) except
x.col1 and x.col2. Nothing is selected from table y, thus y.col1 is not
relevant here (the question is whether this is a problem or not: raise, or ignore?).


> Parentheses?
>
> SELECT x.* EXCEPT (x.col1, x.col2), y.col1
> FROM tablex AS x
>   LEFT JOIN tabley AS y;
>
> Could work, but this is encouraging the use of the wildcard selector,
> which I'm not sure is a productive or maintainable goal. In exchange for
> flexibility, you've added a non-trivial amount of comprehension complexity.
> I'm not a big fan of the wildcard selector except in the most trivial cases
> and even then only as part of development toward a final query with all
> columns specified. Then again I try not to have tables with hundreds of
> columns (or even tens in most cases), so my own use cases may bias
> me. Personally I just don't like queries where I cannot clearly see what is
> being returned to me. Anything that makes that ambiguity more popular will
> be viewed with a skeptical eye.
>
>
> On Tue, Feb 25, 2020 at 2:18 AM Stanislav Motyčka <
> stanislav.moty...@gmail.com> wrote:
>
>> Hello,
>>
>> Sometimes (for tables with many columns) it would be better and easier to
>> write "SELECT" statement with clause "EXCEPT":
>> "SELECT * [EXCEPT col1 [,col2]] FROM ..."
>>
>> It's easier to write "except" for one or two columns out of all (*) than to write
>> the names of all columns besides one or two.
>> What do you think about it?
>>
>> Best regards
>> Stano Motycka
>>
>>


Re: Upgrade to 12.2 using same data directory

2020-02-25 Thread Laurenz Albe
On Tue, 2020-02-25 at 11:58 +, Mihalidesová Jana wrote:
> Is there a possibility to upgrade from 10.6 to 12.2 using the exact same data
> directory?
> The first problem is with initializing the “new” cluster. Then the pg_upgrade check fails.
> 
> ./pg_upgrade -b $OLD/bin -B $NEW/bin -d $OLD -D $OLD -k -c

No, but that is not necessary.

- Create a new cluster on the same file system as the old one.

- Run "pg_upgrade" with the -k option.

- Remove the old cluster and move the new one in the place of
  the old one.

- Start the server (with the new binaries of course).

Don't forget to ANALYZE.
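As a rough shell sketch of those steps (all paths and version numbers below are assumptions, not taken from the thread; both directories must be on the same file system for -k, and you should have a backup before the rm):

```shell
# 1. Create a new cluster on the same file system as the old one
/usr/pgsql-12/bin/initdb -D /data/pgdata.new

# 2. Run pg_upgrade in link mode (-k): data files are hard-linked,
#    not copied, so this is fast and needs no extra space
/usr/pgsql-12/bin/pg_upgrade \
    -b /usr/pgsql-10/bin -B /usr/pgsql-12/bin \
    -d /data/pgdata -D /data/pgdata.new -k

# 3. Remove the old cluster and move the new one into its place
rm -rf /data/pgdata
mv /data/pgdata.new /data/pgdata

# 4. Start with the new binaries, then rebuild planner statistics
/usr/pgsql-12/bin/pg_ctl -D /data/pgdata start
/usr/pgsql-12/bin/vacuumdb --all --analyze-in-stages
```

After a link-mode upgrade the old cluster must never be started again, which is another reason to remove it right away.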

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]

2020-02-25 Thread Miles Elam
How do you see this syntax working in a JOIN query?

SELECT x.* EXCEPT x.col1, x.col2, y.col1
FROM tablex AS x
  LEFT JOIN tabley AS y;

The column(s) you want to exclude become ambiguous. Parentheses?

SELECT x.* EXCEPT (x.col1, x.col2), y.col1
FROM tablex AS x
  LEFT JOIN tabley AS y;

Could work, but this is encouraging the use of the wildcard selector, which
I'm not sure is a productive or maintainable goal. In exchange for
flexibility, you've added a non-trivial amount of comprehension complexity.
I'm not a big fan of the wildcard selector except in the most trivial cases
and even then only as part of development toward a final query with all
columns specified. Then again I try not to have tables with hundreds of
columns (or even tens in most cases), so my own use cases may bias
me. Personally I just don't like queries where I cannot clearly see what is
being returned to me. Anything that makes that ambiguity more popular will
be viewed with a skeptical eye.
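In the meantime, the explicit column list can be generated from the catalog rather than typed by hand; a minimal sketch using psql (table and column names are made up for illustration):

```shell
# Emit "SELECT <every column of tablex except col1, col2> FROM tablex;"
# by reading the column names from information_schema.
psql -At <<'SQL'
SELECT 'SELECT '
       || string_agg(quote_ident(column_name), ', ' ORDER BY ordinal_position)
       || ' FROM tablex;'
FROM information_schema.columns
WHERE table_name = 'tablex'
  AND column_name NOT IN ('col1', 'col2');
SQL
```

The generated statement can then be pasted into the query, which keeps the final query fully explicit about what it returns.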


On Tue, Feb 25, 2020 at 2:18 AM Stanislav Motyčka <
stanislav.moty...@gmail.com> wrote:

> Hello,
>
> Sometimes (for tables with many columns) it would be better and easier to
> write "SELECT" statement with clause "EXCEPT":
> "SELECT * [EXCEPT col1 [,col2]] FROM ..."
>
> It's easier to write "except" for one or two columns out of all (*) than to
> write the names of all columns besides one or two.
> What do you think about it?
>
> Best regards
> Stano Motycka
>
>


Re: Backup & Restore

2020-02-25 Thread Stephen Frost
Greetings,

* sivapostg...@yahoo.com (sivapostg...@yahoo.com) wrote:
> Hi, can you suggest a good backup solution for a Windows installation? It looks
> like the two suggested ones [pgbarman, pgbackrest] work only on Linux.

While it's certainly something we'd like to do, we haven't ported
pgbackrest to Windows yet.  That said, it's now entirely written in
reasonably portable C and so it shouldn't be too much effort to port it.

Until that's done though, and I can't say exactly when that port will
happen, your best option is probably pg_basebackup.
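For what it's worth, a base backup might look something like this (host, user, and output directory are placeholders; -Ft -z writes compressed tar archives and -X stream includes the WAL needed to restore):

```shell
pg_basebackup -h db.example.com -U backup_user -D "C:\backups\base_20200225" -Ft -z -X stream -P
```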

Of course, I'd strongly recommend you consider running PG on Linux
instead, particularly for a production environment.

Thanks,

Stephen




Re: Upgrade to 12.2 using same data directory

2020-02-25 Thread Paul Förster
Hi J,

you can do the following:

mkdir /alias/data/dbname-new
initdb -D /alias/data/dbname-new
pg_upgrade \
-b /old/bin/dir \
-B /new/bin/dir \
-d /alias/data/dbname \
-D /alias/data/dbname-new \
-k

After the successful upgrade:

rm -rf /alias/data/dbname
mv /alias/data/dbname-new /alias/data/dbname

This is how I always do it. You can't initialize a new database cluster in 
the same data directory as the old one(!!), which makes sense, but that is 
exactly what you are trying to do and why it fails for you. Doing it the way 
described above has these advantages:

a) you don't need double the disk space, which matters especially in a 
commercial environment and would be ugly if pg_upgrade had to copy terabytes 
of data,

b) it saves time because there is no file copy, and

c) you keep the data directory naming conventions that you are used to.

Also see:

initdb --help
pg_upgrade --help

You can easily put this into a script. My last migrations (11.5 to 12.1) took 
about 18 seconds each. An interruption like that can usually be communicated 
to the users.

If they're used to Oracle, they're used to interruptions in the range 90-120 
minutes. :-D So there should be no problems with about a minute! But that of 
course requires proper planning and proper scripting. :-) Practice that 
(including the scripting!) on an identical test database setup in advance.

Hope this helps,
Paul


> On 25. Feb, 2020, at 13:52, Mihalidesová Jana  
> wrote:
> 
> I think it’s strange to change the data directory $PATH every time I upgrade 
> the software, when the data are the same. I have a specific naming convention 
> not using a db version.
>  
> /alias/data/
>  
> J
>  
> From: Josef Šimánek  
> Sent: Tuesday, February 25, 2020 1:05 PM
> To: Mihalidesová Jana 
> Cc: pgsql-general@lists.postgresql.org
> Subject: Re: Upgrade to 12.2 using same data directory
>  
> I think that's not possible. What's the reason to reuse old directory?
>  
> út 25. 2. 2020 v 12:59 odesílatel Mihalidesová Jana 
>  napsal:
> Hi,
>  
> Is there a possibility to upgrade from 10.6 to 12.2 using the exact same data 
> directory? The first problem is with initializing the “new” cluster; then the 
> pg_upgrade check fails.
>  
>  
> ./pg_upgrade -b $OLD/bin -B $NEWd/bin -d $OLD -D $OLD -k -c
>  
>  
> Thx,
> Jana
> 
> 
> Obsah této zprávy má výlučně komunikační charakter. Nepředstavuje návrh na 
> uzavření smlouvy či na její změnu ani přijetí případného návrhu. Smlouvy či 
> jejich změny jsou společností CETIN a.s. uzavírány v písemné formě nebo v 
> podobě a postupem podle příslušných všeobecných podmínek společnosti CETIN 
> a.s., a pokud jsou dohodnuty všechny náležitosti. Smlouvy jsou uzavírány 
> oprávněnou osobou na základě písemného pověření. Smlouvy o smlouvě budoucí 
> jsou uzavírány výhradně v písemné formě, vlastnoručně podepsané nebo s 
> uznávaným elektronickým podpisem. Podmínky, za nichž CETIN a.s. přistupuje k 
> jednání o smlouvě a jakými se řídí, jsou dostupné zde.
> 
> The content of this message is intended for communication purposes only. It 
> does neither represent any contract proposal, nor its amendment or acceptance 
> of any potential contract proposal. CETIN a.s. concludes contracts or 
> amendments thereto in a written form or in the form and the procedure in 
> accordance with relevant general terms and conditions of CETIN a.s., if all 
> requirements are agreed. Contracts are concluded by an authorized person 
> entitled on the basis of a written authorization. Contracts on a future 
> contract are concluded solely in a written form, self-signed or signed by 
> means of an advanced electronic signature. The conditions under which CETIN 
> a.s. negotiates contracts and under which it proceeds are available here.
> 

Re: Upgrade to 12.2 using same data directory

2020-02-25 Thread Sushant Pawar
Yes, initdb will not work if the directory already contains cluster data files.

You will have to initialize the Postgres 12 cluster in a new data directory
path; afterwards you can run pg_upgrade either normally or with the in-place
link (-k) option. Just a note: make sure you have a valid backup of the
existing cluster before you start the upgrade.

Initialize the new cluster:
initdb -D *$PGDATA12*

and then

./pg_upgrade -b $OLD/bin -B $NEW/bin -d $PG_DATA -D *$PGDATA12* -k -c


Best Regards,

*Sushant Pawar | Database Solution Consultant*

*ASHNIK TECHNOLOGY SOLUTIONS PVT. LTD.*

Skype: sush_531 | T: +91 9769559995 | www.ashnik.com



On Tue, Feb 25, 2020 at 6:25 PM Mihalidesová Jana <
jana.mihalides...@cetin.cz> wrote:

> Hi,
>
>
>
> The -k option doesn’t work. The first problem is initializing the new cluster
> in the same directory.
>
>
>
>
>
> initdb: directory "$PATH" exists but is not empty
>
> If you want to create a new database system, either remove or empty
>
> the directory "$PATH" or run initdb
>
> with an argument other than "$PATH".
>
>
>
>
>
> And if I try to run the upgrade check using the same data directory
>
>
>
>
>
> postgres@celpgsqld401.server.cetin:/pgsql/bin/12.2_nipjd/bin>
> ./pg_upgrade -b $OLD/bin -B $NEW/bin -d $PG_DATA -D $PG_DATA -k -c
>
> Performing Consistency Checks
>
> -
>
> Checking cluster versions
>
> This utility can only upgrade to PostgreSQL version 12.
>
> Failure, exiting
>
>
>
>
>
> Jana
>
>
>
> *From:* Sushant Pawar 
> *Sent:* Tuesday, February 25, 2020 1:23 PM
> *To:* Mihalidesová Jana 
> *Cc:* Josef Šimánek ;
> pgsql-general@lists.postgresql.org
> *Subject:* Re: Upgrade to 12.2 using same data directory
>
>
>
> Hi Mihalidesová,
>
>
>
> It should work with the -k option. Can you share the error that you
> received? You can execute pg_upgrade with the -c option to identify any
> possible manual changes required.
>
>
>
>
> Best Regards,
>
> *Sushant Pawar | Database Solution Consultant*
>
> *ASHNIK TECHNOLOGY SOLUTIONS PVT. LTD.*
>
> Skype: sush_531 | T: +91 9769559995 | www.ashnik.com
>
>
>
>
>
>
>
> On Tue, Feb 25, 2020 at 5:35 PM Josef Šimánek 
> wrote:
>
> I think that's not possible. What's the reason to reuse old directory?
>
>
>
> út 25. 2. 2020 v 12:59 odesílatel Mihalidesová Jana <
> jana.mihalides...@cetin.cz> napsal:
>
> Hi,
>
>
>
> Is there a possibility to upgrade from 10.6 to 12.2 using the exact same data
> directory? The first problem is with initializing the “new” cluster; then the
> pg_upgrade check fails.
>
>
>
>
>
> ./pg_upgrade -b $OLD/bin -B $NEWd/bin -d $OLD -D $OLD -k -c
>
>
>
>
>
> Thx,
>
> Jana
>
>
>

Re: Upgrade to 12.2 using same data directory

2020-02-25 Thread Ray O'Donnell
On 25/02/2020 12:52, Mihalidesová Jana wrote:
> I think it’s strange to change the data directory $PATH every time I
> upgrade the software, when the data are the same. I have a specific
> naming convention not using a db version.

There's a lot to recommend keeping an untouched copy of your old data,
in case something goes wrong.

Ray.

-- 
Raymond O'Donnell // Galway // Ireland
r...@rodonnell.ie




RE: Upgrade to 12.2 using same data directory

2020-02-25 Thread Mihalidesová Jana
Yes, that’s what I thought, but then the pre-upgrade check will have to be done
with downtime.
Ok,
Thanks.

J


From: Josef Šimánek 
Sent: Tuesday, February 25, 2020 1:54 PM
To: Mihalidesová Jana 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Upgrade to 12.2 using same data directory

I think it should be possible to rename the old cluster data dir (you can use
some suffix) and reuse your current directory name for the upgraded cluster.

something similar to (not tested):
mv /alias/data/ /alias/data/.old
./pg_upgrade -b $OLD/bin -B $NEWd/bin -d /alias/data/.old -D 
/alias/data/ -k -c

út 25. 2. 2020 v 13:52 odesílatel Mihalidesová Jana 
mailto:jana.mihalides...@cetin.cz>> napsal:
I think it’s strange to change the data directory $PATH every time I upgrade 
the software, when the data are the same. I have a specific naming convention 
not using a db version.

/alias/data/

J

From: Josef Šimánek mailto:josef.sima...@gmail.com>>
Sent: Tuesday, February 25, 2020 1:05 PM
To: Mihalidesová Jana 
mailto:jana.mihalides...@cetin.cz>>
Cc: 
pgsql-general@lists.postgresql.org
Subject: Re: Upgrade to 12.2 using same data directory

I think that's not possible. What's the reason to reuse old directory?

út 25. 2. 2020 v 12:59 odesílatel Mihalidesová Jana 
mailto:jana.mihalides...@cetin.cz>> napsal:
Hi,

Is there a possibility to upgrade from 10.6 to 12.2 using the exact same data 
directory? The first problem is with initializing the “new” cluster; then the 
pg_upgrade check fails.


./pg_upgrade -b $OLD/bin -B $NEWd/bin -d $OLD -D $OLD -k -c


Thx,
Jana



RE: Upgrade to 12.2 using same data directory

2020-02-25 Thread Mihalidesová Jana
Hi,

The -k option doesn’t work. The first problem is initializing the new cluster
in the same directory.


initdb: directory "$PATH" exists but is not empty
If you want to create a new database system, either remove or empty
the directory "$PATH" or run initdb
with an argument other than "$PATH".


And if I try to run the upgrade check using the same data directory


postgres@celpgsqld401.server.cetin:/pgsql/bin/12.2_nipjd/bin> ./pg_upgrade -b 
$OLD/bin -B $NEW/bin -d $PG_DATA -D $PG_DATA -k -c
Performing Consistency Checks
-
Checking cluster versions
This utility can only upgrade to PostgreSQL version 12.
Failure, exiting


Jana

From: Sushant Pawar 
Sent: Tuesday, February 25, 2020 1:23 PM
To: Mihalidesová Jana 
Cc: Josef Šimánek ; pgsql-general@lists.postgresql.org
Subject: Re: Upgrade to 12.2 using same data directory

Hi Mihalidesová,

It should work with the -k option. Can you share the error that you received? 
You can execute pg_upgrade with the -c option to identify any possible manual 
changes required.



Best Regards,

Sushant Pawar | Database Solution Consultant

ASHNIK TECHNOLOGY SOLUTIONS PVT. LTD.

Skype: sush_531 | T: +91 9769559995 | 
www.ashnik.com



On Tue, Feb 25, 2020 at 5:35 PM Josef Šimánek 
mailto:josef.sima...@gmail.com>> wrote:
I think that's not possible. What's the reason to reuse old directory?

út 25. 2. 2020 v 12:59 odesílatel Mihalidesová Jana 
mailto:jana.mihalides...@cetin.cz>> napsal:
Hi,

Is there a possibility to upgrade from 10.6 to 12.2 using the exact same data 
directory? The first problem is with initializing the “new” cluster; then the 
pg_upgrade check fails.


./pg_upgrade -b $OLD/bin -B $NEWd/bin -d $OLD -D $OLD -k -c


Thx,
Jana




Re: Upgrade to 12.2 using same data directory

2020-02-25 Thread Josef Šimánek
I think it should be possible to rename the old cluster data dir (you can use
some suffix) and reuse your current directory name for the upgraded cluster.

something similar to (not tested):
mv /alias/data/ /alias/data/.old
./pg_upgrade -b $OLD/bin -B $NEWd/bin -d /alias/data/.old -D
/alias/data/ -k -c

út 25. 2. 2020 v 13:52 odesílatel Mihalidesová Jana <
jana.mihalides...@cetin.cz> napsal:

> I think it’s strange to change the data directory $PATH every time I
> upgrade the software, when the data are the same. I have a specific naming
> convention not using a db version.
>
>
>
> /alias/data/
>
>
>
> J
>
>
>
> *From:* Josef Šimánek 
> *Sent:* Tuesday, February 25, 2020 1:05 PM
> *To:* Mihalidesová Jana 
> *Cc:* pgsql-general@lists.postgresql.org
> *Subject:* Re: Upgrade to 12.2 using same data directory
>
>
>
> I think that's not possible. What's the reason to reuse old directory?
>
>
>
> út 25. 2. 2020 v 12:59 odesílatel Mihalidesová Jana <
> jana.mihalides...@cetin.cz> napsal:
>
> Hi,
>
>
>
> Is there a possibility to upgrade from 10.6 to 12.2 using the exact same data
> directory? The first problem is with initializing the “new” cluster; then the
> pg_upgrade check fails.
>
>
>
>
>
> ./pg_upgrade -b $OLD/bin -B $NEWd/bin -d $OLD -D $OLD -k -c
>
>
>
>
>
> Thx,
>
> Jana
>
>
>


RE: Upgrade to 12.2 using same data directory

2020-02-25 Thread Mihalidesová Jana
I think it’s strange to change the data directory $PATH every time I upgrade 
the software, when the data are the same. I have a specific naming convention 
not using a db version.

/alias/data/

J

From: Josef Šimánek 
Sent: Tuesday, February 25, 2020 1:05 PM
To: Mihalidesová Jana 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Upgrade to 12.2 using same data directory

I think that's not possible. What's the reason to reuse old directory?

út 25. 2. 2020 v 12:59 odesílatel Mihalidesová Jana 
mailto:jana.mihalides...@cetin.cz>> napsal:
Hi,

Is there a possibility to upgrade from 10.6 to 12.2 using the exact same data 
directory? The first problem is with initializing the “new” cluster; then the 
pg_upgrade check fails.


./pg_upgrade -b $OLD/bin -B $NEWd/bin -d $OLD -D $OLD -k -c


Thx,
Jana




Re: Upgrade to 12.2 using same data directory

2020-02-25 Thread Sushant Pawar
Hi Mihalidesová,

It should work with the -k option. Can you share the error that you received?
You can execute pg_upgrade with the -c option to identify any possible manual
changes required.


Best Regards,

*Sushant Pawar | Database Solution Consultant*

*ASHNIK TECHNOLOGY SOLUTIONS PVT. LTD.*

Skype: sush_531 | T: +91 9769559995 | www.ashnik.com



On Tue, Feb 25, 2020 at 5:35 PM Josef Šimánek 
wrote:

> I think that's not possible. What's the reason to reuse old directory?
>
> út 25. 2. 2020 v 12:59 odesílatel Mihalidesová Jana <
> jana.mihalides...@cetin.cz> napsal:
>
>> Hi,
>>
>>
>>
>> Is there a possibility to upgrade from 10.6 to 12.2 using the exact same data
>> directory? The first problem is with initializing the “new” cluster; then
>> the pg_upgrade check fails.
>>
>>
>>
>>
>>
>> ./pg_upgrade -b $OLD/bin -B $NEWd/bin -d $OLD -D $OLD -k -c
>>
>>
>>
>>
>>
>> Thx,
>>
>> Jana
>>
>>
>>
>


Re: Upgrade to 12.2 using same data directory

2020-02-25 Thread Josef Šimánek
I think that's not possible. What's the reason to reuse the old directory?

út 25. 2. 2020 v 12:59 odesílatel Mihalidesová Jana <
jana.mihalides...@cetin.cz> napsal:

> Hi,
>
>
>
> Is there possibility to upgrade from 10.6 to 12.2 using exact same data
> directory? First problem is with initialing of “new” cluster. Then
> pg_upgrade check fail.
>
>
>
>
>
> ./pg_upgrade -b $OLD/bin -B $NEWd/bin -d $OLD -D $OLD -k -c
>
>
>
>
>
> Thx,
>
> Jana
>
>
>


Upgrade to 12.2 using same data directory

2020-02-25 Thread Mihalidesová Jana
Hi,

Is there a possibility to upgrade from 10.6 to 12.2 using the exact same data
directory? The first problem is with initializing the "new" cluster; then the
pg_upgrade check fails.


./pg_upgrade -b $OLD/bin -B $NEWd/bin -d $OLD -D $OLD -k -c


Thx,
Jana



The content of this message is intended for communication purposes only. It 
does neither represent any contract proposal, nor its amendment or acceptance 
of any potential contract proposal. CETIN a.s. concludes contracts or 
amendments thereto in a written form or in the form and the procedure in 
accordance with relevant general terms and conditions of CETIN a.s., if all 
requirements are agreed. Contracts are concluded by an authorized person 
entitled on the basis of a written authorization. Contracts on a future 
contract are concluded solely in a written form, self-signed or signed by means 
of an advanced electronic signature. The conditions under which CETIN a.s. 
negotiates contracts and under which it proceeds are available 
here.


Re: aws sct/dms versus ora2pg

2020-02-25 Thread Gilles Darold
Le 22/02/2020 à 02:07, Ayub M a écrit :
> I would like to get suggestions and feedback on aws sct/dms vs ora2pg
> for an Oracle (on-prem) to PostgreSQL (aws rds) migration project.
>
> One big difference between them I see is the fact that dms supports
> incremental loads from oracle to postgres (dont think ora2pg supports
> that, but I could be wrong). 
>
> Someone who researched or used these tools help list pros and cons of
> each approach and which is highly recommended.


Hi,


I don't know a lot about the AWS migration tools, but they seem only able to
migrate DDL and data; Ora2Pg can do much more. Regarding incremental data
migration, Ora2Pg has an experimental mode that adds this feature; see
DATADIFF in the documentation and in ora2pg.conf. Unfortunately I have not
really used this feature myself yet, but help with testing and improving it
is welcome. Although, as Ora2Pg is a spare-time project, I guess that with
Amazon's financial means their product is far better. If you or someone else
gives both solutions a try, feel free to send feedback.


Regards,

-- 
Gilles Darold
http://www.darold.net/



Re: a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]

2020-02-25 Thread Fabio Ugo Venchiarutti
That makes sense; however, for my general use case I'd also like the
ability to mark some columns as not matchable by `SELECT * FROM <table>`
and `TABLE <table>` at table-definition time, without having to create
dedicated views (think of the way system attributes such as tableoid,
cmax, cmin ... are handled).




Typical use case: manual inspection of rows containing a hefty payload
field and some metadata ones; the payload more often than not isn't the
bit I'm interested in, but its size dominates the output, making it hard
to read (granted, psql has formatting options to handle that, but having
a sane default at the table level would help a lot).




One may argue that such behaviour breaks the principle of least
surprise, so all of this should probably be strictly opt-in (and perhaps
queries could output a hint that such hidden columns exist).




Regards


F



On 25/02/2020 10:30, Josef Šimánek wrote:

Just to mention, a similar concept can be found in Google BigQuery.

https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#select-except
https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#select-replace

út 25. 2. 2020 v 11:18 odesílatel Stanislav Motyčka 
mailto:stanislav.moty...@gmail.com>> napsal:


Hello,

Sometimes (for tables with many columns) it would be better and
easier to write "SELECT" statement with clause "EXCEPT":
"SELECT * [EXCEPT col1 [,col2]] FROM ..."

It's easier to write "except" one or two columns from all (*) as to
write names of all columns besides one or two.
What do you think about it?

Best regards
Stano Motycka



--
Regards

Fabio Ugo Venchiarutti
OSPCFC Network Engineering Dpt.
Ocado Technology




Re: a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]

2020-02-25 Thread Josef Šimánek
Just to mention, a similar concept can be found in Google BigQuery.

https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#select-except
https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#select-replace

út 25. 2. 2020 v 11:18 odesílatel Stanislav Motyčka <
stanislav.moty...@gmail.com> napsal:

> Hello,
>
> Sometimes (for tables with many columns) it would be better and easier to
> write "SELECT" statement with clause "EXCEPT":
> "SELECT * [EXCEPT col1 [,col2]] FROM ..."
>
> It's easier to write "except" one or two columns from all (*) as to write
> names of all columns besides one or two.
> What do you think about it?
>
> Best regards
> Stano Motycka
>
>


a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]

2020-02-25 Thread Stanislav Motyčka
Hello,

Sometimes (for tables with many columns) it would be better and easier
to write a "SELECT" statement with an "EXCEPT" clause:
"SELECT * [EXCEPT col1 [,col2]] FROM ..."

It's easier to "except" one or two columns from all (*) than to write
the names of all columns besides one or two.
What do you think about it?

Best regards
Stano Motycka
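
Until such syntax exists, one psql-level workaround is to generate the column
list dynamically and execute the generated statement with \gexec (the table
and column names below are hypothetical):

```sql
-- Build a SELECT listing every column of big_table except payload,
-- then execute the generated statement with psql's \gexec.
SELECT format('SELECT %s FROM big_table',
              string_agg(quote_ident(column_name), ', '
                         ORDER BY ordinal_position))
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name   = 'big_table'
  AND column_name NOT IN ('payload');
\gexec
```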



Re: Replication: slave server has 3x size of production server?

2020-02-25 Thread Jehan-Guillaume de Rorthais
On Sat, 22 Feb 2020 19:23:05 +
Edson Richter  wrote:
[...]
> Actually, standby server is sending wals to a backup (barman) server:
> 
> archive_mode = always   # enables archiving; off, on, or always
> (change requires restart) archive_command = 'rsync -e "ssh -2 -C -p 2022" -az
> %p barman@192.168.0.2:/dados/barman/dbcluster/incoming/%f'
> 
> 
> The files are about 7 months old.

Did you check the return code of your archive_command? 

Did you check the log produced by your archive_command and postmaster?

How many files with a ".ready" extension are in
"$PGDATA/pg_xlog/archive_status/"?

Can you confirm there are no missing WAL segments between the oldest and
the newest one in "$PGDATA/pg_xlog", in alphanumeric order?
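
The checks above can be sketched as a small shell helper (a sketch only; the
pg_xlog naming matches the pre-10 layout referenced in the thread, with
pg_wal tried as a fallback):

```shell
# Count WAL segments still waiting to be archived ($1 = data directory).
# A steadily growing count suggests the archive_command keeps failing.
pending_wal() {
  waldir="$1/pg_xlog"                      # PostgreSQL <= 9.6
  [ -d "$waldir" ] || waldir="$1/pg_wal"   # PostgreSQL >= 10
  ls "$waldir/archive_status" 2>/dev/null | grep -c '\.ready$'
}
```

Usage: `pending_wal /var/lib/pgsql/data`.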




Re: Postgres 12.1 : UPPER() in WHERE clause restarts server

2020-02-25 Thread Nick Renders

Hi Tom,

1. we used the EDB installer.

2. turning JIT off did make the problem go away. So I guess this was 
causing the Postgres process to crash all along.
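
For reference, a sketch of making that workaround persistent rather than
per-session (assumes superuser access; jit can equally be set in
postgresql.conf):

```sql
-- Disable JIT cluster-wide and reload the configuration.
ALTER SYSTEM SET jit = off;
SELECT pg_reload_conf();
```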


Thanks for the help,

Nick


On 24 Feb 2020, at 16:24, Tom Lane wrote:


"Nick Renders"  writes:
> We have set up a new test environment running PostgreSQL v12.2 on
> macOS 10.14 and the issue is still there.


Some nearby threads prompt these two questions:

1. Are you using your own build, or is this from EDB's installer?

2. If the latter, does turning JIT off ("set jit = off") make the
problem go away?

There is as yet no "native" support for --with-llvm on macOS,
ie Apple themselves don't provide sufficient support for that.
EDB seem to have hacked up something that sort of works, but
only sort of.

regards, tom lane





Re: LDAP with TLS is taking more time in Postgresql 11.5

2020-02-25 Thread Laurenz Albe
On Tue, 2020-02-25 at 01:20 +0530, Mani Sankar wrote:
> We have recently upgraded our Postgres servers from version 9.4 to 11.5.
> Post upgrade we see a delay in authentication.
> 
> The issue is that when we use ldaptls=1, authentication takes 1 second or
> more, but if I disable ldaptls it authenticates within milliseconds.
> 
> In 9.4, even with ldaptls enabled, it authenticated within milliseconds.
> Any idea why we are facing this issue?

I would use a packet sniffer like Wireshark to examine the message flow and see 
where the time is spent.
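
A capture along those lines might look like the following (the interface name
is an assumption; 389 is the plain/STARTTLS LDAP port, 636 is ldaps):

```shell
# Capture LDAP traffic during a test login, then open the file in Wireshark.
tcpdump -i eth0 -w ldap_auth.pcap 'tcp port 389 or tcp port 636'
```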

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Trigger

2020-02-25 Thread Sonam Sharma
I have a trigger that, like many other triggers, fires after UPDATE and
checks a field of the OLD row. For some reason this trigger throws this
error:



ERROR: record "old" has no field "ivo_sts_cd"
CONTEXT: SQL statement "UPDATE dbo.T8071_CAI_IVO_HDR SET IVO_STS_CD = 1 where
T616_VBU_NBR=old.T616_VBU_NBR and T617_FNC_TYP_CD=old.T617_FNC_TYP_CD and
T8071_CAI_IVO_ID=old.T8071_CAI_IVO_ID and T8071_ADD_DM= old.T8071_ADD_DM and
old.ivo_sts_cd != 10 and old.ivo_sts_cd != 3"
PL/pgSQL function dbo.t8096_upd_func02() line 9 at SQL statement
SQL state: 42703

This is the trigger:

t8096_upd_trig02 AFTER UPDATE ON dbo.t8096_cai_ivo_exc FOR EACH ROW WHEN
(old.exc_rsd_idc IS DISTINCT FROM new.exc_rsd_idc) EXECUTE PROCEDURE
dbo.t8096_upd_func02()



Below is the function:



CREATE OR REPLACE FUNCTION pps.t8096_upd_func02()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
DECLARE
    exc_count int;
BEGIN
    SELECT count(*) INTO exc_count
    FROM pps.T8096_CAI_IVO_EXC
    WHERE T616_VBU_NBR = old.T616_VBU_NBR
      AND T617_FNC_TYP_CD = old.T617_FNC_TYP_CD
      AND T8071_CAI_IVO_ID = old.T8071_CAI_IVO_ID
      AND T8071_ADD_DM = old.T8071_ADD_DM
      AND EXC_RSD_IDC = 'N'
      AND T8095_EXC_TYP_CD NOT IN (8, 9, 10, 11);

    IF exc_count = 0 THEN
        UPDATE pps.T8071_CAI_IVO_HDR
        SET IVO_STS_CD = 1
        WHERE T616_VBU_NBR = old.T616_VBU_NBR
          AND T617_FNC_TYP_CD = old.T617_FNC_TYP_CD
          AND T8071_CAI_IVO_ID = old.T8071_CAI_IVO_ID
          AND T8071_ADD_DM = old.T8071_ADD_DM
          AND old.ivo_sts_cd != 10
          AND old.ivo_sts_cd != 3;
    END IF;

    RETURN NEW;
END
$function$;


Can someone please point out what I am missing?


Thanks,

Sonam
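
A hedged reading of the error above: the trigger fires on
dbo.t8096_cai_ivo_exc, so OLD is a row of that table, and that table
apparently has no ivo_sts_cd column; the column belongs to
T8071_CAI_IVO_HDR. One fix sketch (identifiers taken from the function
above) is to test the target table's own column instead of OLD:

```sql
-- Inside t8096_upd_func02(): reference the header table's own status
-- column rather than OLD, which only exposes t8096_cai_ivo_exc columns.
UPDATE pps.T8071_CAI_IVO_HDR h
   SET IVO_STS_CD = 1
 WHERE h.T616_VBU_NBR     = old.T616_VBU_NBR
   AND h.T617_FNC_TYP_CD  = old.T617_FNC_TYP_CD
   AND h.T8071_CAI_IVO_ID = old.T8071_CAI_IVO_ID
   AND h.T8071_ADD_DM     = old.T8071_ADD_DM
   AND h.IVO_STS_CD NOT IN (3, 10);
```

Whether the intent was to filter on the header's status or on a column of
the triggering table is something only the schema owner can confirm.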