Re: pg12 rc1 on CentOS8 depend python2

2019-10-03 Thread keisuke kuroda
> Users of these (now contrib) modules need to have
> postgresql12-plpython3 installed anyway, so it's unlikely you'd be
> breaking anyone's installation.

I agree.
To use these extensions (hstore_plpython, jsonb_plpython, and ltree_plpython),
we need to install plpythonu anyway.
I don't think it would be a problem to move these extensions to the
plpython package.

Using the rpm "--nodeps" option, PostgreSQL 12 can be installed on CentOS 8
even though python2 is not installed.

# rpm -ivh --nodeps *
   1:postgresql12-libs-12.0-1PGDG.rhel#       [ 20%]
   2:postgresql12-12.0-1PGDG.rhel8#           [ 40%]
   3:postgresql12-contrib-12.0-1PGDG.r#       [ 60%]
   4:postgresql12-server-12.0-1PGDG.rh#       [ 80%]
   5:postgresql12-docs-12.0-1PGDG.rhel#       [100%]

plpython could be installed in the same way.

# rpm -ivh --nodeps postgresql12-plpython-12.0-1PGDG.rhel8.x86_64.rpm
   1:postgresql12-plpython-12.0-1PGDG.#       [100%]

Of course, "CREATE EXTENSION plpythonu" has caused an ERROR because CentOS8
isn't installed python2.

postgres=# CREATE EXTENSION plpythonu;
ERROR:  could not load library "/usr/pgsql-12/lib/plpython2.so":
libpython2.7.so.1.0: cannot open shared object file: No such file or directory
postgres=#

"CREATE EXTENSION hstore_plpython2u" has caused an ERROR because required
"plpython2u".

postgres=# CREATE EXTENSION hstore_plpython2u;
ERROR:  required extension "plpython2u" is not installed
HINT:  Use CREATE EXTENSION ... CASCADE to install required extensions too.


By the way, "python2-libs is needed" error has occurred when
"postgresql12-12.0-1PGDG.rhel8.x86_64.rpm" and
"postgresql12-libs-12.0-1PGDG.rhel8.x86_64.rpm" install.

# rpm -ivh postgresql12-12.0-1PGDG.rhel8.x86_64.rpm
postgresql12-libs-12.0-1PGDG.rhel8.x86_64.rpm
error: Failed dependencies:
python2-libs is needed by postgresql12-12rc1-1PGDG.rhel8.x86_64

In this case, I think that python2-libs is not necessary.
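For what it's worth, one way to confirm where the python2 requirement comes
from is to list the package's declared dependencies (a sketch; the output is
abbreviated here and will vary by build):

# rpm -qp --requires postgresql12-12.0-1PGDG.rhel8.x86_64.rpm | grep -i python
python2-libs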


Best Regards.
Keisuke Kuroda


Re: PMChildFlags array

2019-10-03 Thread Alvaro Herrera
On 2019-Oct-03, bhargav kamineni wrote:

> bhargav kamineni  writes:
> > Postgres was rejecting a bunch of connections from a user who has a
> > connection limit set. That was the FATAL error that I could see in the log
> > file.
> >  FATAL,53300,"too many connections for role ""user_app"""
> 
> > db=\du user_app
> >List of roles
> >   Role name   |  Attributes   | Member of
> > --+---+
> >  user_app | No inheritance   +| {application_role}
> >   | 100 connections  +|
> >   | Password valid until infinity |

Was the machine overloaded at the time the problem occurred?

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: PMChildFlags array

2019-10-03 Thread Tom Lane
bhargav kamineni  writes:
>> What was the database doing just before the FATAL line?

> Postgres was rejecting a bunch of connections from a user who has a
> connection limit set. That was the FATAL error that I could see in the log
> file.
>  FATAL,53300,"too many connections for role ""user_app"""

So ... how many is "a bunch"?

Looking at the code, it seems like it'd be possible for a sufficiently
aggressive spawner of incoming connections to reach the
MaxLivePostmasterChildren limit.  While the postmaster would correctly
reject additional connection attempts after that, what it would not do
is ensure that any child slots are left for new parallel worker processes.
So we could hypothesize that the error you're seeing in the log is from
failure to spawn a parallel worker process, due to being out of child
slots.

However, given that max_connections = 500, MaxLivePostmasterChildren()
would be 1000-plus.  This would mean that reaching this condition would
require *at least* 500 concurrent connection-attempts-that-haven't-yet-
been-rejected, maybe well more than that if you didn't have close to
500 legitimately open sessions.  That seems like a lot, enough to suggest
that you've got some pretty serious bug in your client-side logic.
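For reference, the sizing logic in postmaster.c is roughly the following
(paraphrased; the exact formula may differ between versions):

    /* MaxBackends already counts max_connections plus autovacuum and
       background worker slots, hence "1000-plus" for max_connections = 500 */
    int
    MaxLivePostmasterChildren(void)
    {
        return 2 * MaxBackends;
    }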

Anyway, I think it's clearly a bug that canAcceptConnections() thinks the
number of acceptable connections is identical to the number of allowed
child processes; it needs to be less, by the number of background
processes we want to support.  But it seems like a darn hard-to-hit bug,
so I'm not quite sure that that explains your observation.

regards, tom lane




Re: BitmapAnd on correlated column?

2019-10-03 Thread greigwise
Yes, "where and" is a typo.



--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




Re: BitmapAnd on correlated column?

2019-10-03 Thread Rob Sargent



On 10/3/19 3:22 PM, greigwise wrote:

I'm running the following query on Postgres version 10.8:

SELECT  count(*) FROM test_table WHERE and id_column_1 IN (9954, 9690, 9689,
9688) AND
id_column_2 IN (75328, 51448, 48060, 48065, 51803, 51449, 51802, 48064,

-
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html





Is "where and" just a typo?





Re: PG12

2019-10-03 Thread Magnus Hagander
On Thu, Oct 3, 2019 at 6:31 PM Adrian Klaver wrote:

> On 10/3/19 9:27 AM, Igor Neyman wrote:
> > Main page (https://www.postgresql.org/) announces new release, but
> > Downloads for Windows page
> > (https://www.postgresql.org/download/windows/) doesn’t list PG12.
> >
> > Any clarification?
>
> It is available:
>
> https://www.enterprisedb.com/downloads/postgres-postgresql-downloads
>
> Just looks like the community page has not been updated to reflect that.
>

That's the list of platforms that EnterpriseDB have tested their installers
on. So we obviously have to wait for EDB to provide updated lists there --
but until then, you can probably expect it to work just fine.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: PMChildFlags array

2019-10-03 Thread bhargav kamineni
bhargav kamineni  writes:
> Postgres was rejecting a bunch of connections from a user who has a
> connection limit set. That was the FATAL error that I could see in the log
> file.
>  FATAL,53300,"too many connections for role ""user_app"""

> db=\du user_app
>List of roles
>   Role name   |  Attributes   | Member of
> --+---+
>  user_app | No inheritance   +| {application_role}
>   | 100 connections  +|
>   | Password valid until infinity |

> Hm, what's the overall max_connections limit?  (I'm wondering
> in particular if it's more or less than 100.)

It's set to 500:

show max_connections ;
 max_connections
-----------------
 500
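
As a side note, the role's limit and its current usage can be checked with
something like this (a sketch, using the role name from above):

select rolname, rolconnlimit from pg_roles where rolname = 'user_app';
select count(*) from pg_stat_activity where usename = 'user_app';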


On Thu, 3 Oct 2019 at 22:52, Tom Lane  wrote:

> bhargav kamineni  writes:
> > Postgres was rejecting a bunch of connections from a user who has a
> > connection limit set. That was the FATAL error that I could see in the
> > log file.
> >  FATAL,53300,"too many connections for role ""user_app"""
>
> > db=\du user_app
> >List of roles
> >   Role name   |  Attributes   | Member of
> > --+---+
> >  user_app | No inheritance   +| {application_role}
> >   | 100 connections  +|
> >   | Password valid until infinity |
>
> Hm, what's the overall max_connections limit?  (I'm wondering
> in particular if it's more or less than 100.)
>
> regards, tom lane
>


Re: PMChildFlags array

2019-10-03 Thread Tom Lane
bhargav kamineni  writes:
> Postgres was rejecting a bunch of connections from a user who has a
> connection limit set. That was the FATAL error that I could see in the log
> file.
>  FATAL,53300,"too many connections for role ""user_app"""

> db=\du user_app
>List of roles
>   Role name   |  Attributes   | Member of
> --+---+
>  user_app | No inheritance   +| {application_role}
>   | 100 connections  +|
>   | Password valid until infinity |

Hm, what's the overall max_connections limit?  (I'm wondering
in particular if it's more or less than 100.)

regards, tom lane




Re: PG12

2019-10-03 Thread Adrian Klaver

On 10/3/19 9:27 AM, Igor Neyman wrote:
Main page (https://www.postgresql.org/) announces new release, but 
Downloads for Windows page 
(https://www.postgresql.org/download/windows/) doesn’t list PG12.


Any clarification?


It is available:

https://www.enterprisedb.com/downloads/postgres-postgresql-downloads

Just looks like the community page has not been updated to reflect that.



Regards,

Igor Neyman




--
Adrian Klaver
adrian.kla...@aklaver.com




RE: PG12

2019-10-03 Thread Igor Neyman
From: Igor Neyman [mailto:iney...@perceptron.com]
Sent: Thursday, October 03, 2019 12:27 PM
To: pgsql-general@lists.postgresql.org
Subject: PG12

Main page (https://www.postgresql.org/) announces new release, but Downloads 
for Windows page (https://www.postgresql.org/download/windows/) doesn't list 
PG12.
Any clarification?

Regards,
Igor Neyman

Ok, sorry for noise. EnterpriseDB has it.

Igor Neyman


PG12

2019-10-03 Thread Igor Neyman
Main page (https://www.postgresql.org/) announces new release, but Downloads 
for Windows page (https://www.postgresql.org/download/windows/) doesn't list 
PG12.
Any clarification?

Regards,
Igor Neyman



Re: PMChildFlags array

2019-10-03 Thread bhargav kamineni
> Hi,
>
> Observed below errors  in logfile
>
> 2019-09-20 02:00:24.504 UTC,,,99779,,5d73303a.185c3,73,,2019-09-07
> 04:21:14 UTC,,0,FATAL,XX000,"no free slots in PMChildFlags array",""
> 2019-09-20 02:00:24.505 UTC,,,109949,,5d8432b8.1ad7d,1,,2019-09-20
> 02:00:24 UTC,,0,ERROR,58P01,"could not open shared memory segment
> ""/PostgreSQL.2520932"": No such file or directory",""
> 2019-09-20 02:00:24.505 UTC,,,109950,,5d8432b8.1ad7e,1,,2019-09-20
> 02:00:24 UTC,,0,ERROR,58P01,"could not open shared memory segment
> ""/PostgreSQL.2520932"": No such file or directory",""
>

>Postgres version?

PostgreSQL 10.8

>OS and version?

NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"

>What was the database doing just before the FATAL line?

Postgres was rejecting a bunch of connections from a user who has a
connection limit set. That was the FATAL error that I could see in the log
file.
 FATAL,53300,"too many connections for role ""user_app"""

db=# \du user_app
                          List of roles
 Role name |          Attributes           |     Member of
-----------+-------------------------------+--------------------
 user_app  | No inheritance               +| {application_role}
           | 100 connections              +|
           | Password valid until infinity |

> what could be the possible reasons for this to occur and is there any
> chance of database corruption after this event ?

The source(backend/storage/ipc/pmsignal.c ) says:

"/* Out of slots ... should never happen, else postmaster.c messed up */
 elog(FATAL, "no free slots in PMChildFlags array");
"

Someone else will need to comment on what 'messed up' could be

On Thu, 3 Oct 2019 at 18:56, Adrian Klaver wrote:

> On 10/3/19 3:57 AM, bhargav kamineni wrote:
> > Hi,
> >
> > Observed below errors  in logfile
> >
> > 2019-09-20 02:00:24.504 UTC,,,99779,,5d73303a.185c3,73,,2019-09-07
> > 04:21:14 UTC,,0,FATAL,XX000,"no free slots in PMChildFlags
> array",""
> > 2019-09-20 02:00:24.505 UTC,,,109949,,5d8432b8.1ad7d,1,,2019-09-20
> > 02:00:24 UTC,,0,ERROR,58P01,"could not open shared memory segment
> > ""/PostgreSQL.2520932"": No such file or directory",""
> > 2019-09-20 02:00:24.505 UTC,,,109950,,5d8432b8.1ad7e,1,,2019-09-20
> > 02:00:24 UTC,,0,ERROR,58P01,"could not open shared memory segment
> > ""/PostgreSQL.2520932"": No such file or directory",""
> >
>
> Postgres version?
>
> OS and version?
>
> What was the database doing just before the FATAL line?
>
> > what could be the possible reasons for this to occur and is there any
> > chance of database corruption after this event ?
>
> The source(backend/storage/ipc/pmsignal.c ) says:
>
> "/* Out of slots ... should never happen, else postmaster.c messed up */
>  elog(FATAL, "no free slots in PMChildFlags array");
> "
>
> Someone else will need to comment on what 'messed up' could be.
>
> >
> >
> > Regards,
> > Bhargav
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Urgent :: Postgresql streaming replication issue - sync mode

2019-10-03 Thread Laurenz Albe
On Fri, 2019-10-04 at 00:34 +1000, Jason Wang wrote:
> I read this 
> https://www.2ndquadrant.com/en/blog/evolution-fault-tolerance-postgresql-synchronous-commit/
> 
> But don't see why your primary would have more records than the
> standby? 
> 
> If killall was issued before commit returned, that means the
> transaction wasn't completed so yes you would lose records after last
> commit but that's expected; if commit was returned both primary and
> standby should have the transaction.
> 
> Are you sure in your case you end up with primary and standby with
> different records from a single transaction?

PostgreSQL synchronous streaming replication doesn't use anything like
two-phase commit.

1. It commits the transaction locally first, which generates WAL.
2. The WAL gets replicated.
3. As soon as the standby reports success, COMMIT returns.

If there is a failure after the first step completed, the
transaction will be committed locally, but not on the standby.
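
As an aside, whether the standby is currently attached as a synchronous
standby can be checked on the primary (a minimal example; these columns
exist in 9.6):

    SELECT application_name, state, sync_state
    FROM pg_stat_replication;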

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Urgent :: Postgresql streaming replication issue - sync mode

2019-10-03 Thread Jason Wang
I read this
https://www.2ndquadrant.com/en/blog/evolution-fault-tolerance-postgresql-synchronous-commit/

But don't see why your primary would have more records than the standby?

If killall was issued before commit returned, that means the transaction
wasn't completed so yes you would lose records after last commit but that's
expected; if commit was returned both primary and standby should have the
transaction.

Are you sure in your case you end up with primary and standby with
different records from a single transaction?


On Thu, 3 Oct 2019, 9:41 pm Laurenz Albe wrote:

> On Wed, 2019-10-02 at 23:58 +0530, Shital A wrote:
> > We are seeing a strange issue with postgresql streaming application
> > in sync mode.
> >
> > We are using postgresql 9.6. Old version because of some specific
> > requirements.  We have setup cluster with master-standby using
> > pacemaker.
> >
> > When we kill master using killall -9 postgres. The failed primary has
> > few records extra than standby node. We have done setup with
> > synchronous_commit = remote_apply and have set
> > synchronous_standby_names=server_name.
> >
> > As the failed primary is having more data, How is it possible that
> > primary is committing transaction before they were applied on standby
> > with synchronous_commit=remote_apply?
> >
> >
> > Please share if you have any thoughts. Are we missing any config ?
>
> This is to be expected.
>
> The transaction will be committed on the primary, then on the standby,
> and COMMIT will only return once the standby reports success.
>
> But the transaction still has to be committed on the primary first.
>
> If the standby is promoted while COMMIT is waiting for the standby,
> you can end up with the transaction committed on the primary,
> but not yet committed on the standby.
>
> You should use "pg_rewind" on the failed primary if you want to use
> it as new standby for the promoted server.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>
>
>


Re: psql \copy hanging

2019-10-03 Thread Adrian Klaver

On 10/3/19 7:13 AM, Arnaud L. wrote:

On 03/10/2019 at 15:54, Adrian Klaver wrote:

On 10/2/19 11:51 PM, Arnaud L. wrote:
Well, this problem is still bugging me, and this time I've tried with 
a local file. Unfortunately, it did not help.
To further rule out filesystem problems, I first took care to delete 
the target files before copying to it, but it did not help either.


So now I'm quite confident that the problem is either psql or even 
postgresql itself.


Does anyone know of anything I could try to try to fix or debug this ?


Going back to the original thread I noticed it was not specified what
program was being used to run the script in the overnight session.

So what is being used to run the script overnight?



Yes, sorry for having lost the original thread, my mailbox has a quite 
stupid automatic purge schedule...


The script is run in a windows batch file.
Basically, export.bat contains :

SET PGUSER=myuser
SET PGPASSWORD=mypwd
SET PGCLIENTENCODING=UTF8
SET MYPGSERVER=myserverurl
SET MYPGDB=mydatabase
psql -h %MYPGSERVER% -a -f myscript.sql %MYPGDB%


I may have missed it before, but where is the Postgres server located?

Given that this seems to be some sort of resource issue and that the 
below contains commands that are not dependent on each other, have you 
thought of splitting myscript.sql into two scripts? If nothing else, put 
everything except the problem line in one script and the problem line in 
another. That would help nail down whether that line is truly the 
problem or if it is an interaction with running the other 50+ lines.
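
A sketch of that split (the file names here are made up; the point is just
to run the suspect \copy in its own psql invocation):

REM export.bat (sketch)
psql -h %MYPGSERVER% -a -f myscript_rest.sql %MYPGDB%
psql -h %MYPGSERVER% -a -f myscript_problem.sql %MYPGDB%

with myscript_rest.sql holding the other ~59 \copy lines and
myscript_problem.sql holding only the problematic one.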




And myscript.sql contains :
\copy (SELECT * FROM view1) TO '\\server\share\view1.txt'
\copy (SELECT * FROM view2) TO '\\server\share\view2.txt'
...
etc with ~60 views

Today, I've updated the problematic \copy line to be:
COPY (SELECT * FROM view) TO STDOUT \g '\\server\share\view.txt'

I'll keep you informed (even though a successful run is not a guarantee 
of success, because the original script did sometimes work).


Regards
--
Arnaud




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: psql \copy hanging

2019-10-03 Thread Arnaud L.

On 03/10/2019 at 15:54, Adrian Klaver wrote:

On 10/2/19 11:51 PM, Arnaud L. wrote:
Well, this problem is still bugging me, and this time I've tried with a 
local file. Unfortunately, it did not help.
To further rule out filesystem problems, I first took care to delete the 
target files before copying to it, but it did not help either.


So now I'm quite confident that the problem is either psql or even 
postgresql itself.


Does anyone know of anything I could try to try to fix or debug this ?


Going back to the original thread I noticed it was not specified what
program was being used to run the script in the overnight session.

So what is being used to run the script overnight?



Yes, sorry for having lost the original thread, my mailbox has a quite 
stupid automatic purge schedule...


The script is run in a windows batch file.
Basically, export.bat contains :

SET PGUSER=myuser
SET PGPASSWORD=mypwd
SET PGCLIENTENCODING=UTF8
SET MYPGSERVER=myserverurl
SET MYPGDB=mydatabase
psql -h %MYPGSERVER% -a -f myscript.sql %MYPGDB%

And myscript.sql contains :
\copy (SELECT * FROM view1) TO '\\server\share\view1.txt'
\copy (SELECT * FROM view2) TO '\\server\share\view2.txt'
...
etc with ~60 views

Today, I've updated the problematic \copy line to be:
COPY (SELECT * FROM view) TO STDOUT \g '\\server\share\view.txt'

I'll keep you informed (even though a successful run is not a guarantee 
of success, because the original script did sometimes work).


Regards
--
Arnaud




Re: psql \copy hanging

2019-10-03 Thread Adrian Klaver

On 10/2/19 11:51 PM, Arnaud L. wrote:

Hi list,

On 28/08/2019 at 09:43, Luca Ferrari wrote:
I don't want to be pedantic, but I would have tried with a single 
change at a time.
And my bet is: the local file would do the trick (i.e., it is a weird 
share problem).


Well, this problem is still bugging me, and this time I've tried with a 
local file. Unfortunately, it did not help.
To further rule out filesystem problems, I first took care to delete the 
target files before copying to it, but it did not help either.


So now I'm quite confident that the problem is either psql or even 
postgresql itself.


Does anyone know of anything I could try to try to fix or debug this ?


Going back to the original thread I noticed it was not specified what 
program was being used to run the script in the overnight session.


So what is being used to run the script overnight?



Thanks a lot for your help!

Regards
--
Arnaud







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: performance of pg_upgrade "Copying user relation files"

2019-10-03 Thread Adrian Klaver

On 10/2/19 5:47 PM, Adrian Klaver wrote:

On 10/2/19 4:58 PM, Glenn Pierce wrote:
Please reply to list also.
Ccing list.



On Thu, 3 Oct 2019, 00:11 Adrian Klaver wrote:


    On 10/2/19 3:30 PM, Glenn Pierce wrote:
 > I have been trying to upgrade postgres 9.5 to 9.6 with pg_upgrade
 >
 > Everything seems to start fine but when pg_upgrade gets to "Copying
 > user relation files" it takes ages to complete copying.

    This is going to need more information:

    1) What is the pg_upgrade command you are using?


I was using pg_upgrade from the installed postgres 9.6

/usr/pgsql-9.6/bin/pg_upgrade --old-bindir=/usr/pgsql-9.5/bin/ \
  --new-bindir=/usr/pgsql-9.6/bin/ \
  --old-datadir=/var/lib/pgsql/9.5/data/ \
  --new-datadir=/var/lib/pgsql/9.6/data/


What happens if you use the -j option:

https://www.postgresql.org/docs/9.6/pgupgrade.html
-j
--jobs

number of simultaneous processes or threads to use

"The --jobs option allows multiple CPU cores to be used for 
copying/linking of files and to dump and reload database schemas in 
parallel; a good place to start is the maximum of the number of CPU 
cores and tablespaces. This option can dramatically reduce the time to 
upgrade a multi-database server running on a multiprocessor machine."
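
For example, the earlier command with parallelism added (the value 4 is
just illustrative; something near the number of CPU cores is a reasonable
starting point):

/usr/pgsql-9.6/bin/pg_upgrade --old-bindir=/usr/pgsql-9.5/bin/ \
  --new-bindir=/usr/pgsql-9.6/bin/ \
  --old-datadir=/var/lib/pgsql/9.5/data/ \
  --new-datadir=/var/lib/pgsql/9.6/data/ \
  --jobs=4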





    I was following the article "PostgreSQL upgrade on CentOS" by
    Dzmitry Plashchynski on Medium (https://medium.com › postgresql-upg...)



    2) Where are you upgrading from/to?


Trying to upgrade from 9.5 to 9.6
Same machine with both versions installed.


    3) What OS, version?

    Centos 7.4


Thanks


 >
 > ie
 > Each file like
 >
 > /var/lib/pgsql/9.5/data/base/24602/25140
 > is 1G and taking ~5-10 minutes to copy, and that directory is 223G of
 > about 990 files
 >
 > After a couple of hours running pg_upgrade I had to give up.
 >
 > looking a iotop I was seeing
 >
 > 99% IO load caused by [md0_raid1]
 >
 > and only seeing pg_upgrade go to 99% every 3-4 seconds
 >
 > I had no other processes using io or high cpu.
 >
 > Has anyone got any pointers of what could be the issue ?
 >
 > PS: running cp on /var/lib/pgsql/9.5/data/base/24602/25140 to my home
 > directory is instantaneous
 >
 > System is Centos 7
 >
 > Thanks
 >
 >


    --     Adrian Klaver
    adrian.kla...@aklaver.com 







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: PMChildFlags array

2019-10-03 Thread Adrian Klaver

On 10/3/19 3:57 AM, bhargav kamineni wrote:

Hi,

Observed below errors  in logfile

2019-09-20 02:00:24.504 UTC,,,99779,,5d73303a.185c3,73,,2019-09-07 
04:21:14 UTC,,0,FATAL,XX000,"no free slots in PMChildFlags array",""
2019-09-20 02:00:24.505 UTC,,,109949,,5d8432b8.1ad7d,1,,2019-09-20 
02:00:24 UTC,,0,ERROR,58P01,"could not open shared memory segment 
""/PostgreSQL.2520932"": No such file or directory",""
2019-09-20 02:00:24.505 UTC,,,109950,,5d8432b8.1ad7e,1,,2019-09-20 
02:00:24 UTC,,0,ERROR,58P01,"could not open shared memory segment 
""/PostgreSQL.2520932"": No such file or directory",""




Postgres version?

OS and version?

What was the database doing just before the FATAL line?

what could be the possible reasons for this to occur and is there any 
chance of database corruption after this event ?


The source(backend/storage/ipc/pmsignal.c ) says:

"/* Out of slots ... should never happen, else postmaster.c messed up */
elog(FATAL, "no free slots in PMChildFlags array");
"

Someone else will need to comment on what 'messed up' could be.




Regards,
Bhargav




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: PMChildFlags array

2019-10-03 Thread bhargav kamineni
Any suggestions on this ?

On Thu, 3 Oct 2019 at 16:27, bhargav kamineni wrote:

> Hi,
>
> Observed below errors  in logfile
>
> 2019-09-20 02:00:24.504 UTC,,,99779,,5d73303a.185c3,73,,2019-09-07
> 04:21:14 UTC,,0,FATAL,XX000,"no free slots in PMChildFlags array",""
> 2019-09-20 02:00:24.505 UTC,,,109949,,5d8432b8.1ad7d,1,,2019-09-20
> 02:00:24 UTC,,0,ERROR,58P01,"could not open shared memory segment
> ""/PostgreSQL.2520932"": No such file or directory",""
> 2019-09-20 02:00:24.505 UTC,,,109950,,5d8432b8.1ad7e,1,,2019-09-20
> 02:00:24 UTC,,0,ERROR,58P01,"could not open shared memory segment
> ""/PostgreSQL.2520932"": No such file or directory",""
>
>
> what could be the possible reasons for this to occur and is there any
> chance of database corruption after this event ?
>
>
> Regards,
> Bhargav
>
>


Re: Urgent :: Postgresql streaming replication issue - sync mode

2019-10-03 Thread Laurenz Albe
On Wed, 2019-10-02 at 23:58 +0530, Shital A wrote:
> We are seeing a strange issue with postgresql streaming application
> in sync mode. 
> 
> We are using postgresql 9.6. Old version because of some specific
> requirements.  We have setup cluster with master-standby using
> pacemaker. 
> 
> When we kill master using killall -9 postgres. The failed primary has
> few records extra than standby node. We have done setup with
> synchronous_commit = remote_apply and have set
> synchronous_standby_names=server_name. 
> 
> As the failed primary is having more data, How is it possible that
> primary is committing transaction before they were applied on standby
> with synchronous_commit=remote_apply?
> 
> 
> Please share if you have any thoughts. Are we missing any config ?

This is to be expected.

The transaction will be committed on the primary, then on the standby,
and COMMIT will only return once the standby reports success.

But the transaction still has to be committed on the primary first.

If the standby is promoted while COMMIT is waiting for the standby,
you can end up with the transaction committed on the primary,
but not yet committed on the standby.

You should use "pg_rewind" on the failed primary if you want to use
it as new standby for the promoted server.
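
A minimal sketch of that (the data directory and connection string are
placeholders; the old primary must have been shut down cleanly, and
pg_rewind needs wal_log_hints = on or data checksums enabled):

pg_rewind --target-pgdata=/var/lib/postgresql/9.6/main \
          --source-server='host=promoted-standby port=5432 user=postgres dbname=postgres'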

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: partitions vs indexes

2019-10-03 Thread Laurenz Albe
On Wed, 2019-10-02 at 22:09 +0200, Enrico Thierbach wrote:
> would like to convert a table with a primary key into a partitioned 
> setup by a column which is not part of the primary key. Also, a
> column might hold a reference to a parent row. So this is my current
> table setup, slimmed down:
> 
> CREATE TYPE statuses AS ENUM ('ready', 'processing', 'done');
> 
> CREATE TABLE mytable (
>   id  BIGSERIAL PRIMARY KEY NOT NULL, 
>   parent_id   BIGINT REFERENCES mytable(id) ON DELETE CASCADE,  
>   status statuses DEFAULT 'ready'
> );
> Since entries in the table are often frequented when status is not
> 'done', I would like to partition by status.

Have you considered using partial indexes?

CREATE INDEX ... WHERE status <> 'done';

Such indexes will be smaller, and any query where "status" is in
the WHERE clause can use the index.
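
A minimal sketch against the table from the original post (the indexed
column is just an example; index whatever your hot queries filter on):

    CREATE INDEX mytable_not_done_idx
        ON mytable (parent_id)
        WHERE status <> 'done';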

If partitioning is really what you need, you could create the primary
key on the partitions and not on the partitioned table.

That won't guarantee you global uniqueness, but you could rely on
a sequence to do that.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Advice for geographically dispersed multi master

2019-10-03 Thread Laurenz Albe
On Thu, 2019-10-03 at 19:05 +1000, Nikolai Lusan wrote:
> So I am looking to move my current setup of 3 separate, individual,
> postgres 11 servers into a multi master cluster.
> 
> I have read the various replication and clustering documentation for
> postgresql 11, and it looks like what I want is "Synchronous 
> Multimaster Replication". The organisation I am doing this for does 
> not have the money to throw at a commercial solution like BDR

Don't do it then.

As far as I know, there are only closed source solutions for this.

95% of all requests for multi-master replication come from people
who know little about databases and just want "a couple of databases
that all hold the same data and magically reconcile with each other".

Any multi-master solution will require the application to be
written with special awareness of a multi-master database.

For example, there will have to be special considerations with
auto-generated primary keys, and you will have to avoid certain
operations altogether.

Imagine this situation:

  DB 1                     DB 2

  DELETE FROM tab          UPDATE tab
  WHERE id = 42            SET id = 1 WHERE id = 42

If these statements are executed at the same time on two
databases and then replicated, DB 2 will end up with one row
more in the table than DB 1.


Try to find a solution with a single master database.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





PMChildFlags array

2019-10-03 Thread bhargav kamineni
Hi,

Observed below errors  in logfile

2019-09-20 02:00:24.504 UTC,,,99779,,5d73303a.185c3,73,,2019-09-07 04:21:14
UTC,,0,FATAL,XX000,"no free slots in PMChildFlags array",""
2019-09-20 02:00:24.505 UTC,,,109949,,5d8432b8.1ad7d,1,,2019-09-20 02:00:24
UTC,,0,ERROR,58P01,"could not open shared memory segment
""/PostgreSQL.2520932"": No such file or directory",""
2019-09-20 02:00:24.505 UTC,,,109950,,5d8432b8.1ad7e,1,,2019-09-20 02:00:24
UTC,,0,ERROR,58P01,"could not open shared memory segment
""/PostgreSQL.2520932"": No such file or directory",""


what could be the possible reasons for this to occur and is there any
chance of database corruption after this event ?


Regards,
Bhargav


Re: Advice for geographically dispersed multi master

2019-10-03 Thread Thomas Kellerer

Nikolai Lusan wrote on 03.10.2019 at 11:05:

I have read the various replication and clustering documentation for
postgresql 11, and it looks like what I want is "Synchronous Multimaster
Replication". The organisation I am doing this for does not have the money
to throw at a commercial solution like BDR, but from my reading it looks
like PG v12 may have a method to make such a solution easier to achieve
with stock PG.


You might be interested in this blog post:

https://info.crunchydata.com/blog/a-guide-to-building-an-active-active-postgresql-cluster






Re: partitions vs indexes

2019-10-03 Thread Enrico Thierbach

On 2 Oct 2019, at 22:16, Michael Lewis wrote:


"I would like to convert a table with a primary key into a partitioned
setup by a column which is not part of the primary key"

That isn't possible. The partition key must be contained by the primary
key. That is, the primary key could be site_id, id and you can create hash
partition on id or site_id but not created_on.

You could drop primary key and foreign keys and implement them via trigger
functions as described in this blog series, but it seems questionable-
https://www.depesz.com/2018/10/31/foreign-key-to-partitioned-table-part-2/

I do not assume the restriction would be dropped in future releases. I
don't know that scanning all the partitions to figure out whether the
primary key is violated would be advisable. Which is what the trigger
functions described in the blog post have to do, right?

It might be noteworthy that partitioning with more than 10-100 partitions
is MUCH faster in PG12 than PG11 (up to 4-8 thousand partitions) from
testing shared by those working on that code.






Michael, thank you for your response. I think I now grasp the idea: if 
there is a uniqueness constraint, then the database would rather not 
visit all partitions to check for a constraint violation, but wants to 
identify the single partition that might fail it; hence the partitioning 
key must be a subset of, or identical to, the uniqueness constraint.


I get that this makes sense if you have *many* partitions; however, I 
basically want to end up with two partitions, “hot” and “cold”. 
A row’s lifetime starts in a hot partition, and, after being 
processed, moves into the cold partition.


Most of the work actually happens in the hot partition, so I think 
having this as small as possible is probably helpful. For numbers: the 
hot partition would typically contain ~1 rows; the cold partition, on 
the other hand, will have 10s of millions. At the same time I still want 
to be able to look up a row by its id in the root relation, not in the 
concrete partitions. Having the database validate a uniqueness 
constraint in two tables instead of in one would be a worthwhile 
sacrifice for me.


Having said that, I just realized I could probably reach my goal by 
setting up explicit hot and cold tables, moving rows around manually 
whenever their “hotness” changes, and setting up a view which combines 
both tables into a single relation. I would only have to drop all 
explicit FK references from the schema. A downside, certainly, but one 
that I could live with.
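
A rough sketch of that manual split (names made up; moving rows between 
the two tables would still be up to application code or a trigger):

    CREATE TABLE mytable_hot  (LIKE mytable INCLUDING ALL);
    CREATE TABLE mytable_cold (LIKE mytable INCLUDING ALL);

    CREATE VIEW mytable_union AS
        SELECT * FROM mytable_hot
        UNION ALL
        SELECT * FROM mytable_cold;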


Best,
/eno


--
me on github: http://github.com/radiospiel


Advice for geographically dispersed multi master

2019-10-03 Thread Nikolai Lusan

Hi,

/*
   Disclaimer:
   I am primarily a systems administrator with some database and DBA
   history. Although these days I would normally leave this kind of thing
   to a seasoned DBA this is for my personal use, and the use of a non-
   profit organisation that I am doing some volunteer support work for. 
*/


So I am looking to move my current setup of 3 separate, individual,
postgres 11 servers into a multi master cluster. I have done this with
OpenLDAP, which is used for various purposes, but want to extend this to
cover my DB installs.

I have read the various replication and clustering documentation for
postgresql 11, and it looks like what I want is "Synchronous Multimaster
Replication". The organisation I am doing this for does not have the money
to throw at a commercial solution like BDR, but from my reading it looks
like PG v12 may have a method to make such a solution easier to achieve
with stock PG.

The three servers I am using are all in different DC's (2 VPS and one bare
metal install). I would like to have all my servers containing all the PG
databases on each server for various reasons (to supplement backups, and
make recovery faster/easier. To allow applications on different servers
access to various databases locally (rather than transmitting all SQL
requests over network). The list goes on, but I don't want to bore people
more than I already have.). There is also the possibility that a subset of
these databases may be replicated to a fourth, or even fifth location
depending on needs that arise.

So:
   1) Should I just wait for PG v12 to come out of RC status and try to set
  it up then?

   2) Is there a way to reconfigure my current servers to replicate the
  databases to the other servers?

   3) Is there a better way to do this full stop?

Any help/advice is greatly welcomed, and appreciated.

-- 
Nikolai Lusan