Re: postgresql-10.3 on ubuntu-17.10 - how??

2018-03-20 Thread Stuart McGraw

On 03/20/2018 09:46 AM, Adrian Klaver wrote:

On 03/20/2018 08:23 AM, Stuart McGraw wrote:

I recently installed Ubuntu-17.10 and then discovered that
Postgresql from the Pgdg repos is only supported on Ubuntu LTS
releases (eg 16.04).  However info on the internet said pg-10
could be installed from Pgdg zesty repo, which with some package
version conflicts, I was able to do so and have a functional
pg-10.1 install.

However I need to upgrade to 10.3 and the normal "apt update;
apt upgrade" does not offer 10.3.  Also, doing a fresh install
still installs 10.1.


https://wiki.postgresql.org/wiki/Apt
"2018-01-17: Ubuntu zesty (17.04) is unsupported now, Ubuntu removed it from their 
mirrors "



Is Pgdg 10.3 even available for ubuntu 17.10?  How the heck does
one upgrade to it?


18.04 LTS (Bionic Beaver)?:

http://apt.postgresql.org/pub/repos/apt/dists/bionic-pgdg/


I tried doing a full reinstall of Ubuntu-17.10 followed by a postgresql
install from the bionic repo (instead of installing 10.1 per above and
trying to upgrade):

  # apt-get -qy install postgresql postgresql-client postgresql-contrib \
  postgresql-doc pgadmin3 postgresql-server-dev-10 libpq-dev
  Reading package lists...
  Building dependency tree...
  Reading state information...
   Some packages could not be installed. This may mean that you have
  requested an impossible situation or if you are using the unstable
  distribution that some required packages have not yet been created
  or been moved out of Incoming.
  The following information may help to resolve the situation:

  The following packages have unmet dependencies:
   libpq-dev : Depends: libpq5 (= 10.3-1.pgdg18.04+1) but it is not going to be installed
   pgadmin3 : Depends: libgcrypt20 (>= 1.8.0) but 1.7.8-2ubuntu1 is to be installed
              Depends: libpq5 (>= 8.4~) but it is not going to be installed
              Recommends: pgagent but it is not going to be installed
   postgresql : Depends: postgresql-10 but it is not going to be installed
   postgresql-client : Depends: postgresql-client-10
   postgresql-contrib : Depends: postgresql-contrib-10
  E: Unable to correct problems, you have held broken packages.

Is there any reason now not to conclude that the 10.3 bionic version is
simply incompatible with Ubuntu-17.10 (at least without a lot more package
wrangling chops than I have)?

One can install postgresql-10.1 but one cannot upgrade it to get security
fixes or to be able to load data dumped from another 10.3 database.

Given that Ubuntu-18.04 will be out soon I guess this is pretty much moot
except for a few unfortunates like me who absolutely need 10.3 but have no
option to upgrade.  I guess the lesson is that running the Pgdg versions
of Postgresql on any but the LTS versions of Ubuntu is pretty risky.
Live and learn.  Maybe this will help someone else.




Re: postgresql-10.3 on ubuntu-17.10 - how??

2018-03-20 Thread Stuart McGraw

On 03/20/2018 06:42 PM, Stuart McGraw wrote:

On 03/20/2018 05:34 PM, Adrian Klaver wrote:

On 03/20/2018 02:25 PM, Stuart McGraw wrote:

On 03/20/2018 02:19 PM, Christoph Berg wrote:

[...]

   # apt-get dist-upgrade
   The following packages will be REMOVED:
 pgadmin3
   The following packages will be upgraded:
 pgadmin3-data postgresql postgresql-client postgresql-client-common
 postgresql-common postgresql-contrib postgresql-doc postgresql-doc-10
 postgresql-server-dev-10
   9 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.

[...]

Likely I am missing something obvious due to my newness with Ubuntu,
but isn't the above supposed to work?  10.3 is a minor upgrade, yes?


Yes it is a minor upgrade.

What does pg_lsclusters show?


# pg_lsclusters
Ver Cluster Port Status Owner    Data directory              Log file
10  main    5432 online postgres /var/lib/postgresql/10/main /var/log/postgresql/postgresql-10-main.log


An additional bit of information.  Picking one of the installed packages
to look at:

~# apt-cache policy postgresql-client
postgresql-client:
  Installed: 10+190.pgdg18.04+1
  Candidate: 10+190.pgdg18.04+1
  Version table:
 *** 10+190.pgdg18.04+1 500
        500 http://apt.postgresql.org/pub/repos/apt bionic-pgdg/main amd64 Packages
        500 http://apt.postgresql.org/pub/repos/apt bionic-pgdg/main i386 Packages
        100 /var/lib/dpkg/status
     9.6+184ubuntu1.1 500
        500 cdrom://Ubuntu-Server 17.10 _Artful Aardvark_ - Release amd64 (20180108.1) artful/main amd64 Packages
        500 http://us.archive.ubuntu.com/ubuntu artful-security/main amd64 Packages
        500 http://us.archive.ubuntu.com/ubuntu artful-security/main i386 Packages
        500 http://us.archive.ubuntu.com/ubuntu artful-updates/main amd64 Packages
        500 http://us.archive.ubuntu.com/ubuntu artful-updates/main i386 Packages
     9.6+184ubuntu1 500
        500 http://us.archive.ubuntu.com/ubuntu artful/main amd64 Packages
        500 http://us.archive.ubuntu.com/ubuntu artful/main i386 Packages

I *think* the above is saying that the package was (or should be?) installed
from the pgdg repository.  So why isn't it getting the 10.3 versions?
(Thanks for the help so far!)
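For anyone hitting the same mix of repos: apt's candidate choice (the 500-priority entries above) can be steered with a pin. Below is a minimal sketch; the file name and the priority value 600 are my choices for illustration, not something from this thread.

```text
# /etc/apt/preferences.d/pgdg.pref (hypothetical file name)
# Prefer packages from apt.postgresql.org over the distribution's own,
# so plain "apt-get upgrade" offers the PGDG builds.
Package: *
Pin: origin apt.postgresql.org
Pin-Priority: 600
```

After adding such a pin, run apt-get update and re-check apt-cache policy postgresql-client to confirm the candidate changed.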



Re: postgresql-10.3 on ubuntu-17.10 - how??

2018-03-20 Thread Stuart McGraw

On 03/20/2018 05:34 PM, Adrian Klaver wrote:

On 03/20/2018 02:25 PM, Stuart McGraw wrote:

On 03/20/2018 02:19 PM, Christoph Berg wrote:
 > Re: Adrian Klaver 2018-03-20 <4c40e7c5-efa7-00d7-b891-acc9c1ec7...@aklaver.com>
 >>> However I need to upgrade to 10.3 and the normal "apt update;
 >>> apt upgrade" does not offer 10.3.  Also, doing a fresh install
 >>> still installs 10.1.

>> [...]

Thank you Christoph and Adrian.
I changed the apt source to
   deb http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main

Then (transcripts edited for brevity)...
   # apt-get dist-update


I am going to say that was apt-get update. More below.


   ...
   # apt-get dist-upgrade
   The following packages will be REMOVED:
 pgadmin3
   The following packages will be upgraded:
 pgadmin3-data postgresql postgresql-client postgresql-client-common
 postgresql-common postgresql-contrib postgresql-doc postgresql-doc-10
 postgresql-server-dev-10
   9 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.

after completing the update and rebooting:

   $ psql --version
   psql (PostgreSQL) 10.1
   $ psql -c 'select version()'
   PostgreSQL 10.1 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 6.3.0-12ubuntu2) 6.3.0 20170406, 64-bit


I suspect the above is coming from the Ubuntu repo, not the PGDG one.


I had also tried 'apt-get upgrade' but that looked less promising:
  # apt-get upgrade
  The following packages have been kept back:
libdbd-pg-perl libpq-dev libpq5 pgadmin3 pgadmin3-data postgresql-10
postgresql-client-10
  The following packages will be upgraded:
postgresql postgresql-client postgresql-client-common postgresql-common
postgresql-contrib postgresql-doc postgresql-doc-10 postgresql-server-dev-10
  8 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
The results turned out the same: postgresql-10.1, not 10.3.

I took this to suggest using dist-upgrade:

  https://wiki.postgresql.org/wiki/Apt/FAQ#How_do_I_dist-upgrade.3F

and that it should "just work".


As an example from an Ubuntu machine that is using the PGDG repo:

psql --version
psql (PostgreSQL) 10.3 (Ubuntu 10.3-1.pgdg16.04+1)
postgres=# select version();
  PostgreSQL 10.3 (Ubuntu 10.3-1.pgdg16.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609, 64-bit


Likely I am missing something obvious due to my newness with Ubuntu,
but isn't the above supposed to work?  10.3 is a minor upgrade, yes?


Yes it is a minor upgrade.

What does pg_lsclusters show?


# pg_lsclusters
Ver Cluster Port Status Owner    Data directory              Log file
10  main    5432 online postgres /var/lib/postgresql/10/main /var/log/postgresql/postgresql-10-main.log



Re: postgresql-10.3 on ubuntu-17.10 - how??

2018-03-20 Thread Adrian Klaver

On 03/20/2018 02:25 PM, Stuart McGraw wrote:

On 03/20/2018 02:19 PM, Christoph Berg wrote:
 > Re: Adrian Klaver 2018-03-20 <4c40e7c5-efa7-00d7-b891-acc9c1ec7...@aklaver.com>

 >>> However I need to upgrade to 10.3 and the normal "apt update;
 >>> apt upgrade" does not offer 10.3.  Also, doing a fresh install
 >>> still installs 10.1.
 >>
 >> https://wiki.postgresql.org/wiki/Apt
 >> "2018-01-17: Ubuntu zesty (17.04) is unsupported now, Ubuntu removed 
it from

 >> their mirrors "
 >>
 >>> Is Pgdg 10.3 even available for ubuntu 17.10?  How the heck does
 >>> one upgrade to it?
 >>
 >> 18.04 LTS (Bionic Beaver)?:
 >>
 >> http://apt.postgresql.org/pub/repos/apt/dists/bionic-pgdg/
 >
 > Ack. We skipped 17.10 and went straight to supporting the upcoming
 > 18.04 (you can already install it). Sorry, there's only 24h a day :(
 >
 > Christoph

Thank you Christoph and Adrian.
I changed the apt source to
   deb http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main

Then (transcripts edited for brevity)...
   # apt-get dist-update


I am going to say that was apt-get update. More below.


   ...
   # apt-get dist-upgrade
   The following packages will be REMOVED:
     pgadmin3
   The following packages will be upgraded:
     pgadmin3-data postgresql postgresql-client postgresql-client-common
     postgresql-common postgresql-contrib postgresql-doc postgresql-doc-10
     postgresql-server-dev-10
   9 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.

after completing the update and rebooting:

   $ psql --version
   psql (PostgreSQL) 10.1
   $ psql -c 'select version()'
   PostgreSQL 10.1 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 6.3.0-12ubuntu2) 6.3.0 20170406, 64-bit


I suspect the above is coming from the Ubuntu repo, not the PGDG one. As 
an example from an Ubuntu machine that is using the PGDG repo:


psql --version
psql (PostgreSQL) 10.3 (Ubuntu 10.3-1.pgdg16.04+1)

postgres=# select version();
                                version
------------------------------------------------------------------------
 PostgreSQL 10.3 (Ubuntu 10.3-1.pgdg16.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609, 64-bit
(1 row)

Likely I am missing something obvious due to my newness with Ubuntu,
but isn't the above supposed to work?  10.3 is a minor upgrade, yes?


Yes it is a minor upgrade.

What does pg_lsclusters show?


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: postgresql-10.3 on ubuntu-17.10 - how??

2018-03-20 Thread Stuart McGraw

On 03/20/2018 02:19 PM, Christoph Berg wrote:
> Re: Adrian Klaver 2018-03-20 <4c40e7c5-efa7-00d7-b891-acc9c1ec7...@aklaver.com>
>>> However I need to upgrade to 10.3 and the normal "apt update;
>>> apt upgrade" does not offer 10.3.  Also, doing a fresh install
>>> still installs 10.1.
>>
>> https://wiki.postgresql.org/wiki/Apt
>> "2018-01-17: Ubuntu zesty (17.04) is unsupported now, Ubuntu removed it from
>> their mirrors "
>>
>>> Is Pgdg 10.3 even available for ubuntu 17.10?  How the heck does
>>> one upgrade to it?
>>
>> 18.04 LTS (Bionic Beaver)?:
>>
>> http://apt.postgresql.org/pub/repos/apt/dists/bionic-pgdg/
>
> Ack. We skipped 17.10 and went straight to supporting the upcoming
> 18.04 (you can already install it). Sorry, there's only 24h a day :(
>
> Christoph

Thank you Christoph and Adrian.
I changed the apt source to
  deb http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main

Then (transcripts edited for brevity)...
  # apt-get dist-update
  ...
  # apt-get dist-upgrade
  The following packages will be REMOVED:
pgadmin3
  The following packages will be upgraded:
pgadmin3-data postgresql postgresql-client postgresql-client-common
postgresql-common postgresql-contrib postgresql-doc postgresql-doc-10
postgresql-server-dev-10
  9 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.

after completing the update and rebooting:

  $ psql --version
  psql (PostgreSQL) 10.1
  $ psql -c 'select version()'
  PostgreSQL 10.1 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 6.3.0-12ubuntu2) 6.3.0 20170406, 64-bit

Likely I am missing something obvious due to my newness with Ubuntu,
but isn't the above supposed to work?  10.3 is a minor upgrade, yes?



Re: postgresql-10.3 on ubuntu-17.10 - how??

2018-03-20 Thread Christoph Berg
Re: Adrian Klaver 2018-03-20 <4c40e7c5-efa7-00d7-b891-acc9c1ec7...@aklaver.com>
> > However I need to upgrade to 10.3 and the normal "apt update;
> > apt upgrade" does not offer 10.3.  Also, doing a fresh install
> > still installs 10.1.
> 
> https://wiki.postgresql.org/wiki/Apt
> "2018-01-17: Ubuntu zesty (17.04) is unsupported now, Ubuntu removed it from
> their mirrors "
> 
> > 
> > Is Pgdg 10.3 even available for ubuntu 17.10?  How the heck does
> > one upgrade to it?
> 
> 18.04 LTS (Bionic Beaver)?:
> 
> http://apt.postgresql.org/pub/repos/apt/dists/bionic-pgdg/

Ack. We skipped 17.10 and went straight to supporting the upcoming
18.04 (you can already install it). Sorry, there's only 24h a day :(

Christoph



Re: FDW Foreign Table Access: strange LOG message

2018-03-20 Thread Tom Lane
Adrian Klaver  writes:
> On 03/20/2018 11:52 AM, Albrecht Dreß wrote:
>> I use Postgres 10.3 on a Debian Stretch system with foreign tables, and 
>> noticed strange LOG messages when accessing them.
>> [time stamp/pid] user@my_db LOG:  could not receive data from client: 
>> Connection reset by peer

> My suspicion is it has to do with this:
> postgres_fdw establishes a connection to a foreign server during the 
> first query that uses a foreign table associated with the foreign 
> server. This connection is kept and re-used for subsequent queries in 
> the same session.

Perhaps.  It's not entirely clear if these complaints are about the
original user session or the sub-session opened by postgres_fdw.
(Albrecht, if you're not sure either, enabling log_connections and
log_disconnections might make it clearer.)
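A minimal sketch of the settings Tom suggests enabling, assuming you can edit postgresql.conf (these are the standard parameter names; the reload step is my addition). Comparing the pids in the connect/disconnect lines with the pid in the "could not receive data" message should show which session is involved.

```text
# postgresql.conf
log_connections = on       # log each successful connection
log_disconnections = on    # log session end, including duration
# then reload the configuration, e.g.:  SELECT pg_reload_conf();
```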

I don't see any such log messages when testing postgres_fdw here,
which is odd; why are my results different?

If these are about the FDW connections, maybe the answer is that
postgres_fdw ought to establish a backend-exit callback in which
it can shut down its connections gracefully.  If it's doing that
now, I sure don't see where.

regards, tom lane



Re: FATAL: semctl(15073290, 4, SETVAL, 0) failed: Invalid argument

2018-03-20 Thread Adrian Klaver

On 03/20/2018 11:57 AM, JotaComm wrote:

​Hello,

Today I found this message in my Postgres log:

FATAL:  semctl(15073290, 4, SETVAL, 0) failed: Invalid argument
user=,db=,app=,host= LOG:  server process (PID 30741) exited with exit code 1
user=,db=,app=,host= LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.


Postgres: PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit


Linux: Linux INVST-APP-01A 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux


FATAL:  semctl(15073290, 4, SETVAL, 0) failed: Invalid argument
user=,db=,app=,host= LOG:  server process (PID 30741) exited with exit code 1
user=,db=,app=,host= LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.


Any suggestions for diagnosing this behavior?


?:
https://www.postgresql.org/docs/10/static/kernel-resources.html
"
18.4.2. systemd RemoveIPC

If systemd is in use, some care must be taken that IPC resources (shared memory and semaphores) are not prematurely removed by the operating system. This is especially of concern when installing PostgreSQL from source. Users of distribution packages of PostgreSQL are less likely to be affected, as the postgres user is then normally created as a system user.

The setting RemoveIPC in logind.conf controls whether IPC objects are removed when a user fully logs out. System users are exempt. This setting defaults to on in stock systemd, but some operating system distributions default it to off.

A typical observed effect when this setting is on is that the semaphore objects used by a PostgreSQL server are removed at apparently random times, leading to the server crashing with log messages like

LOG: semctl(1234567890, 0, IPC_RMID, ...) failed: Invalid argument

Different types of IPC objects (shared memory vs. semaphores, System V vs. POSIX) are treated slightly differently by systemd, so one might observe that some IPC resources are not removed in the same way as others. But it is not advisable to rely on these subtle differences.

A “user logging out” might happen as part of a maintenance job or manually when an administrator logs in as the postgres user or something similar, so it is hard to prevent in general.

What is a “system user” is determined at systemd compile time from the SYS_UID_MAX setting in /etc/login.defs.

Packaging and deployment scripts should be careful to create the postgres user as a system user by using useradd -r, adduser --system, or equivalent.

Alternatively, if the user account was created incorrectly or cannot be changed, it is recommended to set

RemoveIPC=no

in /etc/systemd/logind.conf or another appropriate configuration file.

Caution: At least one of these two things has to be ensured, or the PostgreSQL server will be very unreliable.
"



Thanks a lot.

Best regards

--
JotaComm
http://jotacomm.wordpress.com



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: You might be able to move the set-returning function into a LATERAL FROM item.

2018-03-20 Thread Alexander Farber
Thank you -

On Tue, Mar 20, 2018 at 3:27 PM, Tom Lane  wrote:
>
> I think you could push the conditionality into a plpgsql function,
> something like (untested)
>
> create function jsonb_elements_if_array(j jsonb) returns setof jsonb as $$
> begin
>   if jsonb_typeof(j) = 'array' then
> return query select jsonb_array_elements(j);
>   end if;
> end$$
> strict immutable language plpgsql;
>
> Note that this gives *no* elements, rather than a single NULL value,
> if the input isn't an array --- but that seems to me to make more sense
> than your existing code anyhow.  If you disagree, add "else return next
> null::jsonb".
>
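For readers wanting to sanity-check the intended behavior, here is a plain-Python analogue of the function quoted above (my illustration, not from the thread): non-array inputs yield no elements at all, matching the "*no* elements rather than a single NULL value" semantics.

```python
import json

def elements_if_array(j):
    """Yield the elements of j only when the parsed JSON value is an array.

    Mirrors the plpgsql sketch above: non-arrays produce *no* elements
    rather than a single NULL-ish value.
    """
    value = json.loads(j)
    if isinstance(value, list):      # jsonb_typeof(j) = 'array'
        for element in value:        # jsonb_array_elements(j)
            yield element

print(list(elements_if_array('[1, 2, 3]')))   # [1, 2, 3]
print(list(elements_if_array('"a string"')))  # [] -- no rows for a non-array
```

The same test against an object ('{"a": 1}') also yields nothing, which is why a caller who needs a NULL placeholder would add the "else return next null::jsonb" branch Tom mentions.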

I think I will just introduce a separate column (until now I was trying to
squeeze 2 different kinds of data - JSON array of objects and a string -
into the one column)... I believe Adrian had suggested it before :-)


Re: FDW Foreign Table Access: strange LOG message

2018-03-20 Thread Adrian Klaver

On 03/20/2018 11:52 AM, Albrecht Dreß wrote:

Hi all,

I use Postgres 10.3 on a Debian Stretch system with foreign tables, and noticed strange LOG messages when accessing them.

The database setup is basically

---8<--
CREATE EXTENSION IF NOT EXISTS postgres_fdw WITH SCHEMA public;
CREATE SERVER ext_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (
     dbname 'ext_db', host 'localhost', updatable 'false');
CREATE FOREIGN TABLE public.ext_table (
     […]
) SERVER ext_server OPTIONS (schema_name 'public', table_name 'some_table', updatable 'false');
---8<--

Now I use the following trivial Python2 (psycopg2) code to access the tables:

---8<--
import psycopg2
conn = psycopg2.connect(dbname='my_db')
with conn.cursor() as csr:
    csr.execute("SELECT * FROM […] LIMIT 1")
    csr.fetchone()
conn.close()
---8<--

When I access a “local” table of my_db in the SELECT statement, there is no LOG message.  However, reading from ext_table works just fine, but the conn.close() statement above triggers the log message

---8<--
[time stamp/pid] user@my_db LOG:  could not receive data from client: Connection reset by peer
---8<--

Did I miss something in the setup here?  Or is there any other way to get rid of the message (which fills ~95% of my logs)?


My suspicion is it has to do with this:

https://www.postgresql.org/docs/10/static/postgres-fdw.html#id-1.11.7.43.10
"
F.34.2. Connection Management

postgres_fdw establishes a connection to a foreign server during the 
first query that uses a foreign table associated with the foreign 
server. This connection is kept and re-used for subsequent queries in 
the same session. However, if multiple user identities (user mappings) 
are used to access the foreign server, a connection is established for 
each user mapping.

"

Guessing that conn.close() is not closing the internal FDW connection 
properly. Not sure how to fix, I will ponder some though.




Thanks in advance,
Albrecht.



--
Adrian Klaver
adrian.kla...@aklaver.com



FATAL: semctl(15073290, 4, SETVAL, 0) failed: Invalid argument

2018-03-20 Thread JotaComm
​Hello,

Today I found this message in my Postgres log:

FATAL:  semctl(15073290, 4, SETVAL, 0) failed: Invalid argument
user=,db=,app=,host= LOG:  server process (PID 30741) exited with exit code 1
user=,db=,app=,host= LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.

Postgres: PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit

Linux: Linux INVST-APP-01A 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

FATAL:  semctl(15073290, 4, SETVAL, 0) failed: Invalid argument
user=,db=,app=,host= LOG:  server process (PID 30741) exited with exit code 1
user=,db=,app=,host= LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.

Any suggestions for diagnosing this behavior?

Thanks a lot.

Best regards

-- 
JotaComm
http://jotacomm.wordpress.com


FDW Foreign Table Access: strange LOG message

2018-03-20 Thread Albrecht Dreß

Hi all,

I use Postgres 10.3 on a Debian Stretch system with foreign tables, and noticed strange LOG messages when accessing them.

The database setup is basically

---8<--
CREATE EXTENSION IF NOT EXISTS postgres_fdw WITH SCHEMA public;
CREATE SERVER ext_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (
    dbname 'ext_db', host 'localhost', updatable 'false');
CREATE FOREIGN TABLE public.ext_table (
    […]
) SERVER ext_server OPTIONS (schema_name 'public', table_name 'some_table', updatable 'false');
---8<--

Now I use the following trivial Python2 (psycopg2) code to access the tables:

---8<--
import psycopg2
conn = psycopg2.connect(dbname='my_db')
with conn.cursor() as csr:
   csr.execute("SELECT * FROM […] LIMIT 1")
   csr.fetchone()
conn.close()
---8<--

When I access a “local” table of my_db in the SELECT statement, there is no LOG message.  However, reading from ext_table works just fine, but the conn.close() statement above triggers the log message

---8<--
[time stamp/pid] user@my_db LOG:  could not receive data from client: Connection reset by peer
---8<--

Did I miss something in the setup here?  Or is there any other way to get rid of the message (which fills ~95% of my logs)?

Thanks in advance,
Albrecht.



Re: found xmin from before relfrozenxid on pg_catalog.pg_authid

2018-03-20 Thread Jeremy Finzel
On Mon, Mar 19, 2018 at 3:55 PM, Jeremy Finzel  wrote:

>
>
> On Mon, Mar 19, 2018 at 3:53 PM, Peter Geoghegan  wrote:
>
>> On Mon, Mar 19, 2018 at 1:01 PM, Jeremy Finzel  wrote:
>> > SELECT heap_page_items(get_raw_page('pg_authid', 7));
>>
>> Can you post this?
>>
>> SELECT * FROM page_header(get_raw_page('pg_authid', 7));
>>
>> --
>> Peter Geoghegan
>>
>
> @Peter :
>
> staging=# SELECT * FROM page_header(get_raw_page('pg_authid', 7));
>       lsn       | checksum | flags | lower | upper | special | pagesize | version | prune_xid
> ----------------+----------+-------+-------+-------+---------+----------+---------+-----------
>  262B4/10FDC478 |        0 |     1 |   304 |  2224 |    8192 |     8192 |       4 |         0
> (1 row)
>
> @Andres :
>
> This is from snapshot (on 9.5.12, but we didn't have the error either on a
> 9.5.11 snap):
>
>
>                            heap_page_items
> ---------------------------------------------------------------------------
>  (1,0,0,0,)
>  (2,0,0,0,)
>  (3,0,0,0,)
>  (4,0,0,0,)
>  (5,0,0,0,)
>  (6,8080,1,108,3137434815,0,0,"(7,6)",11,10505,32,1110100001001001,2166427518)
>  (7,7936,1,144,3137434816,0,0,"(7,7)",11,10507,32,1100011101110111010110111011,3504005358)
>  (8,0,0,0,)
>  (9,0,0,0,)
>  (10,7792,1,144,3137434817,0,0,"(7,10)",11,10507,32,1100010101100011011101101000,401353834)
>  (11,7680,1,108,3137434818,0,0,"(7,11)",11,10505,32,111000110100100100010111,2248708806)
>  (12,0,0,0,)
>  (13,0,0,0,)
>  (14,7568,1,108,3137434819,0,0,"(7,14)",11,10505,32,1111000011011011100010100101,2770187214)
>  (15,7456,1,108,3137434820,0,0,"(7,15)",11,10505,32,1000000101000101001111001011,2235343503)
>  (16,0,0,0,)
>  (17,0,0,0,)
>  (18,0,0,0,)
>  (19,0,0,0,)
>  (20,0,0,0,)
>  (21,0,0,0,)
>  (22,0,0,0,)
>  (23,0,0,0,)
>  (24,0,0,0,)
>  (25,0,0,0,)
>  (26,0,0,0,)
>  (27,0,0,0,)
>  (28,0,0,0,)
>  (29,0,0,0,)
>  (30,0,0,0,)
>  (31,0,0,0,)
>  (32,0,0,0,)
>  (33,0,0,0,)
>  (34,0,0,0,)
>  (35,0,0,0,)
>  (36,0,0,0,)
>  (37,0,0,0,)
>  (38,0,0,0,)
>  (39,0,0,0,)
>  (40,0,0,0,)
>  (41,0,0,0,)
>  (42,0,0,0,)
>  (43,0,0,0,)
>  (44,0,0,0,)
>  (45,0,0,0,)
>  (46,0,0,0,)
>  (47,0,0,0,)
>  (48,0,0,0,)
>  (49,0,0,0,)
>  (50,0,0,0,)
>  (51,0,0,0,)
>  (52,7344,1,108,3137434821,0,0,"(7,52)",11,10505,32,1000110110111000101001010101,2191859675)
>  (53,7232,1,108,3137434822,0,0,"(7,53)",11,10505,32,1110101101000110011011100100,661027542)
>  (54,0,0,0,)
>  (55,0,0,0,)
>  (56,0,0,0,)
>  (57,0,0,0,)
>  (58,0,0,0,)
>  (59,0,0,0,)
>  (60,0,0,0,)
>  (61,7120,1,108,3137434823,0,0,"(7,61)",11,10505,32,1001011101011101010111010100,732568296)
>  (62,6976,1,144,3137434824,0,0,"(7,62)",11,10507,32,1100101001100100101011010100,674571301)
>  (63,6864,1,108,3137434825,0,0,"(7,63)",11,10505,32,10001001101101011100001001010001,2319695577)
>  (64,6720,1,144,3137434826,0,0,"(7,64)",11,10507,32,1100011001100011100100101000,345892418)
>  (65,6608,1,108,3137434827,0,0,"(7,65)",11,10505,32,1101010100010010101011001010,1398049410)
>  (66,6496,1,108,3137434828,0,0,"(7,66)",11,10505,32,1010100000110101001000101010,1414188820)
>  (67,6384,1,108,3137434829,0,0,"(7,67)",11,10505,32,1011010101110011001110101001,2513301164)
>  (68,0,0,0,)
>  (69,0,0,0,)
>  (70,0,0,0,)
> (70 rows)
>
>
> Thanks,
> Jeremy
>

Any suggestions as to what I can do from this point?  Is it feasible that a
server restart would fix this, or are you saying it would just mask the
problem?  I can't reproduce it on a snapshot which is quite odd.

FWIW, maybe th

Re: postgresql-10.3 on ubuntu-17.10 - how??

2018-03-20 Thread Adrian Klaver

On 03/20/2018 08:23 AM, Stuart McGraw wrote:

I recently installed Ubuntu-17.10 and then discovered that
Postgresql from the Pgdg repos is only supported on Ubuntu LTS
releases (eg 16.04).  However info on the internet said pg-10
could be installed from Pgdg zesty repo, which with some package
version conflicts, I was able to do so and have a functional
pg-10.1 install.

However I need to upgrade to 10.3 and the normal "apt update;
apt upgrade" does not offer 10.3.  Also, doing a fresh install
still installs 10.1.


https://wiki.postgresql.org/wiki/Apt
"2018-01-17: Ubuntu zesty (17.04) is unsupported now, Ubuntu removed it 
from their mirrors "




Is Pgdg 10.3 even available for ubuntu 17.10?  How the heck does
one upgrade to it?


18.04 LTS (Bionic Beaver)?:

http://apt.postgresql.org/pub/repos/apt/dists/bionic-pgdg/








--
Adrian Klaver
adrian.kla...@aklaver.com



Re: SELECT .. FOR UPDATE: find out who locked a row

2018-03-20 Thread Enrico Thierbach




> You probably considered this but the queuing mechanism I use doesn't hold
> locks on records during processing.  Workers claim tasks by locking them,
> setting a claimed flag of some sort, then releasing the lock (including
> worker identity if desired) - repeating the general procedure once
> completed.
>
> My volume is such that the bloat the extra update causes is not meaningful
> and is easily handled by (auto-)vacuum.
>
> David J.

Hi David,

well, I thought about it and then put it to rest initially, since I liked the idea that with a running job kept “inside” the transaction I would never have zombie entries there: if somehow the network connection gets lost for the client machines, the database would just roll back the transaction, the job would revert to its “ready-to-run” state, and the next worker would pick it up.

However, I will probably reconsider this, because it has quite some advantages; setting a “processing” state, let alone keeping a worker identity next to the job, seems much more straightforward.

So how do you solve the “zombie” situation?

Best,
Enrico

--
me at github: https://github.com/radiospiel
me at linked.in: https://www.linkedin.com/in/radiospiel
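One common answer to the zombie question, sketched here with SQLite so it stays self-contained (the schema, the 60-second timeout, and the worker names are my assumptions, not from the thread): record who claimed the job and when, and let any worker reclaim a 'processing' job whose claim is older than a timeout. In PostgreSQL the claim step would typically be made race-free with SELECT ... FOR UPDATE SKIP LOCKED; the point of this sketch is only the stale-claim recovery.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE jobs (
    id INTEGER PRIMARY KEY,
    state TEXT NOT NULL DEFAULT 'ready',   -- ready | processing | done
    claimed_by TEXT,
    claimed_at REAL)""")
conn.execute("INSERT INTO jobs (id) VALUES (1)")

STALE_AFTER = 60.0  # seconds; a 'processing' claim older than this is a zombie

def claim(worker):
    """Claim one job: either a ready job or a stale (zombie) claim.

    Not atomic here; in PostgreSQL the SELECT would use FOR UPDATE SKIP LOCKED
    so two concurrent workers cannot grab the same row.
    """
    now = time.time()
    row = conn.execute(
        """SELECT id FROM jobs
           WHERE state = 'ready'
              OR (state = 'processing' AND claimed_at < ?)
           ORDER BY id LIMIT 1""",
        (now - STALE_AFTER,)).fetchone()
    if row is None:
        return None
    conn.execute(
        "UPDATE jobs SET state='processing', claimed_by=?, claimed_at=? WHERE id=?",
        (worker, now, row[0]))
    return row[0]

first = claim("worker-a")    # claims job 1
second = claim("worker-b")   # None: job 1 is claimed and the claim is fresh
# Simulate worker-a dying: backdate its claim past the timeout.
conn.execute("UPDATE jobs SET claimed_at = claimed_at - 120 WHERE id = 1")
third = claim("worker-b")    # job 1 again: the stale claim is reclaimed
```

The trade-off matches the discussion above: the extra UPDATE per claim causes some bloat, but no transaction has to stay open for the lifetime of the job.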




Re: Restore - disable triggers - when they fired?

2018-03-20 Thread Adrian Klaver

On 03/20/2018 07:56 AM, Durumdara wrote:

Dear Adrian!


2018-03-20 15:47 GMT+01:00 Adrian Klaver:




When would it be useful?


https://www.postgresql.org/docs/10/static/app-pgrestore.html

"--disable-triggers

     This option is relevant only when performing a data-only
restore. It instructs pg_restore to execute commands to temporarily
disable triggers on the target tables while the data is reloaded.
Use this if you have referential integrity checks or other triggers
on the tables that you do not want to invoke during data reload.

     Presently, the commands emitted for --disable-triggers must be
done as superuser. So you should also specify a superuser name with
-S or, preferably, run pg_restore as a PostgreSQL superuser.




Firstly I supposed that data copy somehow could start the triggers - but how?

Which triggers? Or how they fired with this order?



I have read it, but I don't understand it.

Do you have a good example?


create table disable_trigger_test(id int PRIMARY KEY, fld_1 text);

insert into disable_trigger_test values (1, 'dog'), (2, 'cat');


test=> select * from disable_trigger_test ;
 id | fld_1
----+-------
  1 | dog
  2 | cat

pg_dump --disable-triggers -d test -U aklaver -t disable_trigger_test -a 
-f disable_trigger_test_data.sql



CREATE OR REPLACE FUNCTION public.trigger_test()
 RETURNS trigger
 LANGUAGE plpgsql
AS $function$
begin
new.fld_1 := new.fld_1 || 'test';
RETURN NEW;
end;
$function$;

CREATE TRIGGER tr_test  
  BEFORE INSERT
  ON disable_trigger_test
  FOR EACH ROW
  EXECUTE PROCEDURE public.trigger_test();



truncate disable_trigger_test ;

#Note I do this as a superuser.
psql -d test -U postgres -f disable_trigger_test_data.sql

test=> select * from disable_trigger_test ;
 id | fld_1
----+-------
  1 | dog
  2 | cat

test=> insert into disable_trigger_test values (3, 'fish');
INSERT 0 1
test=> select * from disable_trigger_test ;
 id |  fld_1
----+----------
  1 | dog
  2 | cat
  3 | fishtest
(3 rows)
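The effect of the demonstration above can be modelled outside the database as a toy sketch (Python, all names invented for illustration — this is not the pg_restore mechanism itself, which emits superuser-only commands to disable and re-enable the triggers around the data load):

```python
class Table:
    """Toy table whose insert path fires a BEFORE INSERT hook
    unless triggers are temporarily disabled for a data reload."""

    def __init__(self, before_insert=None):
        self.rows = []
        self.before_insert = before_insert
        self.triggers_enabled = True

    def insert(self, row):
        # A live insert goes through the trigger (when enabled).
        if self.triggers_enabled and self.before_insert:
            row = self.before_insert(row)
        self.rows.append(row)

    def restore(self, rows, disable_triggers=False):
        # Mimics pg_restore --disable-triggers: switch triggers off
        # only for the duration of the load, then re-enable them.
        if disable_triggers:
            self.triggers_enabled = False
        try:
            for row in rows:
                self.insert(row)
        finally:
            self.triggers_enabled = True


def tr_test(row):
    # Mimics trigger_test(): append 'test' to fld_1.
    rid, fld_1 = row
    return (rid, fld_1 + "test")


t = Table(before_insert=tr_test)
t.restore([(1, "dog"), (2, "cat")], disable_triggers=True)
t.insert((3, "fish"))
print(t.rows)  # → [(1, 'dog'), (2, 'cat'), (3, 'fishtest')]
```

The restored rows keep their dumped values because the hook never ran for them, while the later live insert gets "test" appended — the same contrast as in the psql session above.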




Thanks!
dd



--
Adrian Klaver
adrian.kla...@aklaver.com



postgresql-10.3 on unbuntu-17.10 - how??

2018-03-20 Thread Stuart McGraw

I recently installed Ubuntu 17.10 and then discovered that
PostgreSQL from the PGDG repos is only supported on Ubuntu LTS
releases (e.g. 16.04).  However, information on the internet said
pg-10 could be installed from the PGDG zesty repo, and despite some
package version conflicts I was able to do so and ended up with a
functional pg-10.1 install.

However I need to upgrade to 10.3 and the normal "apt update;
apt upgrade" does not offer 10.3.  Also, doing a fresh install
still installs 10.1.

Is PGDG 10.3 even available for Ubuntu 17.10?  How the heck does
one upgrade to it?




Re: Restore - disable triggers - when they fired?

2018-03-20 Thread Durumdara
Dear Adrian!


2018-03-20 15:47 GMT+01:00 Adrian Klaver :

>
>>
>> When would it be useful?
>>
>
> https://www.postgresql.org/docs/10/static/app-pgrestore.html
> "--disable-triggers
>
> This option is relevant only when performing a data-only restore. It
> instructs pg_restore to execute commands to temporarily disable triggers on
> the target tables while the data is reloaded. Use this if you have
> referential integrity checks or other triggers on the tables that you do
> not want to invoke during data reload.
>
> Presently, the commands emitted for --disable-triggers must be done as
> superuser. So you should also specify a superuser name with -S or,
> preferably, run pg_restore as a PostgreSQL superuser.
>
>
>
>
>> At first I supposed that the data copy could somehow fire the triggers -
>> but how?
>>
>> Which triggers? And how would they fire, given this order?
>>
>

I have read it, but I don't understand it.

Do you have a good example?

Thanks!

dd


Re: Restore - disable triggers - when they fired?

2018-03-20 Thread Adrian Klaver

On 03/20/2018 07:31 AM, Durumdara wrote:

Dear Members!

I saw in PGAdmin 3/4 that pg_restore has an option "disable triggers".

Because we need to move some databases in the near future, I need to
understand the meaning of this option.


I created a table with a BEFORE INSERT trigger:

create table tr_test
(
id integer not null primary key,
value1 varchar(100),
value2 varchar(100)
);

insert into tr_test values(1, 'a', 'a');
insert into tr_test values(2, 'b', 'b');

CREATE OR REPLACE FUNCTION tfbu_tr_test()
   RETURNS trigger AS
$BODY$
begin
     new.value2 = cast(current_timestamp as varchar(30));
     RETURN NEW;
end;
$BODY$
   LANGUAGE plpgsql VOLATILE
   COST 100;


  CREATE TRIGGER tbi_tr_test
   BEFORE INSERT
   ON tr_test
   FOR EACH ROW
   EXECUTE PROCEDURE public.tfbu_tr_test();

insert into tr_test values(3, 'c', 'c');
select * from tr_test;


and I tried to dump and restore in PGAdmin IV.

The dumped data is the same as what I read after the restore.

The pg_restore log shows me that triggers and indexes are created after
the data copy.


At this point I am confused about the "disable triggers" option.

When would it be useful?


https://www.postgresql.org/docs/10/static/app-pgrestore.html
"--disable-triggers

This option is relevant only when performing a data-only restore. 
It instructs pg_restore to execute commands to temporarily disable 
triggers on the target tables while the data is reloaded. Use this if 
you have referential integrity checks or other triggers on the tables 
that you do not want to invoke during data reload.


Presently, the commands emitted for --disable-triggers must be done 
as superuser. So you should also specify a superuser name with -S or, 
preferably, run pg_restore as a PostgreSQL superuser.





At first I supposed that the data copy could somehow fire the triggers -
but how?


Which triggers? And how would they fire, given this order?

Or do they remain disabled AFTER the restore, for subsequent manual
manipulations?


So please enlighten me a little!

Thank you!

Regards
dd



--
Adrian Klaver
adrian.kla...@aklaver.com



Restore - disable triggers - when they fired?

2018-03-20 Thread Durumdara
Dear Members!

I saw in PGAdmin 3/4 that pg_restore has an option "disable triggers".

Because we need to move some databases in the near future, I need to
understand the meaning of this option.

I created a table with a BEFORE INSERT trigger:

create table tr_test
(
id integer not null primary key,
value1 varchar(100),
value2 varchar(100)
);

insert into tr_test values(1, 'a', 'a');
insert into tr_test values(2, 'b', 'b');

CREATE OR REPLACE FUNCTION tfbu_tr_test()
  RETURNS trigger AS
$BODY$
begin
new.value2 = cast(current_timestamp as varchar(30));
RETURN NEW;
end;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;


 CREATE TRIGGER tbi_tr_test
  BEFORE INSERT
  ON tr_test
  FOR EACH ROW
  EXECUTE PROCEDURE public.tfbu_tr_test();

insert into tr_test values(3, 'c', 'c');
select * from tr_test;


and I tried to dump and restore in PGAdmin IV.

The dumped data is the same as what I read after the restore.

The pg_restore log shows me that triggers and indexes are created after
the data copy.

At this point I am confused about the "disable triggers" option.

When would it be useful?

At first I supposed that the data copy could somehow fire the triggers -
but how?

Which triggers? And how would they fire, given this order?

Or do they remain disabled AFTER the restore, for subsequent manual
manipulations?

So please enlighten me a little!

Thank you!

Regards
dd


Re: PostgreSQL 9.6 Temporary files

2018-03-20 Thread Jimmy Augustine
2018-03-20 15:00 GMT+01:00 Adrian Klaver :

> On 03/20/2018 03:16 AM, Jimmy Augustine wrote:
>
>> Thanks all for your response,
>>
>> $du -h $MY_DATA/base/$BASE_OID/ returns 162GB but when I execute this
>> query:
>>
>> SELECT stats.relname AS table,
>>pg_size_pretty(pg_relation_size(statsio.relid)) AS table_size,
>>pg_size_pretty(pg_total_relation_size(statsio.relid) -
>>pg_relation_size(statsio.relid)) AS related_objects_size,
>>pg_size_pretty(pg_total_relation_size(statsio.relid)) AS total_table_size,
>>stats.n_live_tup AS live_rows
>> FROM pg_catalog.pg_statio_user_tables AS statsio
>> JOIN pg_stat_user_tables AS stats USING (relname)
>> WHERE stats.schemaname = current_schema
>> UNION ALL
>> SELECT 'TOTAL' AS table,
>>pg_size_pretty(sum(pg_relation_size(statsio.relid))) AS table_size,
>>pg_size_pretty(sum(pg_total_relation_size(statsio.relid) -
>>pg_relation_size(statsio.relid))) AS related_objects_size,
>>pg_size_pretty(sum(pg_total_relation_size(statsio.relid))) AS total_table_size,
>>sum(stats.n_live_tup) AS live_rows
>> FROM pg_catalog.pg_statio_user_tables AS statsio
>> JOIN pg_stat_user_tables AS stats USING (relname)
>> WHERE stats.schemaname = current_schema
>> ORDER BY live_rows ASC;
>>
>> I obtain 80GB in total_table_size (half of my database); where is the
>> missing data?
>>
>
> First of all you are using pg_statio_user_tables which does not count
> system tables.
>
> Second pretty sure the use of current_schema is limiting the results to
> only one schema in the database.
>

Ahh, thank you! I found the missing data: it is stored in pg_largeobject.

>>
>>
>> 2018-03-19 19:32 GMT+01:00 Adrian Klaver:
>>
>> On 03/19/2018 10:27 AM, Jimmy Augustine wrote:
>>
>> I tried this query and my database size is equal to 162GB.
>>
>>
>> Well you can always look in $DATA directly. The database will be
>> under $DATA/base/<oid>.
>>
>> You can find the <oid> like this:
>>
>> select oid, datname from pg_database where datname='<database name>';
>>
>>
>> -- Adrian Klaver
>> adrian.kla...@aklaver.com 
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: You might be able to move the set-returning function into a LATERAL FROM item.

2018-03-20 Thread Tom Lane
Alexander Farber  writes:
> I am trying to fetch a history/protocol of a game with:

> SELECT
> CASE WHEN JSONB_TYPEOF(tiles) = 'array' THEN
> JSONB_ARRAY_ELEMENTS(tiles) ELSE NULL END AS x

I think you could push the conditionality into a plpgsql function,
something like (untested)

create function jsonb_elements_if_array(j jsonb) returns setof jsonb as $$
begin
  if jsonb_typeof(j) = 'array' then
return query select jsonb_array_elements(j);
  end if;
end$$
strict immutable language plpgsql;

Note that this gives *no* elements, rather than a single NULL value,
if the input isn't an array --- but that seems to me to make more sense
than your existing code anyhow.  If you disagree, add "else return next
null::jsonb".
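For intuition, the zero-rows-for-non-arrays behaviour of the sketch above can be mirrored in ordinary code (Python here, purely illustrative; the function name is invented):

```python
import json

def elements_if_array(j):
    """Yield the elements of a JSON value only if it is an array;
    any other type yields nothing at all, mirroring the plpgsql
    sketch: no rows rather than a single NULL row."""
    value = json.loads(j)
    if isinstance(value, list):
        yield from value

# An array input expands to one "row" per element:
print(list(elements_if_array('[{"letter": "a"}, {"letter": "b"}]')))
# → [{'letter': 'a'}, {'letter': 'b'}]

# A string input (e.g. swapped letters) produces no rows, not [None]:
print(list(elements_if_array('"abc"')))
# → []
```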

regards, tom lane



Re: Foreign Key locking / deadlock issue.

2018-03-20 Thread Adrian Klaver

On 03/20/2018 06:55 AM, HORDER Phil wrote:

Please reply to list also.
Ccing list.

Still not certain what the PK is or what key value refers to?


Well this is just sample SQL to demonstrate the problem.
If each process executes lines of code alternately, a deadlock occurs.

The commit would obviously be required by an application, but one transaction
would be cancelled by the deadlock exception, and the demonstrator would
roll back the other.

Phil Horder
Database Mechanic



-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: 20 March 2018 13:51
To: HORDER Phil; pgsql-general@lists.postgresql.org
Subject: Re: Foreign Key locking / deadlock issue.

On 03/20/2018 04:46 AM, HORDER Phil wrote:

Hi,

I'm trying to understand why I'm getting a deadlock issue, and how to
work around it.

At base, I think the problem is:

1.Updates to a parent table are creating row level write locks,

2.updates to a child table set the foreign key value to the parent
table, which are then blocked.

While investigating, I found the row locking documentation, which says
that I can request read locks that don't block.

But my sample code still gets blocked.

https://www.postgresql.org/docs/9.6/static/explicit-locking.html

Can someone explain what I'm missing please?

parent process

---

start transaction;

select * from pl where pl_id in (2,3) for no key update of pl;  (1)

update pl set m_number = '234' where pl_id = 2; (2)

update pl set m_number = '345' where pl_id = 3; (3)

child process

---

start transaction;

select pl_id from pl where pl_id in (2,3) for key share of pl;  (4)

update eln set pl_id = 3 where event_id = 303;  (5)

update eln set pl_id = 2 where event_id = 302;  (6)



I do not see a commit for either transaction.



My Parent process inserts and updates on the PL table, but never
changes the key value.


I am assuming when you say key value you are referring to PRIMARY KEY?

What is the key(PK) column?



My Child process inserts and updates on the ELN table, and can set the
FK reference value to the PL table.

I can understand that the default lock on the PL update will block the
foreign key check from the ELN table.

Why does this example still get blocked?

Is there a way around this without dropping the foreign key?

(And if so.. How can I get this to work in Spring Data / JPA?)

Phil Horder

Database Mechanic

Thales

Land and Air Systems

Horizon House, Throop Road, Templecombe, Somerset, BA8 0DH, UK




--
Adrian Klaver
adrian.kla...@aklaver.com




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: PostgreSQL 9.6 Temporary files

2018-03-20 Thread Adrian Klaver

On 03/20/2018 03:16 AM, Jimmy Augustine wrote:

Thanks all for your response,

$du -h $MY_DATA/base/$BASE_OID/ returns 162GB but when I execute this query:

SELECT stats.relname AS table,
   pg_size_pretty(pg_relation_size(statsio.relid)) AS table_size,
   pg_size_pretty(pg_total_relation_size(statsio.relid) -
   pg_relation_size(statsio.relid)) AS related_objects_size,
   pg_size_pretty(pg_total_relation_size(statsio.relid)) AS total_table_size,
   stats.n_live_tup AS live_rows
FROM pg_catalog.pg_statio_user_tables AS statsio
JOIN pg_stat_user_tables AS stats USING (relname)
WHERE stats.schemaname = current_schema
UNION ALL
SELECT 'TOTAL' AS table,
   pg_size_pretty(sum(pg_relation_size(statsio.relid))) AS table_size,
   pg_size_pretty(sum(pg_total_relation_size(statsio.relid) -
   pg_relation_size(statsio.relid))) AS related_objects_size,
   pg_size_pretty(sum(pg_total_relation_size(statsio.relid))) AS total_table_size,
   sum(stats.n_live_tup) AS live_rows
FROM pg_catalog.pg_statio_user_tables AS statsio
JOIN pg_stat_user_tables AS stats USING (relname)
WHERE stats.schemaname = current_schema
ORDER BY live_rows ASC;

I obtain 80GB in total_table_size (half of my database); where is the
missing data?


First of all you are using pg_statio_user_tables which does not count 
system tables.


Second pretty sure the use of current_schema is limiting the results to 
only one schema in the database.





2018-03-19 19:32 GMT+01:00 Adrian Klaver:


On 03/19/2018 10:27 AM, Jimmy Augustine wrote:

I tried this query and my database size is equal to 162GB.


Well you can always look in $DATA directly. The database will be
under $DATA/base/<oid>.

You can find the <oid> like this:

select oid, datname from pg_database where datname='<database name>';


-- 
Adrian Klaver

adrian.kla...@aklaver.com 





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: error 53200 out of memory

2018-03-20 Thread Adrian Klaver

On 03/20/2018 12:08 AM, francis cherat wrote:

Hello,

there is no message in /var/log/messages


How about the Postgres logs?



Regards



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Foreign Key locking / deadlock issue.

2018-03-20 Thread Adrian Klaver

On 03/20/2018 04:46 AM, HORDER Phil wrote:

Hi,

I’m trying to understand why I’m getting a deadlock issue, and how to 
work around it.


At base, I think the problem is:

1.Updates to a parent table are creating row level write locks,

2.updates to a child table set the foreign key value to the parent 
table, which are then blocked.


While investigating, I found the row locking documentation, which says 
that I can request read locks that don’t block.


But my sample code still gets blocked.

https://www.postgresql.org/docs/9.6/static/explicit-locking.html

Can someone explain what I’m missing please?

parent process

---

start transaction;

select * from pl where pl_id in (2,3) for no key update of pl;  (1)

update pl set m_number = '234' where pl_id = 2; (2)

update pl set m_number = '345' where pl_id = 3; (3)

child process

---

start transaction;

select pl_id from pl where pl_id in (2,3) for key share of pl;  (4)

update eln set pl_id = 3 where event_id = 303;  (5)

update eln set pl_id = 2 where event_id = 302;  (6)



I do not see a commit for either transaction.



My Parent process inserts and updates on the PL table, but never changes 
the key value.


I am assuming when you say key value you are referring to PRIMARY KEY?

What is the key(PK) column?



My Child process inserts and updates on the ELN table, and can set the 
FK reference value to the PL table.


I can understand that the default lock on the PL update will block the 
foreign key check from the ELN table.


Why does this example still get blocked?

Is there a way around this without dropping the foreign key?

(And if so…. How can I get this to work in Spring Data / JPA?)

Phil Horder

Database Mechanic

Thales

Land and Air Systems

Horizon House, Throop Road, Templecombe, Somerset, BA8 0DH, UK




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: COPY error when \. char

2018-03-20 Thread Adrian Klaver

On 03/20/2018 04:32 AM, Nicolas Paris wrote:

Hello

I get an error when loading this kind of csv:


test.csv:

"hello ""world"""
"\."
"this
works
"
"this
\.
does
not"


table:

create table test (field text);


sql:

\copy test (field) from 'test.csv' CSV  quote '"' ESCAPE '"';
ERROR:  unterminated CSV quoted field
CONTEXTE : COPY test, line 7: ""this
"

Apparently, having the \. string alone on a line makes it break.
Is this normal?


https://www.postgresql.org/docs/10/static/sql-copy.html
"Because backslash is not a special character in the CSV format, \., the 
end-of-data marker, could also appear as a data value. To avoid any 
misinterpretation, a \. data value appearing as a lone entry on a line 
is automatically quoted on output, and on input, if quoted, is not 
interpreted as the end-of-data marker. If you are loading a file created 
by another application that has a single unquoted column and might have 
a value of \., you might need to quote that value in the input file."
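In other words, a producer of COPY input only has to make sure a lone \. value is quoted. A minimal sketch of that quoting rule (Python, illustrative only — not the COPY implementation):

```python
def quote_for_copy_csv(field, quote='"'):
    """Quote a CSV field for COPY input: a value that is exactly '\\.'
    must be quoted, otherwise it could be read as the end-of-data
    marker when it lands alone on a line. Embedded quotes, commas and
    newlines also force quoting; quotes are doubled, CSV-style."""
    needs_quote = (field == r"\." or quote in field
                   or "," in field or "\n" in field)
    if needs_quote:
        return quote + field.replace(quote, quote * 2) + quote
    return field

print(quote_for_copy_csv(r"\."))            # → "\."
print(quote_for_copy_csv('hello "world"'))  # → "hello ""world"""
print(quote_for_copy_csv("plain"))          # → plain
```

Note this covers the documented case of a lone \. value; as the original report shows, a \. line inside a multi-line quoted field is a separate problem.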




Thanks





--
Adrian Klaver
adrian.kla...@aklaver.com



Foreign Key locking / deadlock issue.

2018-03-20 Thread HORDER Phil
Hi,
I'm trying to understand why I'm getting a deadlock issue, and how to work 
around it.

At base, I think the problem is:

1.   Updates to a parent table are creating row level write locks,

2.   updates to a child table set the foreign key value to the parent 
table, which are then blocked.

While investigating, I found the row locking documentation, which says that I 
can request read locks that don't block.
But my sample code still gets blocked.
https://www.postgresql.org/docs/9.6/static/explicit-locking.html

Can someone explain what I'm missing please?

parent process
---

start transaction;
select * from pl where pl_id in (2,3) for no key update of pl;  (1)
update pl set m_number = '234' where pl_id = 2; (2)
update pl set m_number = '345' where pl_id = 3; (3)

child process
---

start transaction;
select pl_id from pl where pl_id in (2,3) for key share of pl;  (4)
update eln set pl_id = 3 where event_id = 303;  (5)
update eln set pl_id = 2 where event_id = 302;  (6)
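Independent of the row-lock modes, the classic workaround for this pattern is to have every transaction touch the parent rows in the same order, so a waits-for cycle cannot form. A toy illustration with ordinary in-process locks (Python, not PostgreSQL semantics):

```python
import threading

# Each lock stands in for one parent row of pl.
lock_pl2 = threading.Lock()  # row pl_id = 2
lock_pl3 = threading.Lock()  # row pl_id = 3

def transaction(name, results):
    # Both transactions acquire the "rows" in the same order (2, then 3).
    # With a consistent order, nobody holds row 3 while waiting for row 2,
    # so the waits-for graph has no cycle and no deadlock is possible.
    with lock_pl2:
        with lock_pl3:
            results.append(name)

results = []
threads = [threading.Thread(target=transaction, args=(n, results))
           for n in ("parent", "child")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # → ['child', 'parent']  (both complete)
```

In the SQL above, the child updates eln in the order pl_id = 3 then 2 while the parent updates pl rows 2 then 3; issuing both in the same key order removes the cycle.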


My Parent process inserts and updates on the PL table, but never changes the 
key value.
My Child process inserts and updates on the ELN table, and can set the FK 
reference value to the PL table.

I can understand that the default lock on the PL update will block the foreign 
key check from the ELN table.
Why does this example still get blocked?

Is there a way around this without dropping the foreign key?

(And if so… How can I get this to work in Spring Data / JPA?)


Phil Horder
Database Mechanic

Thales
Land and Air Systems
Horizon House, Throop Road, Templecombe, Somerset, BA8 0DH, UK



COPY error when \. char

2018-03-20 Thread Nicolas Paris
Hello

I get an error when loading this kind of csv:

> test.csv:
"hello ""world"" "
"\."
"this
works
"
"this
\.
does
not"

> table:
create table test (field text);

> sql:
\copy test (field) from 'test.csv' CSV  quote '"' ESCAPE '"';
ERROR:  unterminated CSV quoted field
CONTEXTE : COPY test, line 7: ""this
"

Apparently, having the \. string alone on a line makes it break.
Is this normal?

Thanks



Re: You might be able to move the set-returning function into a LATERAL FROM item.

2018-03-20 Thread Laurenz Albe
Alexander Farber wrote:
> for a word game in PostgreSQL 10.3 I have a table with jsonb column "tiles".
> 
> The column holds either a JSON array of objects (word tiles played) or a 
> string (of swapped letters).
> 
> I am trying to fetch a history/protocol of a game with:
> 
> 
> CREATE OR REPLACE FUNCTION words_get_moves(
> [...] AS
> $func$
> [...]
> CASE WHEN JSONB_TYPEOF(tiles) = 'array' THEN 
> JSONB_ARRAY_ELEMENTS(tiles) ELSE NULL END AS x
> [...]
> $func$ LANGUAGE sql;
> 
> However calling this stored function gives the error:
> 
> ERROR:  0A000: set-returning functions are not allowed in CASE
> LINE 18: ... CASE WHEN JSONB_TYPEOF(tiles) = 'array' THEN JSONB_ARRA...
>   ^
> HINT:  You might be able to move the set-returning function into a LATERAL 
> FROM item.
> 
> I have read that PostgreSQL 10 handles SRF more strictly, but what does it 
> want me to do here, to add 1 more table to the LEFT JOIN?

The problem is that "jsonb_array_elements" returns several rows, which does not
make sense in this context.  Which of the rows do you want?

If you know that it will always return at most one row, you could use:

 ... THEN (SELECT jae FROM jsonb_array_elements(tiles) jae LIMIT 1)

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: PostgreSQL 9.6 Temporary files

2018-03-20 Thread Jimmy Augustine
Thanks all for your response,

$du -h $MY_DATA/base/$BASE_OID/ returns 162GB but when I execute this query:

SELECT stats.relname
   AS table,
   pg_size_pretty(pg_relation_size(statsio.relid))
   AS table_size,
   pg_size_pretty(pg_total_relation_size(statsio.relid)
   - pg_relation_size(statsio.relid))
   AS related_objects_size,
   pg_size_pretty(pg_total_relation_size(statsio.relid))
   AS total_table_size,
   stats.n_live_tup
   AS live_rows
  FROM pg_catalog.pg_statio_user_tables AS statsio
  JOIN pg_stat_user_tables AS stats
 USING (relname)
 WHERE stats.schemaname = current_schema
 UNION ALL
 SELECT 'TOTAL'
   AS table,
   pg_size_pretty(sum(pg_relation_size(statsio.relid)))
   AS table_size,
   pg_size_pretty(sum(pg_total_relation_size(statsio.relid)
   - pg_relation_size(statsio.relid)))
   AS related_objects_size,
   pg_size_pretty(sum(pg_total_relation_size(statsio.relid)))
   AS total_table_size,
   sum(stats.n_live_tup)
   AS live_rows
  FROM pg_catalog.pg_statio_user_tables AS statsio
  JOIN pg_stat_user_tables AS stats
 USING (relname)
 WHERE stats.schemaname = current_schema
 ORDER BY live_rows ASC;

I obtain 80GB in total_table_size (half of my database); where is the
missing data?


2018-03-19 19:32 GMT+01:00 Adrian Klaver :

> On 03/19/2018 10:27 AM, Jimmy Augustine wrote:
>
>> I tried this query and my database size is equal to 162GB.
>>
>>
> Well you can always look in $DATA directly. The database will be under
> $DATA/base/<oid>.
>
> You can find the <oid> like this:
>
> select oid, datname from pg_database where datname='<database name>';
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


You might be able to move the set-returning function into a LATERAL FROM item.

2018-03-20 Thread Alexander Farber
Good morning,

for a word game in PostgreSQL 10.3 I have a table with jsonb column "tiles".

The column holds either a JSON array of objects (word tiles played) or a
string (of swapped letters).

I am trying to fetch a history/protocol of a game with:


CREATE OR REPLACE FUNCTION words_get_moves(
in_gid integer
) RETURNS TABLE (
out_action text,
out_letters text,
out_words text
) AS
$func$
WITH cte1 AS (
SELECT
mid,
action,
STRING_AGG(x->>'letter', '') AS tiles
FROM (
SELECT
mid,
action,
CASE WHEN JSONB_TYPEOF(tiles) = 'array' THEN
JSONB_ARRAY_ELEMENTS(tiles) ELSE NULL END AS x
--JSONB_ARRAY_ELEMENTS(tiles) AS x
FROM words_moves
WHERE gid = in_gid
--AND JSONB_TYPEOF(tiles) = 'array'
) AS p
GROUP BY mid, action),
cte2 AS (
SELECT
mid,
STRING_AGG(y, ', ') AS words
FROM (
SELECT
mid,
FORMAT('%s (%s)', word, score) AS y
FROM words_scores
WHERE gid = in_gid
) AS q
GROUP BY mid)
SELECT
action,
tiles,
words
FROM cte1
LEFT JOIN cte2 using (mid)
ORDER BY mid ASC;
$func$ LANGUAGE sql;

However calling this stored function gives the error:

ERROR:  0A000: set-returning functions are not allowed in CASE
LINE 18: ... CASE WHEN JSONB_TYPEOF(tiles) = 'array' THEN JSONB_ARRA...
  ^
HINT:  You might be able to move the set-returning function into a LATERAL
FROM item.

I have read that PostgreSQL 10 handles SRF more strictly, but what does it
want me to do here, to add 1 more table to the LEFT JOIN?

Thank you
Alex


RE: error 53200 out of memory

2018-03-20 Thread francis cherat
Hello,

there is no message in /var/log/messages

Regards

From: francis cherat 
Sent: Friday, 16 March 2018 20:27
To: Adrian Klaver; pgsql-gene...@postgresql.org
Subject: RE: error 53200 out of memory

I don't think so, but I am not in my office and can't connect to this server.
I will answer you on Monday.

Regards

From: Adrian Klaver 
Sent: Friday, 16 March 2018 16:26
To: francis cherat; pgsql-gene...@postgresql.org
Subject: Re: error 53200 out of memory

On 03/16/2018 04:00 AM, francis cherat wrote:
> Hello,
>
> we have got  an error 53200 after sql statement
>
> [5-1] ERROR:  53200: out of memory
> [6-1] DETAIL:  Failed on request of size 1572864.
> [7-1] LOCATION:  AllocSetRealloc, aset.c:973
>
> in jboss logs we have got those errors
>
> org.jboss.logging.jboss-logging - 3.1.2.GA | ERROR: out of memory
> Detail: Failed on request of size 1572864.
> executing failed
> org.hibernate.exception.GenericJDBCException: could not extract ResultSet
>
> cluster configuration
> server_version  | 9.3.16
> effective_cache_size| 12800MB
> shared_buffers  | 384MB
> work_mem| 384MB
>
> Server configuration
> RHEL 6.5
> RAM: 16 GB
> 2 CPUs
>
> Thanks for your feedback

Is there anything in the Postgres and/or system logs from the same time
that might shed light on this?



--
Adrian Klaver
adrian.kla...@aklaver.com