Re: Returning Values from INSERT ON CONFLICT DO NOTHING

2018-01-01 Thread Igal Sapir
It seems that if I do a simple update it resolves my issue: INSERT INTO users(email, name) VALUES('u...@domain.tld', 'User') ON CONFLICT (email) DO UPDATE SET email = excluded.email -- users.email works too, not sure if it makes a difference RETURNING user_id, (xmax::text::int > 0) as
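A minimal, self-contained sketch of this workaround; the users table definition and the boolean alias "existed" are assumptions for illustration, since the archived snippet above is truncated:

    CREATE TABLE users (
        user_id serial PRIMARY KEY,
        email   text UNIQUE,
        name    text
    );

    -- The no-op DO UPDATE makes the conflicting row visible to RETURNING.
    -- xmax is nonzero for a pre-existing (updated) row and zero for a fresh
    -- insert; note this relies on an implementation detail, not documented behavior.
    INSERT INTO users (email, name)
    VALUES ('u...@domain.tld', 'User')
    ON CONFLICT (email) DO UPDATE
        SET email = excluded.email
    RETURNING user_id, (xmax::text::int > 0) AS existed;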

Returning Values from INSERT ON CONFLICT DO NOTHING

2018-01-01 Thread Igal Sapir
Hello, I want to use the "UPSERT" syntax for returning an ID if it exists, or inserting a record and returning the new ID if it does not exist. INSERT INTO users(email, name) VALUES('u...@domain.tld', 'User') ON CONFLICT (email) DO NOTHING RETURNING user_id, (xmax::text::int > 0) as
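For context, with DO NOTHING the conflicting row is never touched, so RETURNING produces zero rows on a conflict; a sketch against the same assumed users table:

    -- If 'u...@domain.tld' already exists, this returns no row at all,
    -- so the existing user_id cannot be read back this way.
    INSERT INTO users (email, name)
    VALUES ('u...@domain.tld', 'User')
    ON CONFLICT (email) DO NOTHING
    RETURNING user_id;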

Re: Use of Port 5433 with Postgresql 9.6

2018-01-01 Thread Paul Jungwirth
On 01/01/2018 10:40 AM, Jeff Janes wrote: The Ubuntu packages use 5433 if you already have something (either a different packaged version, or an unpackaged system) running on 5432. Also, on Ubuntu many of the Postgres utilities obey an envvar like `PGCLUSTER=9.5/main` to let you choose
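A small shell illustration of that envvar with the Debian/Ubuntu postgresql-common wrapper scripts (cluster name taken from the message; the SHOW command is just one way to confirm which cluster answered):

    # Point the wrapper scripts at a specific cluster rather than the default.
    export PGCLUSTER=9.5/main
    psql -c 'SHOW port;'    # reports the port this cluster is listening on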

Re: Use of Port 5433 with Postgresql 9.6

2018-01-01 Thread Jeff Janes
On Mon, Jan 1, 2018 at 12:42 PM, Graeme wrote: > If the default port for v9.6 is 5433, why does the utility pg_isready > still default to searching for 5432? The Ubuntu packages use 5433 if you already have something (either a different packaged version, or an unpackaged

Re: Use of Port 5433 with Postgresql 9.6

2018-01-01 Thread Andreas Kretschmer
On 1 January 2018 18:42:28 CET, Graeme wrote: >If the default port for v9.6 is 5433, why does the utility pg_isready >still default to searching for 5432? The default port for PostgreSQL is 5432. You can use the option -p or --port to specify another port. You can also set
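For example (5433 here stands in for whichever port the server actually uses):

    pg_isready -p 5433               # probe a server on a non-default port
    psql --port=5433 -c 'SHOW port;' # connect explicitly and confirm
    export PGPORT=5433               # environment variable honored by libpq clients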

Re: Use of Port 5433 with Postgresql 9.6

2018-01-01 Thread Karsten Hilbert
On Mon, Jan 01, 2018 at 05:42:28PM +0000, Graeme wrote: > If the default port for v9.6 is 5433, It is not. > why does the utility pg_isready still > default to searching for 5432? Which is why. Karsten

Re: Slow system due to ReorderBufferGetTupleBuf?

2018-01-01 Thread Peter Geoghegan
On Mon, Jan 1, 2018 at 8:56 AM, Martin Moore wrote: > Can someone shed some light on this and advise how to prevent it reoccurring? You're using v10, which has these two commits:

Use of Port 5433 with Postgresql 9.6

2018-01-01 Thread Graeme
If the default port for v9.6 is 5433, why does the utility pg_isready still default to searching for 5432? Graeme

Slow system due to ReorderBufferGetTupleBuf?

2018-01-01 Thread Martin Moore
Postgres v10 on Debian stretch. I’m suffering from an occasionally very slow system. A few weeks ago someone mentioned using perf. I’ve installed it and caught the system during a slow period. It shows the following as the top CPU users: 9.09% postgres [.]
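For reference, a rough sketch of the kind of perf session that yields such a profile (the 30-second window is an arbitrary choice):

    # Sample all CPUs with call graphs for 30 seconds, then inspect.
    perf record -a -g -- sleep 30
    perf report --sort symbol

    # Or watch the hottest symbols live during a slow period:
    perf top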

Re: Selecting a JSON object of arrays from a PostgreSQL table

2018-01-01 Thread Alexander Farber
Hi Ivan, On Mon, Jan 1, 2018 at 3:34 PM, Ivan E. Panchenko < i.panche...@postgrespro.ru> wrote: > > select json_object_agg(gid, y) from > ( > select gid, jsonb_agg(row_to_json(chat)) y > from chat > group by gid > ) x; > > > 01.01.2018 16:41, Alexander Farber

Re: Selecting a JSON object of arrays from a PostgreSQL table

2018-01-01 Thread Ivan E. Panchenko
Hi Alex! Why not: select json_object_agg(gid, y) from ( select gid, jsonb_agg(row_to_json(chat)) y from chat group by gid ) x; Regards, Ivan 01.01.2018 16:41, Alexander Farber wrote: Hello and happy new year! I have prepared a simple SQL Fiddle
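A self-contained version of Ivan's query for experimentation; the msg column and the sample rows are assumptions, since the original CREATE TABLE is truncated in the archive:

    CREATE TABLE chat (
        gid     integer,    /* game id */
        uid     integer,    /* user id */
        created timestamptz DEFAULT now(),
        msg     text
    );

    INSERT INTO chat (gid, uid, msg) VALUES
        (1, 10, 'hello'),
        (1, 20, 'hi'),
        (2, 10, 'good game');

    -- One JSON object keyed by game id; each value is an array of chat rows.
    SELECT json_object_agg(gid, y)
    FROM (
        SELECT gid, jsonb_agg(row_to_json(chat)) AS y
        FROM chat
        GROUP BY gid
    ) x;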

Selecting a JSON object of arrays from a PostgreSQL table

2018-01-01 Thread Alexander Farber
Hello and happy new year! I have prepared a simple SQL Fiddle demonstrating my problem: http://sqlfiddle.com/#!17/2c9c5/1 In a two-player game I store user chats in a table: CREATE TABLE chat( gid integer, /* game id */ uid integer, /* user id */ created

Re: invalid memory alloc request size 576460752438159360

2018-01-01 Thread Ibrahim Edib Kokdemir
Hi Peter, today we took a dump of the database, restored it to another empty cluster, and ran the queries on it for test purposes: no problem at all. All errors are gone.
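For reference, a generic shape of such a test (database names and the custom-format choice are assumptions, not details from the message):

    pg_dump -Fc sourcedb -f sourcedb.dump   # dump the suspect database
    createdb targetdb                       # create a fresh, empty database
    pg_restore -d targetdb sourcedb.dump    # restore and re-run the queries there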