Re: How can I retrieve double or int data type for libpq

2018-06-12 Thread Laurenz Albe
a wrote:
> I want to use the C library to connect to a PostgreSQL server, and I have two questions:

This is all well documented in
https://www.postgresql.org/docs/current/static/libpq.html

> 1. Can I fetch a whole row's data at once, instead of using PQgetvalue?

PQgetvalue *is* the way to fetch the row's data.

There is no call to fetch *all* columns at once, but that is
not necessary.

> 2. PQgetvalue returns all data as text, so I would have to convert it into
> the appropriate data type. Is there a way of getting data in its original
> format?

You will get the data in binary form if you specify a resultFormat of 1
with PQexecParams.

Note, however, that they will be in the native binary format of the *server*.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: Logging

2018-06-12 Thread Andrew Bartley
Thanks All,

It seems that the related DETAIL message appears if the query is
successful.  On error, the DETAIL log line is missing...  This makes
debugging difficult, considering the insert is coming from DreamFactory via
a REST POST.

I am finding it very difficult navigating my way through the brave new
world of SaaS and black-box type applications...  Maybe 32 years as a
database programmer is just too long.

Thanks

Andrew




On Wed, 13 Jun 2018 at 14:12 Adrian Klaver wrote:

> On 06/12/2018 08:25 PM, David G. Johnston wrote:
> > On Tuesday, June 12, 2018, Andrew Bartley wrote:
> >
> >
> >
> > On Wed, 13 Jun 2018 at 12:43 Laurenz Albe wrote:
> >
> >
> > log_min_duration_statement = 0
> >
> > [...]
> >
> >
> > log_min_duration_statement -1
> >
> >
> > You've disabled statement logging altogether.  The zero value you were
> > directed to use is what causes everything to be logged.
>
> Actually no:
>
>
> https://www.postgresql.org/docs/10/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN
>
> log_min_duration_statement (integer)
>
> "... Setting this to zero prints all statement durations. Minus-one (the
> default) disables logging statement durations.  ..."
>
>
> So -1 only affects logging statements relative to duration.
>
> If you have log_statement set then you will still get statements logged
> if you have log_min_duration_statement = -1 :
>
> "Note
>
> When using this option together with log_statement, the text of
> statements that are logged because of log_statement will not be repeated
> in the duration log message.
> "
>
> This is how I have my logging set up: log_min_duration_statement = -1
> and log_statement = 'mod', and I see statements in the logs.
>
> >
> > David J.
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Logging

2018-06-12 Thread Adrian Klaver

On 06/12/2018 08:25 PM, David G. Johnston wrote:
On Tuesday, June 12, 2018, Andrew Bartley wrote:




On Wed, 13 Jun 2018 at 12:43 Laurenz Albe <laurenz.a...@cybertec.at> wrote:


log_min_duration_statement = 0

[...]


log_min_duration_statement -1


You've disabled statement logging altogether.  The zero value you were 
directed to use is what causes everything to be logged.


Actually no:

https://www.postgresql.org/docs/10/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN

log_min_duration_statement (integer)

"... Setting this to zero prints all statement durations. Minus-one (the 
default) disables logging statement durations.  ..."



So -1 only affects logging statements relative to duration.

If you have log_statement set then you will still get statements logged 
if you have log_min_duration_statement = -1 :


"Note

When using this option together with log_statement, the text of 
statements that are logged because of log_statement will not be repeated 
in the duration log message.

"

This is how I have my logging set up: log_min_duration_statement = -1
and log_statement = 'mod', and I see statements in the logs.
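For reference, the two working setups discussed in this thread, written out as postgresql.conf fragments (a sketch; adjust to taste):

```
# Setup 1 (Laurenz): log every statement via the duration machinery;
# the bind parameter values then appear in a DETAIL line.
log_min_duration_statement = 0

# Setup 2 (Adrian): ignore durations, log data-modifying statements only.
log_min_duration_statement = -1
log_statement = 'mod'
```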



David J.





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Logging

2018-06-12 Thread Andrew Bartley
OK, thanks. Will try.

On Wed, 13 Jun 2018 at 13:25 David G. Johnston wrote:

> On Tuesday, June 12, 2018, Andrew Bartley  wrote:
>
>> On Wed, 13 Jun 2018 at 12:43 Laurenz Albe 
>> wrote:
>>
>
>>> log_min_duration_statement = 0
>>>
>> [...]
>
>>
>> log_min_duration_statement -1
>>
>
> You've disabled statement logging altogether.  The zero value you were
> directed to use is what causes everything to be logged.
>
> David J.
>
>
>


Re: Logging

2018-06-12 Thread David G. Johnston
On Tuesday, June 12, 2018, Andrew Bartley  wrote:

>
>
> On Wed, 13 Jun 2018 at 12:43 Laurenz Albe 
> wrote:
>
>>
>> log_min_duration_statement = 0
>>
> [...]

>
> log_min_duration_statement -1
>

You've disabled statement logging altogether.  The zero value you were
directed to use is what causes everything to be logged.

David J.


How can I retrieve double or int data type for libpq

2018-06-12 Thread a
Hi


I want to use the C library to connect to a PostgreSQL server, and I have two questions:


1. Can I fetch a whole row's data at once, instead of using PQgetvalue?


2. PQgetvalue returns all data as text, so I would have to convert it into the
appropriate data type. Is there a way of getting data in its original format?


Thanks so much!


Shore

Re: Logging

2018-06-12 Thread Andrew Bartley
On Wed, 13 Jun 2018 at 12:43 Laurenz Albe  wrote:

> Andrew Bartley wrote:
> > Can someone please tell me how to log the values being inserted in this
> example..
> >
> > 2018-06-12 22:31:49 UTC-5b2049d4.434ac-STATEMENT:  insert into
> "api_consumers" ("consumer_id", "create_datetime") values ($1, $2).
> >
> > I have tried many different logging options and combinations.
>
> That should automatically be logged as a DETAIL message.
>
> log_min_duration_statement = 0
> log_min_error_statement = log or better
> log_min_messages = log or better
>
> That should do the trick.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com

Thanks for the reply Laurenz,


Current settings

"PostgreSQL 9.6.7, compiled by Visual C++ build 1800, 64-bit"
Azure managed instance


log_min_duration_statement -1
log_min_error_statement DEBUG2
log_min_messages DEBUG2
log_statement ALL
log_error_verbosity VERBOSE

Still no joy.

Thanks Andrew


Re: Logging

2018-06-12 Thread Laurenz Albe
Andrew Bartley wrote:
> Can someone please tell me how to log the values being inserted in this 
> example..
> 
> 2018-06-12 22:31:49 UTC-5b2049d4.434ac-STATEMENT:  insert into 
> "api_consumers" ("consumer_id", "create_datetime") values ($1, $2).
> 
> I have tried many different logging options and combinations. 

That should automatically be logged as a DETAIL message.

log_min_duration_statement = 0
log_min_error_statement = log or better
log_min_messages = log or better

That should do the trick.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: Does pgAgent support chinese, japanese characters?

2018-06-12 Thread a
Thanks a lot for your reply.


I tried to set the host/client encoding; however, when I create new tables 
with Chinese names, it reports an encoding error, and the same happens with 
other queries.

May I ask what encoding pgAgent uses? Could it be reading from a text file? Or 
could I use Notepad++ to convert the file and then copy it in?




-- Original --
From: "Tom Lane"
Date: Tuesday, Jun 12, 2018 9:59 PM
To: "a" <372660...@qq.com>
Cc: "pgsql-general"
Subject: Re: Does pgAgent support chinese, japanese characters?



"a" <372660...@qq.com> writes:
> Hi, I'm using pgAgent to set up daily or monthly tasks.
> I have tables with Chinese or Japanese names. They run OK in a normal
> SQL script, but pgAgent SQL jobs report them as non-recognizable.

First guess is that client_encoding is getting set differently in the
pgAgent-initiated sessions than elsewhere, causing incorrect character set
conversions to happen.

> I'm using Windows Server 2008, PostgreSQL 10, and pgAgent 3.4. Might there be
> a way of supporting different languages?

... unfortunately, I don't know anything about how to adjust that
in Windows.  A brute-force way would be to issue "set client_encoding"
in your scripts, but there may be a better way.

regards, tom lane
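For what it's worth, the brute-force route Tom mentions would look something like this at the top of each pgAgent job step (the encoding name is an assumption — match it to your database):

```
-- Hypothetical first statement of a pgAgent job step, pinning the
-- session encoding regardless of what the pgAgent session defaulted to.
SET client_encoding = 'UTF8';
```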

Re: Print pg_lsn as a number?

2018-06-12 Thread Andres Freund
Hi,

On 2018-06-13 09:18:21 +0900, Michael Paquier wrote:
> We map the pg_wal_lsn_diff result to a numeric, so it could make sense to
> use numeric here as well. Numeric is not the greatest choice, by the way,
> since an LSN is an int64 internally, but it is more portable for any
> (unlikely-to-happen) future changes.

The reason not to use int64 is that it's signed, while LSNs are
unsigned; you couldn't represent all LSNs without wrapping into
negative values.

Greetings,

Andres Freund



Re: Print pg_lsn as a number?

2018-06-12 Thread Michael Paquier
On Tue, Jun 12, 2018 at 10:39:43AM -0700, Andres Freund wrote:
> On 2018-06-12 09:31:49 -0700, Scott Stroupe wrote:
>> According to the documentation[1], pg_lsn is a 64-bit integer that's
>> printed as two hex numbers separated by a slash, e.g. 68/1225BB70. Is
>> there a way to get the 64-bit integer in a common numeric
>> representation instead of the peculiar hex-slash-hex representation?
> 
> What do you want to do with the LSN?

While fixing the slot advance stuff, I would have liked to have an
automatic test able to reproduce the crash where the slot is first
advanced to a page boundary, and then moved forward again.  However, it
happens that this is not that easy to do, so I would be rather
supportive of at least the following operations:
pg_lsn + numeric = pg_lsn.
pg_lsn % numeric = numeric.

We map the pg_wal_lsn_diff result to a numeric, so it could make sense to
use numeric here as well. Numeric is not the greatest choice, by the way,
since an LSN is an int64 internally, but it is more portable for any
(unlikely-to-happen) future changes.

Using the segment size value in pg_settings, you could also advance the
LSN worth a full segment for example...
--
Michael




Re: pg_upgrade 10.2

2018-06-12 Thread Jerry Sievers
Murthy Nunna  writes:



BTW, this message was and remained cross-posted to 3 groups, which is
considered bad style around here; I was negligent too in the previous
reply, which also went out to all of them.

Please take note.

Thanks


-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800



Re: pg_upgrade 10.2

2018-06-12 Thread Jerry Sievers
Murthy Nunna  writes:

> Jerry,
>
> OMG, I think you nailed this... I know what I did. I cut/pasted the
> command from an e-mail... I have seen this issue before with stuff not

Oh!  I suggest you lose that habit ASAP before ever issuing another
command to anything :-)

> related to postgres. But then those commands failed in syntax error
> and then you know what you did wrong.
>
> Similarly, I expect pg_upgrade to throw an error if it finds something it 
> doesn't understand instead of ignoring and causing damage. Don't you agree?

Well, pg_upgrade might never have seen your $silly-dash since possibly
your shell or terminal driver swallowed it.

>
> Thanks for pointing that out. I will redo my upgrade.
>
> -r -v -k -c   --- good flags, no UTF-8
> -r -v -k –c   --- bad flags
>
>
>
>
> -Original Message-
> From: Jerry Sievers [mailto:gsiever...@comcast.net] 
> Sent: Tuesday, June 12, 2018 6:24 PM
> To: Murthy Nunna 
> Cc: Adrian Klaver ; 
> pgsql-general@lists.postgresql.org; pgsql-ad...@lists.postgresql.org; 
> pgsql-performa...@lists.postgresql.org
> Subject: Re: pg_upgrade 10.2
>
> Murthy Nunna  writes:
>
>> Hi Adrian,
>>
>> Port numbers are correct.
>>
>> I moved the position of -c (-p 5433 -P 5434 -c -r -v). Now it is NOT 
>> complaining about old cluster running. However, I am running into a 
>> different problem.
>
> I noted in your earlier message the final -c... the dash was not a regular 
> 7bit ascii char but some UTF or whatever dash char.
>
> I wonder if that's what you fed your shell and it caused a silent parsing 
> issue, eg the -c dropped.
>
> But of course email clients wrap and mangle text like that all sorts of fun 
> ways so lordy knows just what you originally sent :-)
>
> FWIW
>
>
>>
>> New cluster database "ifb_prd_last" is not empty
>> Failure, exiting
>>
>> Note: ifb_prd_last is not new cluster. It is actually old cluster.
>>
>> Is this possibly because in one of my earlier attempts where I 
>> shutdown old cluster and ran pg_upgrade with -c at the end of the 
>> command line. I think -c was ignored and my cluster has been upgraded 
>> in that attempt. Is that possible?
>>
>>
>> -Original Message-
>> From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
>> Sent: Tuesday, June 12, 2018 4:35 PM
>> To: Murthy Nunna ; 
>> pgsql-general@lists.postgresql.org; pgsql-ad...@lists.postgresql.org; 
>> pgsql-performa...@lists.postgresql.org
>> Subject: Re: pg_upgrade 10.2
>>
>> On 06/12/2018 02:18 PM, Murthy Nunna wrote:
>>> pg_upgrade -V
>>> pg_upgrade (PostgreSQL) 10.4
>>> 
>>> pg_upgrade -b /fnal/ups/prd/postgres/v9_3_14_x64/Linux-2-6/bin -B 
>>> /fnal/ups/prd/postgres/v10_4_x64/Linux-2-6/bin -d 
>>> /data0/pgdata/ifb_prd_last -D /data0/pgdata/ifb_prd_last_104 -p 5433 
>>> -P 5434 -r -v –c
>>> 
>>>
>>
>> Looks good to me. The only thing that stands out is that in your original 
>> post you had:
>>
>> -p 5432
>>
>> and above you have:
>>
>> -p 5433
>>
>> Not sure if that makes a difference.
>>
>> The only suggestion I have at the moment is to move -c from the end of the 
>> line to somewhere earlier on the chance that there is a bug that is not 
>> finding it when it's at the end.
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>
> --
> Jerry Sievers
> Postgres DBA/Development Consulting
> e: postgres.consult...@comcast.net
> p: 312.241.7800

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800



Re: pg_upgrade 10.2

2018-06-12 Thread Adrian Klaver

On 06/12/2018 02:49 PM, Murthy Nunna wrote:

Hi Adrian,

Port numbers are correct.

I moved the position of -c (-p 5433 -P 5434 -c -r -v). Now it is NOT 
complaining about old cluster running. However, I am running into a different 
problem.

New cluster database "ifb_prd_last" is not empty
Failure, exiting

Note: ifb_prd_last is not new cluster. It is actually old cluster.

Is this possibly because in one of my earlier attempts where I shutdown old 
cluster and ran pg_upgrade with -c at the end of the command line. I think -c 
was ignored and my cluster has been upgraded in that attempt. Is that possible?


I don't think so, because it exited before it got to the upgrading part.


--
Adrian Klaver
adrian.kla...@aklaver.com



RE: pg_upgrade 10.2

2018-06-12 Thread Murthy Nunna
Jerry,

OMG, I think you nailed this... I know what I did. I cut/pasted the command 
from an e-mail... I have seen this issue before with stuff not related to 
postgres. But those commands failed with a syntax error, and then you knew 
what you did wrong.

Similarly, I expect pg_upgrade to throw an error if it finds something it 
doesn't understand, instead of ignoring it and causing damage. Don't you agree?

Thanks for pointing that out. I will redo my upgrade.

-r -v -k -c --- good flags, no UTF-8
-r -v -k –c --- bad flags




-Original Message-
From: Jerry Sievers [mailto:gsiever...@comcast.net] 
Sent: Tuesday, June 12, 2018 6:24 PM
To: Murthy Nunna 
Cc: Adrian Klaver ; 
pgsql-general@lists.postgresql.org; pgsql-ad...@lists.postgresql.org; 
pgsql-performa...@lists.postgresql.org
Subject: Re: pg_upgrade 10.2

Murthy Nunna  writes:

> Hi Adrian,
>
> Port numbers are correct.
>
> I moved the position of -c (-p 5433 -P 5434 -c -r -v). Now it is NOT 
> complaining about old cluster running. However, I am running into a different 
> problem.

I noted in your earlier message the final -c... the dash was not a regular 7bit 
ascii char but some UTF or whatever dash char.

I wonder if that's what you fed your shell and it caused a silent parsing 
issue, eg the -c dropped.

But of course email clients wrap and mangle text like that all sorts of fun 
ways so lordy knows just what you originally sent :-)

FWIW


>
> New cluster database "ifb_prd_last" is not empty
> Failure, exiting
>
> Note: ifb_prd_last is not new cluster. It is actually old cluster.
>
> Is this possibly because in one of my earlier attempts where I 
> shutdown old cluster and ran pg_upgrade with -c at the end of the 
> command line. I think -c was ignored and my cluster has been upgraded 
> in that attempt. Is that possible?
>
>
> -Original Message-
> From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
> Sent: Tuesday, June 12, 2018 4:35 PM
> To: Murthy Nunna ; 
> pgsql-general@lists.postgresql.org; pgsql-ad...@lists.postgresql.org; 
> pgsql-performa...@lists.postgresql.org
> Subject: Re: pg_upgrade 10.2
>
> On 06/12/2018 02:18 PM, Murthy Nunna wrote:
>> pg_upgrade -V
>> pg_upgrade (PostgreSQL) 10.4
>> 
>> pg_upgrade -b /fnal/ups/prd/postgres/v9_3_14_x64/Linux-2-6/bin -B 
>> /fnal/ups/prd/postgres/v10_4_x64/Linux-2-6/bin -d 
>> /data0/pgdata/ifb_prd_last -D /data0/pgdata/ifb_prd_last_104 -p 5433 
>> -P 5434 -r -v –c
>> 
>>
>
> Looks good to me. The only thing that stands out is that in your original 
> post you had:
>
> -p 5432
>
> and above you have:
>
> -p 5433
>
> Not sure if that makes a difference.
>
> The only suggestion I have at the moment is to move -c from the end of the 
> line to somewhere earlier on the chance that there is a bug that is not 
> finding it when it's at the end.
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com

--
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800


Logging

2018-06-12 Thread Andrew Bartley
Hi all,

Can someone please tell me how to log the values being inserted in this
example..

2018-06-12 22:31:49 UTC-5b2049d4.434ac-STATEMENT:  insert into
"api_consumers" ("consumer_id", "create_datetime") values ($1, $2).

I have tried many different logging options and combinations.

Thanks

Andrew Bartley


RE: pg_upgrade 10.2

2018-06-12 Thread Murthy Nunna
Hi Adrian,

Port numbers are correct.

I moved the position of -c (-p 5433 -P 5434 -c -r -v). Now it is NOT 
complaining about old cluster running. However, I am running into a different 
problem.

New cluster database "ifb_prd_last" is not empty
Failure, exiting

Note: ifb_prd_last is not new cluster. It is actually old cluster.

Is this possibly because in one of my earlier attempts where I shutdown old 
cluster and ran pg_upgrade with -c at the end of the command line. I think -c 
was ignored and my cluster has been upgraded in that attempt. Is that possible?


-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com] 
Sent: Tuesday, June 12, 2018 4:35 PM
To: Murthy Nunna ; pgsql-general@lists.postgresql.org; 
pgsql-ad...@lists.postgresql.org; pgsql-performa...@lists.postgresql.org
Subject: Re: pg_upgrade 10.2

On 06/12/2018 02:18 PM, Murthy Nunna wrote:
> pg_upgrade -V
> pg_upgrade (PostgreSQL) 10.4
> 
> pg_upgrade -b /fnal/ups/prd/postgres/v9_3_14_x64/Linux-2-6/bin -B 
> /fnal/ups/prd/postgres/v10_4_x64/Linux-2-6/bin -d 
> /data0/pgdata/ifb_prd_last -D /data0/pgdata/ifb_prd_last_104 -p 5433 
> -P 5434 -r -v –c
> 
>

Looks good to me. The only thing that stands out is that in your original post 
you had:

-p 5432

and above you have:

-p 5433

Not sure if that makes a difference.

The only suggestion I have at the moment is to move -c from the end of the line 
to somewhere earlier on the chance that there is a bug that is not finding it 
when it's at the end.


--
Adrian Klaver
adrian.kla...@aklaver.com


Re: Out of memory error with PG10.3, 10.4 but not 9.3.19

2018-06-12 Thread Christophe Combet
On Wednesday, 16 May 2018 at 09:48:54 UTC+2, ChatPristi wrote:

 Dear all,

I have a SELECT command (on partitioned tables) that failed with:

psql:/tmp/query.txt:1: ERROR:  out of memory
DETAIL:  Cannot enlarge string buffer containing 1073741818 bytes by 32 more 
bytes.

I got the error with PG 10.3 and 10.4 on CentOS 7, up to date. The command 
works with a smaller database.
The command works with the same database with PG 9.3.19 on RHEL 6.9, up to date.

I attach the EXPLAIN SELECT command.

Apart from rewriting the query, is there any parameter in postgresql.conf that 
could be changed to make the query work?
Thank you very much for any help.

Hello,
Any ideas where the OOM comes from in 10.4, while 9.3 is working?

Thanks.

  

Re: pg_upgrade 10.2

2018-06-12 Thread Adrian Klaver

On 06/12/2018 02:18 PM, Murthy Nunna wrote:

pg_upgrade -V
pg_upgrade (PostgreSQL) 10.4

pg_upgrade -b /fnal/ups/prd/postgres/v9_3_14_x64/Linux-2-6/bin -B 
/fnal/ups/prd/postgres/v10_4_x64/Linux-2-6/bin -d /data0/pgdata/ifb_prd_last -D 
/data0/pgdata/ifb_prd_last_104 -p 5433 -P 5434 -r -v –c




Looks good to me. The only thing that stands out is that in your 
original post you had:


-p 5432

and above you have:

-p 5433

Not sure if that makes a difference.

The only suggestion I have at the moment is to move -c from the end of 
the line to somewhere earlier on the chance that there is a bug that is 
not finding it when it's at the end.



--
Adrian Klaver
adrian.kla...@aklaver.com



RE: pg_upgrade 10.2

2018-06-12 Thread Murthy Nunna
pg_upgrade -V
pg_upgrade (PostgreSQL) 10.4

pg_upgrade -b /fnal/ups/prd/postgres/v9_3_14_x64/Linux-2-6/bin -B 
/fnal/ups/prd/postgres/v10_4_x64/Linux-2-6/bin -d /data0/pgdata/ifb_prd_last -D 
/data0/pgdata/ifb_prd_last_104 -p 5433 -P 5434 -r -v –c


-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com] 
Sent: Tuesday, June 12, 2018 4:13 PM
To: Murthy Nunna ; pgsql-general@lists.postgresql.org; 
pgsql-ad...@lists.postgresql.org; pgsql-performa...@lists.postgresql.org
Subject: Re: pg_upgrade 10.2

On 06/12/2018 01:58 PM, Murthy Nunna wrote:
> Thanks Adrian.
> I removed "-k" flag. But still got same error.
> 
> There seems to be a postmaster servicing the old cluster.
> Please shutdown that postmaster and try again.
> Failure, exiting
> 

Well according to the code in pg_upgrade.c that message should not be reached 
when the check option is specified:

if (!user_opts.check)
    pg_fatal("There seems to be a postmaster servicing the old cluster.\n"
             "Please shutdown that postmaster and try again.\n");
else
    *live_check = true;

Can we see the actual command you ran?


--
Adrian Klaver
adrian.kla...@aklaver.com


Re: pg_upgrade 10.2

2018-06-12 Thread Adrian Klaver

On 06/12/2018 01:58 PM, Murthy Nunna wrote:

Thanks Adrian.
I removed "-k" flag. But still got same error.

There seems to be a postmaster servicing the old cluster.
Please shutdown that postmaster and try again.
Failure, exiting



Well according to the code in pg_upgrade.c that message should not be 
reached when the check option is specified:


if (!user_opts.check)
    pg_fatal("There seems to be a postmaster servicing the old cluster.\n"
             "Please shutdown that postmaster and try again.\n");
else
    *live_check = true;

Can we see the actual command you ran?


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: pg_upgrade and wraparound

2018-06-12 Thread Alexander Shutyaev
 Back again,

>> Alexander, could you hack things up so autovacuum logging is enabled
>> (log_autovacuum_min_duration=0), and see whether it's triggered?

I've changed this config setting in both 9.6 and 10.4 postgresql.conf, then
I've ran pg_upgrade once more.

However, I'm not sure how I can see whether autovacuum was triggered or not.
I've tried grepping the logs for lines containing both 'vacuum' and 'auto'
(case-insensitive) - there were none. If you can be more specific, I can
look for anything else.

I've googled how one can see that autovacuum is working, and found
this query, which I ran on the 10.4 cluster:

select count(*) from pg_stat_all_tables where last_autovacuum is not null;
 count
---
 0
(1 row)

So it seems autovacuum is indeed not working, just as you proposed.

If I correctly summarized all your responses, the problem is that:

1) pg_restore (as part of pg_upgrade) inserts each large object in a
different transaction

That seems true to me given the log output - each time an object is
inserted the wraparound warning decrements by 1

2) the autovacuum doesn't work while the database is restored

That also seems true (see above)

3) the number of large objects is so big that as they are restored the
transaction wraparound occurs

Here's the number of large objects taken from the 9.6 cluster (spaces added
manually for clarity):

select count(*) from pg_largeobject_metadata ;
   count
---
 133 635 871
(1 row)

If I've googled correctly, the transaction number is a 32-bit integer, so
its limit is 2 147 483 647, which is a lot more. I guess I'm missing
something.

This is just my attempt to summarize our progress so far.

I'm further open to your suggestions.

2018-06-12 14:32 GMT+03:00 Daniel Verite :

> Andres Freund wrote:
>
> > I'm not entirely clear why pg_restore appears to use a separate
> > transaction for each large object, surely exacerbating the problem.
>
> To make sure that per-object locks don't fill up the shared
> lock table?
> There might be hundreds of thousands of large objects.
> If it had to restore N objects per transaction, would it know
> how to compute N that is large enough to be effective
> and small enough not to exhaust the shared table?
>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: http://www.manitou-mail.org
> Twitter: @DanielVerite
>


RE: pg_upgrade 10.2

2018-06-12 Thread Murthy Nunna
Thanks Adrian.
I removed "-k" flag. But still got same error.

There seems to be a postmaster servicing the old cluster.
Please shutdown that postmaster and try again.
Failure, exiting

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com] 
Sent: Tuesday, June 12, 2018 3:48 PM
To: Murthy Nunna ; pgsql-general@lists.postgresql.org; 
pgsql-ad...@lists.postgresql.org; pgsql-performa...@lists.postgresql.org
Subject: Re: pg_upgrade 10.2

On 06/12/2018 01:34 PM, Murthy Nunna wrote:
> In older versions of pg_upgrade (e.g from 9.2 to 9.3), I was able to 
> run pg_upgrade without stopping old cluster using the check flag.
> 
> pg_upgrade -b  -B  -d  -D  -p 
> 5432 -P 5434 -r -v -k -c
> 
> Note the "c" flag at the end

I take the below to mean it should work:

https://www.postgresql.org/docs/10/static/pgupgrade.html
> "You can use pg_upgrade --check to perform only the checks, even if the
> old server is still running. pg_upgrade --check will also outline any manual
> adjustments you will need to make after the upgrade. If you are going to be
> using link mode, you should use the --link option with --check to enable
> link-mode-specific checks."

Might want to try without -k to see what happens.

More comments below.

> However pg_upgrade in 10 (I tried from 9.3 to 10.4), when I did not 
> stop the old cluster, the upgrade failed:
> 
> ***
> 
> There seems to be a postmaster servicing the old cluster.
> 
> Please shutdown that postmaster and try again.
> 
> Failure, exiting
> 
> Is this expected?
> 
> Also, when I stopped the old cluster and ran pg_upgrade with "-c" 
> flag, the file global/pg_control got renamed to global/pg_control.old. 
> The "-c" flag never renamed anything in the old cluster in older 
> pg_upgrade

Again seems related to -k:

"
If you ran pg_upgrade without --link or did not start the new server, the old 
cluster was not modified except that, if linking started, a .old suffix was 
appended to $PGDATA/global/pg_control. To reuse the old cluster, possibly 
remove the .old suffix from $PGDATA/global/pg_control; you can then restart the 
old cluster.
"
> 


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: pg_upgrade 10.2

2018-06-12 Thread Adrian Klaver

On 06/12/2018 01:34 PM, Murthy Nunna wrote:
In older versions of pg_upgrade (e.g from 9.2 to 9.3), I was able to run 
pg_upgrade without stopping old cluster using the check flag.


pg_upgrade -b  -B  -d  -D  -p 5432 
-P 5434 -r -v -k -c


Note the “c” flag at the end


I take the below to mean it should work:

https://www.postgresql.org/docs/10/static/pgupgrade.html
"You can use pg_upgrade --check to perform only the checks, even if the 
old server is still running. pg_upgrade --check will also outline any 
manual adjustments you will need to make after the upgrade. If you are 
going to be using link mode, you should use the --link option with 
--check to enable link-mode-specific checks."


Might want to try without -k to see what happens.

More comments below.

However pg_upgrade in 10 (I tried from 9.3 to 10.4), when I did not stop 
the old cluster, the upgrade failed:


***

There seems to be a postmaster servicing the old cluster.

Please shutdown that postmaster and try again.

Failure, exiting

Is this expected?

Also, when I stopped the old cluster and ran pg_upgrade with “-c” flag, 
the file global/pg_control got renamed to global/pg_control.old. The 
“-c” flag never renamed anything in the old cluster in older pg_upgrade


Again seems related to -k:

"
If you ran pg_upgrade without --link or did not start the new server, 
the old cluster was not modified except that, if linking started, a .old 
suffix was appended to $PGDATA/global/pg_control. To reuse the old 
cluster, possibly remove the .old suffix from $PGDATA/global/pg_control; 
you can then restart the old cluster.

"





--
Adrian Klaver
adrian.kla...@aklaver.com



pg_upgrade 10.2

2018-06-12 Thread Murthy Nunna
In older versions of pg_upgrade (e.g from 9.2 to 9.3), I was able to run 
pg_upgrade without stopping old cluster using the check flag.

pg_upgrade -b  -B  -d  -D  -p 5432 -P 
5434 -r -v -k -c

Note the "c" flag at the end

However pg_upgrade in 10 (I tried from 9.3 to 10.4), when I did not stop the 
old cluster, the upgrade failed:

***
There seems to be a postmaster servicing the old cluster.
Please shutdown that postmaster and try again.
Failure, exiting

Is this expected?

Also, when I stopped the old cluster and ran pg_upgrade with "-c" flag, the 
file global/pg_control got renamed to global/pg_control.old. The "-c" flag 
never renamed anything in the old cluster in older pg_upgrade




Re: Print pg_lsn as a number?

2018-06-12 Thread Scott Stroupe


> On Jun 12, 2018, at 10:39 AM, Andres Freund  wrote:
> 
> Hi,
> 
> On 2018-06-12 09:31:49 -0700, Scott Stroupe wrote:
>> According to the documentation[1], pg_lsn is a 64-bit integer that's
>> printed as two hex numbers separated by a slash, e.g. 68/1225BB70. Is
>> there a way to get the 64-bit integer in a common numeric
>> representation instead of the peculiar hex-slash-hex representation?
> 
> What do you want to do with the LSN?
> 
> Greetings,
> 
> Andres Freund


I want to put pg_lsns into a tool that only understands common numeric 
representations, to do simple math and comparisons.

Regards,
Scott




Re: What does Natvie Posgres mean?

2018-06-12 Thread Ron
This, to me, is the true meaning of "native PostgreSQL" (as opposed to 
"stock PostgreSQL", which is uncustomized code).  However, if the job-wanted 
post was written by an HR flunky, it could mean anything.



On 06/12/2018 01:11 PM, Benjamin Scherrey wrote:
In my experience it refers to *development directly via SQL against the 
Postgres server* rather than via an ORM like Django or the like. A 
remarkably high percentage of applications backed by Postgres have been 
written by developers that have never actually seen or written SQL code 
directly. It's all generated (often quite naively) by the object 
relational mapper. Requesting "native" developers means that they want you 
to understand how the DB actually behaves and to be able to generate 
optimal SQL code and proper DDLs that fit the application domain correctly.


  - - Ben Scherrey

On Wed, Jun 13, 2018, 12:59 AM David G. Johnston  wrote:


On Tuesday, June 12, 2018, bto...@computer.org  wrote:


When someone, e.g., as appeared in a recent and some older
pgsql-jobs messages, says "Native Postgres", what do you suppose
that means?

Does it mean something different than just "PostgreSQL"?


Likely it means the open source product built directly from the source
code published here (or packages derived there-from).  As opposed to
say AWS RDS or EnterpriseDB or various other forks of the product
available in the wild.

David J.



--
Angular momentum makes the world go 'round.


Re: ERROR: found multixact from before relminmxid

2018-06-12 Thread Andres Freund
Hi,

On 2018-06-08 13:30:33 -0500, Jeremy Finzel wrote:
>Avoid spuriously marking pages as all-visible (Dan Wood, Pavan Deolasee,
>Álvaro Herrera)
> 
>This could happen if some tuples were locked (but not deleted). While
>queries would still function correctly, vacuum would normally ignore such
>pages, with the long-term effect that the tuples were never frozen. In
>recent releases this would eventually result in errors such as "found
>multixact n from before relminmxid n".

Oh, I already had forgotten about that one... It does sound like a
likely explanation of your issue. Hard to tell without investigating a
*lot* more closely than I realistically can do remotely.  It seems quite
possible to be the cause - I'd strongly suggest upgrading to prevent 
further occurrences, or at least to exclude it as a cause.

Greetings,

Andres Freund



Re: Transparent partitioning

2018-06-12 Thread Adrian Klaver

On 06/12/2018 11:12 AM, Bráulio Bhavamitra wrote:

Hi postgresql developers,

Are there any plans to have transparent partitioning through clustered 
indexes in such a way that a table is partitioned in the backend 
according to a clustering index without any further user configuration?


Something like this?:

https://www.postgresql.org/docs/10/static/sql-createtable.html

"PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ 
opclass ] [, ...] )


The optional PARTITION BY clause specifies a strategy of 
partitioning the table. The table thus created is called a partitioned 
table. The parenthesized list of columns or expressions forms the 
partition key for the table. When using range partitioning, the 
partition key can include multiple columns or expressions (up to 32, but 
this limit can be altered when building PostgreSQL), but for list 
partitioning, the partition key must consist of a single column or 
expression. If no B-tree operator class is specified when creating a 
partitioned table, the default B-tree operator class for the datatype 
will be used. If there is none, an error will be reported.


A partitioned table is divided into sub-tables (called partitions), 
which are created using separate CREATE TABLE commands. The partitioned 
table is itself empty. A data row inserted into the table is routed to a 
partition based on the value of columns or expressions in the partition 
key. If no existing partition matches the values in the new row, an 
error will be reported.


Partitioned tables do not support UNIQUE, PRIMARY KEY, EXCLUDE, or 
FOREIGN KEY constraints; however, you can define these constraints on 
individual partitions.

"



Best regards,
Braulio Oliveira



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: PG on AWS RDS and IAM authentication

2018-06-12 Thread Jernigan, Kevin
We are working on adding IAM authentication for both RDS for PostgreSQL and 
Aurora PostgreSQL, and expect to release support for both soon.

Thanks,

-- 
Kevin Jernigan
Senior Product Manager
Amazon Aurora PostgreSQL
1-415-710-8828 (m)
k...@amazon.com
27 Melcher Street
Boston, MA  02210
On 6/12/18, 1:11 PM, "Adrian Klaver"  wrote:

On 06/12/2018 08:45 AM, Ravi Krishna wrote:
> 
> 
> 
> As per https://forums.aws.amazon.com/thread.jspa?threadID=258822=0 
there was no IAM authentication for PG on AWS RDS. (It is there for MySQL).
> However the link is a year old. Has it changed since then?  Can we use 
IAM authentication for PG.

IAM is an AWS feature, so you should probably talk to them to get the 
official word.

The below would seem to indicate it is not possible:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html

> 
> Thanks
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com





Transparent partitioning

2018-06-12 Thread Bráulio Bhavamitra
Hi postgresql developers,

Are there any plans to have transparent partitioning through clustered
indexes in such a way that a table is partitioned in the backend according
to a clustering index without any further user configuration?

Best regards,
Braulio Oliveira


Re: What does Natvie Posgres mean?

2018-06-12 Thread Benjamin Scherrey
In my experience it refers to development directly via SQL against the
Postgres server rather than via an ORM like Django or the like. A
remarkably high percentage of applications backed by Postgres have been
written by developers that have never actually seen or written SQL code
directly. It's all generated (often quite naively) by the object relational
mapper. Requesting "native" developers means that they want you to
understand how the DB actually behaves and to be able to generate optimal
SQL code and proper DDLs that fit the application domain correctly.

  - - Ben Scherrey

On Wed, Jun 13, 2018, 12:59 AM David G. Johnston 
wrote:

> On Tuesday, June 12, 2018, bto...@computer.org 
> wrote:
>
>>
>> When someone, e.g., as appeared in a recent and some older pgsql-jobs
>> messages, says "Native Postgres", what do you suppose that means?
>>
>> Does it mean something different than just "PostgreSQL"?
>>
>
> Likely it means the open source product built directly from the source
> code published here (or packages derived there-from).  As opposed to say
> AWS RDS or EnterpriseDB or various other forks of the product available in
> the wild.
>
> David J.
>
>


Re: Print pg_lsn as a number?

2018-06-12 Thread Francisco Olarte
On Tue, Jun 12, 2018 at 6:31 PM, Scott Stroupe  wrote:
> According to the documentation[1], pg_lsn is a 64-bit integer that's printed 
> as two hex numbers separated by a slash, e.g. 68/1225BB70. Is there a way to 
> get the 64-bit integer in a common numeric representation instead of the 
> peculiar hex-slash-hex representation?
...
> [1] https://www.postgresql.org/docs/current/static/datatype-pg-lsn.html

Quoting your own [1] ref :"Two LSNs can be subtracted using the -
operator; the result is the number of bytes separating those
write-ahead log locations."

You can try subtraction from an arbitrary origin ( pg_lsn('0/0')
seems nice, as arbitrary as the Greenwich meridian ), and it worked for me
in

select
  pg_lsn('68/1225BB70')
, pg_lsn('0/0')
, pg_lsn('68/1225BB70') - pg_lsn('0/0')
, to_hex((pg_lsn('68/1225BB70') - pg_lsn('0/0'))::bigint)
;

( http://sqlfiddle.com/#!17/9eecb/16272 )

Reconstructing via simple addition does not work, but you can do
division, modulus, double to_hex, join with '/', cast to pg_lsn if you
like.
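
For going the other way, here is a minimal sketch of the division / modulus /
double to_hex route described above, using the example value from upthread
(this is ad-hoc SQL, not a built-in cast):

```sql
-- split the byte offset back into the high/low 32-bit halves and
-- rejoin them with '/' (4294967296 = 2^32)
SELECT (to_hex(n / 4294967296) || '/' || to_hex(n % 4294967296))::pg_lsn AS lsn
FROM (SELECT (pg_lsn('68/1225BB70') - pg_lsn('0/0'))::bigint AS n) AS s;
-- rejoins to 68/1225BB70
```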

Francisco Olarte.



Re: What does Natvie Posgres mean?

2018-06-12 Thread Adrian Klaver

On 06/12/2018 10:53 AM, bto...@computer.org wrote:


When someone, e.g., as appeared in a recent and some older pgsql-jobs messages, says 
"Native Postgres", what do you suppose that means?


I'm going to say it means the community supported versions, not 
offshoots or forks e.g. AWS RDS Postgres, EDB non-community versions, etc.


Does it mean something different than just "PostgreSQL"?

Is the word "Native" just noise, or does it actually refer to something 
specific?

A quick google search reveals that there apparently is a thing called "postgres-native" 
which is described as "...a native D implementation of the Postgres frontend/backend 
protocol.", but I don't get the sense that this is what is being talked about.

-- B








--
Adrian Klaver
adrian.kla...@aklaver.com



What does Natvie Posgres mean?

2018-06-12 Thread bto...@computer.org


When someone, e.g., as appeared in a recent and some older pgsql-jobs messages, 
says "Native Postgres", what do you suppose that means? 

Does it mean something different than just "PostgreSQL"?

Is the word "Native" just noise, or does it actually refer to something 
specific?

A quick google search reveals that there apparently is a thing called 
"postgres-native" which is described as "...a native D implementation of the 
Postgres frontend/backend protocol.", but I don't get the sense that this is 
what is being talked about.

-- B







Re: Print pg_lsn as a number?

2018-06-12 Thread Andres Freund
Hi,

On 2018-06-12 09:31:49 -0700, Scott Stroupe wrote:
> According to the documentation[1], pg_lsn is a 64-bit integer that's
> printed as two hex numbers separated by a slash, e.g. 68/1225BB70. Is
> there a way to get the 64-bit integer in a common numeric
> representation instead of the peculiar hex-slash-hex representation?

What do you want to do with the LSN?

Greetings,

Andres Freund



Re: Software that can automatically make sense of a DB's tables and ID names

2018-06-12 Thread Adrian Klaver

On 06/12/2018 10:14 AM, Philip Rhoades wrote:

People,

I haven't really been keeping up with what is happening in the PG world 
- now I have an old Discourse DB that I want to extract some of the 
categories and topics from to insert into the current setup.  What I was 
wondering is if anyone has developed software in recent years that can 
look at the DB as a whole and automatically make sense of how all the 
tables relate to each other - assuming a sensible naming convention has 
been used for IDs etc - and allows one to browse the tables easily 
without having to manually type lots of SQL statements.


There are, but the paragraph below seems to indicate you want something else.



Discourse now has a utility for exporting and importing to do this sort 
of stuff but I don't have a running setup for the old data - just a data 
dump . .


So you want a tool to look at a data dump and build SQL?


Thanks,

Phil.



--
Adrian Klaver
adrian.kla...@aklaver.com



Software that can automatically make sense of a DB's tables and ID names

2018-06-12 Thread Philip Rhoades

People,

I haven't really been keeping up with what is happening in the PG world 
- now I have an old Discourse DB that I want to extract some of the 
categories and topics from to insert into the current setup.  What I was 
wondering is if anyone has developed software in recent years that can 
look at the DB as a whole and automatically make sense of how all the 
tables relate to each other - assuming a sensible naming convention has 
been used for IDs etc - and allows one to browse the tables easily 
without having to manually type lots of SQL statements.


Discourse now has a utility for exporting and importing to do this sort 
of stuff but I don't have a running setup for the old data - just a data 
dump . .


Thanks,

Phil.
--
Philip Rhoades

PO Box 896
Cowra  NSW  2794
Australia
E-mail:  p...@pricom.com.au



Re: PG on AWS RDS and IAM authentication

2018-06-12 Thread Adrian Klaver

On 06/12/2018 08:45 AM, Ravi Krishna wrote:




As per https://forums.aws.amazon.com/thread.jspa?threadID=258822=0 there 
was no IAM authentication for PG on AWS RDS. (It is there for MySQL).
However the link is a year old. Has it changed since then?  Can we use IAM 
authentication for PG.


IAM is an AWS feature, so you should probably talk to them to get the 
official word.


The below would seem to indicate it is not possible:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html



Thanks




--
Adrian Klaver
adrian.kla...@aklaver.com



Print pg_lsn as a number?

2018-06-12 Thread Scott Stroupe
According to the documentation[1], pg_lsn is a 64-bit integer that's printed as 
two hex numbers separated by a slash, e.g. 68/1225BB70. Is there a way to get 
the 64-bit integer in a common numeric representation instead of the peculiar 
hex-slash-hex representation?

Thanks,
Scott

[1] https://www.postgresql.org/docs/current/static/datatype-pg-lsn.html




Re: Bad performance with cascaded deletes

2018-06-12 Thread Don Seiler
On Tue, Jun 12, 2018 at 10:48 AM, Don Seiler  wrote:

> On Tue, Jun 12, 2018 at 2:26 AM, Haug Bürger 
> wrote:
>
>>
>> "availability_cache_ac_session_id_fkey" FOREIGN KEY (ac_session_id)
>> REFERENCES zpg_data.session(id) ON DELETE CASCADE
>>
>>
> Do you have an index on availability_cache.ac_session_id? These fields
> are not automatically indexed and that can lead to horrible performance on
> cascading operations like this.
>

I'm blind apparently, it's your PK.

-- 
Don Seiler
www.seiler.us


Re: Bad performance with cascaded deletes

2018-06-12 Thread Don Seiler
On Tue, Jun 12, 2018 at 2:26 AM, Haug Bürger 
wrote:

>
> "availability_cache_ac_session_id_fkey" FOREIGN KEY (ac_session_id)
> REFERENCES zpg_data.session(id) ON DELETE CASCADE
>
>
Do you have an index on availability_cache.ac_session_id? These fields are
not automatically indexed and that can lead to horrible performance on
cascading operations like this.
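
Where the referencing column is not already covered by an index (here it turns
out to be the PK, as noted in the follow-up), the usual fix is a plain index on
the FK column — a hypothetical sketch:

```sql
-- give the cascaded delete a way to find child rows without a sequential
-- scan; PostgreSQL does not create this index automatically for foreign keys
CREATE INDEX availability_cache_ac_session_id_idx
    ON zpg_data.availability_cache (ac_session_id);
```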

-- 
Don Seiler
www.seiler.us


PG on AWS RDS and IAM authentication

2018-06-12 Thread Ravi Krishna




As per https://forums.aws.amazon.com/thread.jspa?threadID=258822=0 there 
was no IAM authentication for PG on AWS RDS. (It is there for MySQL).
However the link is a year old. Has it changed since then?  Can we use IAM 
authentication for PG.

Thanks


Re: Semantics around INSERT INTO with SELECT and ORDER BY.

2018-06-12 Thread Adrian Klaver

On 06/12/2018 01:39 AM, Steve Krenzel wrote:
This is relevant for tables that have a column with a SERIAL type, I 
need to guarantee that the relative ordering remains the same as the 
ordering of the selected result set.


More concretely, given:

     > CREATE TABLE foo (id SERIAL, val TEXT);
     > CREATE TABLE bar (id SERIAL, val TEXT);
     > INSERT INTO foo (val) VALUES ('A'), ('B'), ('C');
     > TABLE foo;
      id | val
     ----+-----
       1 | A
       2 | B
       3 | C
     (3 rows)

Then,

     > INSERT INTO bar (val) SELECT val FROM foo ORDER BY id DESC;
     > TABLE bar;
      id | val
     ----+-----
       1 | C
       2 | B
       3 | A
     (3 rows)

The rows should be inserted in reverse. (Note: I don't care about the 
actual value of the id, only the relative ordering).


Inserting more values should similarly append into the table in order 
(where "append" is used in terms of the serial id).


     > INSERT INTO bar (val) SELECT val FROM foo ORDER BY id DESC;
     > TABLE bar;
      id | val
     ----+-----
       1 | C
       2 | B
       3 | A
       4 | C
       5 | B
       6 | A
     (6 rows)

Or to put it another way, I want to select values from one table ordered 
by complex criteria and insert them into another table. I want to be 
able to retrieve the rows from the target table in the same order they 
were inserted, but I don't care about the specific ordering criteria. I 
only care about the order they were inserted.


That will only work until some other INSERT or UPDATE occurs. Using 
table from your example:


UPDATE bar SET val = 'C1' where id = 1;

TABLE bar;
 id | val
----+-----
  2 | B
  3 | A
  4 | C
  5 | B
  6 | A
  1 | C1
(6 rows)

You can use CLUSTER:

https://www.postgresql.org/docs/10/static/sql-cluster.html

to reestablish order based on an index, though that has the same issue:

"Clustering is a one-time operation: when the table is subsequently 
updated, the changes are not clustered."


As has been pointed out, the order of rows is not guaranteed.
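
A minimal sketch of the CLUSTER approach mentioned above, assuming an index on
id exists (bar as declared upthread has no PK, so the index is created here
purely for illustration):

```sql
-- one-time physical reorder of the table by id; later INSERTs and
-- UPDATEs are NOT kept clustered, as the docs quoted above note
CREATE INDEX IF NOT EXISTS bar_id_idx ON bar (id);
CLUSTER bar USING bar_id_idx;
```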




On Tue, Jun 12, 2018 at 1:24 AM Ravi Krishna wrote:


Why is it even important?  Once you use ORDER BY clause, you are
guaranteed to get the rows in the order.  Why do you need how it was
inserted in the first place.




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Semantics around INSERT INTO with SELECT and ORDER BY.

2018-06-12 Thread David G. Johnston
On Tuesday, June 12, 2018, Steve Krenzel  wrote:

> This is relevant for tables that have a column with a SERIAL type, I need
> to guarantee that the relative ordering remains the same as the ordering of
> the selected result set.
>

The logical insertion order, and thus the sequence values, will be assigned
according to the order by.

David J.


Re: pg_upgrade and wraparound

2018-06-12 Thread Daniel Verite
Andres Freund wrote:

> I'm not entirely clear why pg_restore appears to use a separate
> transaction for each large object, surely exascerbating the problem.

To make sure that per-object locks don't fill up the shared
lock table?
There might be hundreds of thousands of large objects.
If it had to restore N objects per transaction, would it know
how to compute an N that is large enough to be effective
and small enough not to exhaust the shared table?

Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite



Re: Add to watchdog cluster request is rejected by node

2018-06-12 Thread Bo Peng
Hi,

I think it would be better to ask questions on the dedicated Pgpool-II 
mailing list pgpool-gene...@pgpool.net.

> 3240: WARNING:  checking setuid bit of if_up_cmd
> Jun 11 08:57:04 asa-pgpool02 pgpool[3240]: [1-2] 2018-06-11 08:57:04: pid
> 3240: DETAIL:  ifup[/sbin/ip] doesn't have setuid bit

It seems that the parameters "if_up_cmd" and "if_down_cmd" 
are not set correctly.
Please make sure you set the correct network interface name.

For example, if your network interface name is "ens192",
you need set that like:

---
if_up_cmd   = 'ip addr add $_IP_$/24 dev ens192 label ens192:0'
if_down_cmd = 'ip addr del $_IP_$/24 dev ens192'  
arping_cmd  = 'arping -U $_IP_$ -w 1 -I ens192'
---

On Mon, 11 Jun 2018 15:06:35 +0200
Jean Claude  wrote:

> Hi,
> 
> The following error is a bug ?
> 
> Jun 11 08:57:04 asa-pgpool02 pgpool[3240]: [1-1] 2018-06-11 08:57:04: pid
> 3240: WARNING:  checking setuid bit of if_up_cmd
> Jun 11 08:57:04 asa-pgpool02 pgpool[3240]: [1-2] 2018-06-11 08:57:04: pid
> 3240: DETAIL:  ifup[/sbin/ip] doesn't have setuid bit
> Jun 11 08:57:04 asa-pgpool02 pgpool[3240]: [2-1] 2018-06-11 08:57:04: pid
> 3240: WARNING:  checking setuid bit of if_down_cmd
> Jun 11 08:57:04 asa-pgpool02 pgpool[3240]: [2-2] 2018-06-11 08:57:04: pid
> 3240: DETAIL:  ifdown[/sbin/ip] doesn't have setuid bit
> Jun 11 08:57:04 asa-pgpool02 pgpool[3240]: [3-1] 2018-06-11 08:57:04: pid
> 3240: LOG:  waiting for watchdog to initialize
> Jun 11 08:57:04 asa-pgpool02 pgpool[3241]: [3-1] 2018-06-11 08:57:04: pid
> 3241: LOG:  setting the local watchdog node name to "
> asa-pgpool02.adm.cacc.ch:5432 Linux asa-pgpool02"
> Jun 11 08:57:04 asa-pgpool02 pgpool[3241]: [4-1] 2018-06-11 08:57:04: pid
> 3241: LOG:  watchdog cluster is configured with 1 remote nodes
> Jun 11 08:57:04 asa-pgpool02 pgpool[3241]: [5-1] 2018-06-11 08:57:04: pid
> 3241: LOG:  watchdog remote node:0 on asa-pgpool01.adm.cacc.ch:9000
> Jun 11 08:57:04 asa-pgpool02 pgpool[3241]: [6-1] 2018-06-11 08:57:04: pid
> 3241: LOG:  interface monitoring is disabled in watchdog
> Jun 11 08:57:04 asa-pgpool02 pgpool[3241]: [7-1] 2018-06-11 08:57:04: pid
> 3241: LOG:  watchdog node state changed from [DEAD] to [LOADING]
> Jun 11 08:57:04 asa-pgpool02 pgpool[3241]: [8-1] 2018-06-11 08:57:04: pid
> 3241: LOG:  new outbound connection to asa-pgpool01.adm.cacc.ch:9000
> Jun 11 08:57:04 asa-pgpool02 pgpool[3241]: [9-1] 2018-06-11 08:57:04: pid
> 3241: *FATAL:  Add to watchdog cluster request is rejected by node
> "asa-pgpool01.adm.cacc.ch:9000 "*
> 
> pgpool-II version :
> pgpool-II version 3.7.3 (amefuriboshi)
> 
> Thanks.


-- 
Bo Peng 
SRA OSS, Inc. Japan




Re: Semantics around INSERT INTO with SELECT and ORDER BY.

2018-06-12 Thread Ravi Krishna
> Or to put it another way, I want to select values from one table ordered by
> complex criteria and insert them into another table. I want to be able to
> retrieve the rows from the target table in the same order they were inserted,
> but I don't care about the specific ordering criteria. I only care about
> the order they were inserted.

As I understand it, your business requirement is to retrieve the rows from
the target table in the same order they were inserted.  The one and only way
to achieve that is to SELECT from the target with the same ORDER BY clause
that was used to insert into it. In your case, the insert is
 INSERT INTO bar (val) SELECT val FROM foo ORDER BY id DESC;
so the SELECT should also be ORDER BY id DESC.

Don't worry about how rows are inserted and stored internally.
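
In other words, the only reliable way to read the rows back "in insertion
order" is to order by the serial column explicitly — a minimal sketch using
the tables from upthread:

```sql
-- insertion order is reproduced by ordering on the serial id,
-- not by relying on the table's physical storage order
SELECT id, val FROM bar ORDER BY id;
```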


Re: Bad performance with cascaded deletes

2018-06-12 Thread Laurenz Albe
Haug Bürger wrote:
> Delete on zpg_data.session  (cost=190.51..4491.20 rows=500 width=46)
> ...
> Planning time: 0.222 ms
>  Trigger RI_ConstraintTrigger_a_16481 for constraint
> availability_cache_ac_session_id_fkey: time=350.116 calls=500
> 
> For me this reads like the delete takes 0.2ms and the cascaded delete
> takes 350ms.

Could you share the complete plan?

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: Semantics around INSERT INTO with SELECT and ORDER BY.

2018-06-12 Thread Steve Krenzel
This is relevant for tables that have a column with a SERIAL type, I need
to guarantee that the relative ordering remains the same as the ordering of
the selected result set.

More concretely, given:

> CREATE TABLE foo (id SERIAL, val TEXT);
> CREATE TABLE bar (id SERIAL, val TEXT);
> INSERT INTO foo (val) VALUES ('A'), ('B'), ('C');
> TABLE foo;
 id | val
----+-----
  1 | A
  2 | B
  3 | C
(3 rows)

Then,

> INSERT INTO bar (val) SELECT val FROM foo ORDER BY id DESC;
> TABLE bar;
 id | val
----+-----
  1 | C
  2 | B
  3 | A
(3 rows)

The rows should be inserted in reverse. (Note: I don't care about the
actual value of the id, only the relative ordering).

Inserting more values should similarly append into the table in order
(where "append" is used in terms of the serial id).

> INSERT INTO bar (val) SELECT val FROM foo ORDER BY id DESC;
> TABLE bar;
 id | val
----+-----
  1 | C
  2 | B
  3 | A
  4 | C
  5 | B
  6 | A
(6 rows)

Or to put it another way, I want to select values from one table ordered by
complex criteria and insert them into another table. I want to be able to
retrieve the rows from the target table in the same order they were
inserted, but I don't care about the specific ordering criteria. I only
care about the order they were inserted.

On Tue, Jun 12, 2018 at 1:24 AM Ravi Krishna 
wrote:

> Why is it even important?  Once you use ORDER BY clause, you are
> guaranteed to get the rows in the order.  Why do you need how it was
> inserted in the first place.
>


Re: Semantics around INSERT INTO with SELECT and ORDER BY.

2018-06-12 Thread Ravi Krishna
Why is it even important?  Once you use ORDER BY clause, you are guaranteed
to get the rows in the order.  Why do you need how it was inserted in the
first place.


Semantics around INSERT INTO with SELECT and ORDER BY.

2018-06-12 Thread Steve Krenzel
If I insert using the results of a select statement, are the inserts
guaranteed to happen in the order of the rows returned from the select?

That is, are these two equivalent:

INSERT INTO <target> SELECT <col> FROM <source> ORDER BY <col> DESC;

And:

FOR row IN SELECT <col> FROM <source> ORDER BY <col> DESC LOOP
INSERT INTO <target> VALUES (row.<col>);
END LOOP;

I read through the SQL spec on insertion but they don't address insertion
ordering (perhaps, purposefully).

Any clarification here would be super helpful.

Thank you!
Steve


Does pgAgent support chinese, japanese characters?

2018-06-12 Thread a
Hi, I'm using pgAgent to set up daily or monthly tasks. 


I have tables whose names are in Chinese or Japanese. They work fine in a normal 
SQL script, but the names are reported as non-recognizable in pgAgent SQL jobs.


I'm using Windows Server 2008, pgsql 10, and pgAgent 3.4. Is there a way of 
supporting different languages?


Thanks

Bad performance with cascaded deletes

2018-06-12 Thread Haug Bürger
I have an issue with delete performance I can't explain.

Delete on zpg_data.session  (cost=190.51..4491.20 rows=500 width=46)
...
Planning time: 0.222 ms
 Trigger RI_ConstraintTrigger_a_16481 for constraint
availability_cache_ac_session_id_fkey: time=350.116 calls=500

For me this reads like the delete takes 0.2ms and the cascaded delete
takes 350ms. There is a primary key (index) on the availability_cache
table, and deleting manually by id is pretty fast.
For me it looks like the trigger doesn't use the primary key/index.

Any ideas why the trigger is slow or how to get a plan for the trigger?
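
One way to surface the plans of the statements the RI trigger runs is
auto_explain with nested-statement logging — a sketch assuming superuser
access; the plans appear in the server log, and the session id below is a
placeholder:

```sql
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;       -- log every statement's plan
SET auto_explain.log_analyze = on;           -- include actual timings/rows
SET auto_explain.log_nested_statements = on; -- include queries run by triggers
DELETE FROM zpg_data.session
 WHERE id = '00000000-0000-0000-0000-000000000000'::uuid;  -- placeholder id
```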

Thanks for help
Haug


Table "zpg_data.session"
   Column   |           Type           | Nullable | Default | Storage
------------+--------------------------+----------+---------+---------
 id         | uuid                     | not null |         | plain
 ... rows deleted ...
Indexes:
"pk_session" PRIMARY KEY, btree (id)
Referenced by:
TABLE "zpg_data.availability_cache" CONSTRAINT
"availability_cache_ac_session_id_fkey" FOREIGN KEY (ac_session_id)
REFERENCES zpg_data.session(id) ON DELETE CASCADE


Table "zpg_data.availability_cache"
    Column     |           Type           | Nullable |   Default   | Storage
---------------+--------------------------+----------+-------------+----------
 ac_session_id | uuid                     | not null |             | plain
 ac_created    | timestamp with time zone | not null |             | plain
 ac_content    | jsonb                    | not null | '{}'::jsonb | extended
Indexes:
"pk_availability_cache" PRIMARY KEY, btree (ac_session_id)
Foreign-key constraints:
"availability_cache_ac_session_id_fkey" FOREIGN KEY (ac_session_id)
REFERENCES zpg_data.session(id) ON DELETE CASCADE