Re: [GENERAL] plpgsql function with offset - Postgres 9.1

2017-06-15 Thread Peter J. Holzer
On 2017-06-16 10:19:45 +1200, Patrick B wrote:
> 2017-05-29 19:27 GMT+12:00 Albe Laurenz :
> Patrick B wrote:
> > I am running a background task on my DB, which will copy data from 
> tableA
> to tableB. For
> > that, I'm writing a PL/PGSQL function which basically needs to do the
> following:
> >
> >
> > 1.    Select the data from tableA
> > 2.    The limit will be specified when calling the function
> > 3.    Insert the data selected in Step 1 into the new table
[...]
> >               FOR row IN EXECUTE '
> >                           SELECT
> >                                   id,
> >                                   path,
> >                                   name,
> >                                   name_last,
> >                                   created_at
> >                           FROM
> >                                   tablea
> >                           WHERE
> >                                   ready = true
> >                           ORDER BY 1 LIMIT ' || rows || ' OFFSET ' ||
> rows || ''
> 
> '... LIMIT ' || p_limit || ' OFFSET ' || p_offset
> 
> >               LOOP
> 
> num_rows := num_rows + 1;
>
> >               INSERT INTO tableB (id,path,name,name_last,created_at)
> >               VALUES (row.id,row.path,row.name,row.
> name_last,row.created_at);
> >
> >               END LOOP;
[...]
> 
> There are two problems with this approach:
> 
> 1. It will do the wrong thing if rows are added or deleted in "tablea"
> while
>    you process it.
> 
> 
> 
> There will actually be records being inserted into tablea while the
> migration is being processed. Any ideas here?

Is id monotonically increasing? You might be able to use that, as Albe
suggests:

> The solution is to avoid OFFSET and to use "keyset pagination":
> http://use-the-index-luke.com/no-offset

But it works only if rows cannot become ready after their id range has
already been processed. Otherwise you will miss them.
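
For example, a minimal keyset-style batch could look like this (a sketch
only: it assumes id increases monotonically and that the caller remembers
the last id it processed; p_last_id and p_limit are hypothetical parameters,
the column list is the one from the quoted query above):

    SELECT id, path, name, name_last, created_at
    FROM tablea
    WHERE ready = true
      AND id > p_last_id      -- resume just after the last migrated id
    ORDER BY id
    LIMIT p_limit;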

> I can add another column in tablea, for example row_migrated boolean --> if
> that helps

Yes that's probably the best way. Instead of using an additional column
you could also make ready tristate: New -> ready_for_migration -> migrated.
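
A minimal sketch of the flag-based batch (assuming a new row_migrated
boolean column, not null and defaulting to false; the column name is
hypothetical, and the writable CTEs it uses arrived in 9.1):

    WITH batch AS (
        SELECT id, path, name, name_last, created_at
        FROM tablea
        WHERE ready = true AND NOT row_migrated
        ORDER BY id
        LIMIT 2000
        FOR UPDATE
    ), ins AS (
        INSERT INTO tableB (id, path, name, name_last, created_at)
        SELECT id, path, name, name_last, created_at FROM batch
    )
    UPDATE tablea
    SET row_migrated = true
    WHERE id IN (SELECT id FROM batch);

Rows that only become ready later are picked up by a subsequent run, since
they still have row_migrated = false.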

> 2. Queries with high OFFSET values have bad performance.
> 
> 
> No problem. The plan is to process 2k rows at a time, which is not much.

Are rows deleted from tablea after they are migrated? Otherwise you will
have a problem:

select ... limit 2000 offset 1234000

will have to retrieve 1236000 rows and then discard 1234000 of them.

hp

-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 




Re: [GENERAL] Connection options

2017-06-15 Thread David G. Johnston
On Thu, Jun 15, 2017 at 3:16 PM, Tom Lane  wrote:

> Igor Korot  writes:
> > On Thu, Jun 15, 2017 at 4:46 PM, David G. Johnston
> >  wrote:
> >> The options you pass from the client via the "options" attribute are
> >> interpreted by *the server* as command-line options.  They are not
> options
> >> that control libpq itself.
>
> > Can you give an example or try to explain it?
> > What do you mean by "interpreted by the server as command-line options"?
>
> See the switches described in
>
> https://www.postgresql.org/docs/current/static/app-postgres.html
>
> The point of the libpq "options" argument is that you can pass server
> settings using that same switch syntax, for settings you'd be allowed to
> change at runtime.  This is obsolete legacy stuff; all of the available
> options can be set with SET commands, and that's the preferred way to do
> it now.
>

​Centralization of client-side logic in self-contained scripts is a thing I
try to do.  The recent changes to psql -c/-f in 9.6 now seem to provide a
viable alternative to passing the desired startup search_path via options.
Prior to 9.6, using a custom PSQLRC and restricting myself to -f and stdin
would get me close; but -c ignoring the PSQLRC sourcing (without the ability
to override that) made it imperfect.

I didn't learn that pg_bouncer shared this belief (out of necessity) until
later though.  And while I'm inclined to agree, I'd suggest we actually make
such a declaration formal so that people reading the docs and considering
adding such an argument to their custom psql aliases can decide to use
something different instead.  I'm not sure I'd emit a deprecation warning
from psql though - if so, adding a switch to silence it would be courteous.

David J.


Re: [GENERAL] plpgsql function with offset - Postgres 9.1

2017-06-15 Thread David G. Johnston
On Thu, Jun 15, 2017 at 3:49 PM, Patrick B  wrote:

> 2017-06-16 10:35 GMT+12:00 David G. Johnston :
>
>> On Thu, Jun 15, 2017 at 3:19 PM, Patrick B 
>> wrote:
>>
>>> 2017-05-29 19:27 GMT+12:00 Albe Laurenz :
>>>
 Patrick B wrote:
 > I am running a background task on my DB, which will copy data from
 tableA to tableB. For
 > that, I'm writing a PL/PGSQL function which basically needs to do the
 following:
 >
>>>
>>>
>> https://www.postgresql.org/message-id/CY1PR18MB0490632A9A73E64973F66D6BAFCE0%40CY1PR18MB0490.namprd18.prod.outlook.com
>>
>> My problem is that when I use LIMIT to select the data I get none. I assume
> that is because the first 3000 rows (I'm using ORDER BY 1) are not candidates
> for the migration.
>

Your where clause should only allow candidates to be selected.  If it does
then your theory is wrong.

David J.​


Re: [GENERAL] plpgsql function with offset - Postgres 9.1

2017-06-15 Thread Patrick B
2017-06-16 10:35 GMT+12:00 David G. Johnston :

> On Thu, Jun 15, 2017 at 3:19 PM, Patrick B 
> wrote:
>
>> 2017-05-29 19:27 GMT+12:00 Albe Laurenz :
>>
>>> Patrick B wrote:
>>> > I am running a background task on my DB, which will copy data from
>>> tableA to tableB. For
>>> > that, I'm writing a PL/PGSQL function which basically needs to do the
>>> following:
>>> >
>>
>>
> https://www.postgresql.org/message-id/CY1PR18MB0490632A9A73E64973F66D6BAFCE0%40CY1PR18MB0490.namprd18.prod.outlook.com
>
> David J​
>


My problem is that when I use LIMIT to select the data I get none. I assume
that is because the first 3000 rows (I'm using ORDER BY 1) are not candidates
for the migration.


Re: [GENERAL] plpgsql function with offset - Postgres 9.1

2017-06-15 Thread David G. Johnston
On Thu, Jun 15, 2017 at 3:19 PM, Patrick B  wrote:

> 2017-05-29 19:27 GMT+12:00 Albe Laurenz :
>
>> Patrick B wrote:
>> > I am running a background task on my DB, which will copy data from
>> tableA to tableB. For
>> > that, I'm writing a PL/PGSQL function which basically needs to do the
>> following:
>> >
>
>
​
https://www.postgresql.org/message-id/CY1PR18MB0490632A9A73E64973F66D6BAFCE0%40CY1PR18MB0490.namprd18.prod.outlook.com

David J​


Re: [GENERAL] plpgsql function with offset - Postgres 9.1

2017-06-15 Thread Patrick B
2017-05-29 19:27 GMT+12:00 Albe Laurenz :

> Patrick B wrote:
> > I am running a background task on my DB, which will copy data from
> tableA to tableB. For
> > that, I'm writing a PL/PGSQL function which basically needs to do the
> following:
> >
> >
> > 1. Select the data from tableA
> > 2. The limit will be specified when calling the function
> > 3. Insert the data selected in Step 1 into the new table
> >
> > Question:
> >
> > * When I stop it and start it again, how can the query "know" that
> it has already
> > processed some rows so it won't do it twice on the same rows? If it
> stopped on row number
> > 100, I need it to continue on row number 101, for example.
> >
> > * How can I ask the function to return the number of processed rows?
> >
> >
> > I can add a column on TableB if needed, but not on tableA.
> >
> > This is what I've done so far:
>
> >   CREATE or REPLACE FUNCTION data_copy(rows integer)
> >   RETURNS SETOF bigint AS $$
>
> CREATE or REPLACE FUNCTION data_copy(p_limit integer, p_offset integer)
> RETURNS integer;
>
> >   declare
> >   row record;
> >   offset_num integer;
>
> num_rows integer := 0;
>
> >   BEGIN
> >   FOR row IN EXECUTE '
> >   SELECT
> >   id,
> >   path,
> >   name,
> >   name_last,
> >   created_at
> >   FROM
> >   tablea
> >   WHERE
> >   ready = true
> >   ORDER BY 1 LIMIT ' || rows || ' OFFSET ' ||
> rows || ''
>
> '... LIMIT ' || p_limit || ' OFFSET ' || p_offset
>
> >   LOOP
>
> num_rows := num_rows + 1;
>
> >   INSERT INTO tableB (id,path,name,name_last,created_at)
> >   VALUES (row.id,row.path,row.name,row.
> name_last,row.created_at);
> >
> >   END LOOP;
>
> RETURN num_rows;
>
> >   END
> >   $$ language 'plpgsql';
>
> There are two problems with this approach:
>
> 1. It will do the wrong thing if rows are added or deleted in "tablea"
> while
>you process it.
>


There will actually be records being inserted in tablea while the migration
is being processed. Any ideas here?

I can add another column in tablea, for example row_migrated boolean -->
if that helps
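
For example, data_copy could be reshaped along those lines (a sketch only,
combining Albe's p_limit/num_rows corrections with a hypothetical
row_migrated flag; untested):

    CREATE OR REPLACE FUNCTION data_copy(p_limit integer)
    RETURNS integer AS $$
    DECLARE
        rec      record;
        num_rows integer := 0;
    BEGIN
        FOR rec IN
            SELECT id, path, name, name_last, created_at
            FROM tablea
            WHERE ready = true AND NOT row_migrated
            ORDER BY 1
            LIMIT p_limit
        LOOP
            INSERT INTO tableB (id, path, name, name_last, created_at)
            VALUES (rec.id, rec.path, rec.name, rec.name_last, rec.created_at);
            UPDATE tablea SET row_migrated = true WHERE id = rec.id;
            num_rows := num_rows + 1;
        END LOOP;
        RETURN num_rows;
    END;
    $$ LANGUAGE plpgsql;

Because already-migrated rows are filtered out by the flag, no OFFSET is
needed and restarting the function simply continues where it left off.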


>
> 2. Queries with high OFFSET values have bad performance.
>

No problem. The plan is to process 2k rows at a time, which is not much.



>
> The solution is to avoid OFFSET and to use "keyset pagination":
> http://use-the-index-luke.com/no-offset
>
>
>

Thanks
Patrick


Re: [GENERAL] Connection options

2017-06-15 Thread Tom Lane
Igor Korot  writes:
> On Thu, Jun 15, 2017 at 4:46 PM, David G. Johnston
>  wrote:
>> The options you pass from the client via the "options" attribute are
>> interpreted by *the server* as command-line options.  They are not options
>> that control libpq itself.

> Can you give an example or try to explain it?
> What do you mean by "interpreted by the server as command-line options"?

See the switches described in

https://www.postgresql.org/docs/current/static/app-postgres.html

The point of the libpq "options" argument is that you can pass server
settings using that same switch syntax, for settings you'd be allowed to
change at runtime.  This is obsolete legacy stuff; all of the available
options can be set with SET commands, and that's the preferred way to do
it now.
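
For example (an illustration only; statement_timeout is just a convenient
run-time-changeable setting):

    -- at connect time, through the libpq "options" parameter:
    --   options='-c statement_timeout=5s'
    -- or, the preferred way, once the session is up:
    SET statement_timeout = '5s';
    SHOW statement_timeout;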

regards, tom lane




Re: [GENERAL] Connection options

2017-06-15 Thread David G. Johnston
On Thu, Jun 15, 2017 at 2:41 PM, Igor Korot  wrote:

>
> On Thu, Jun 15, 2017 at 4:46 PM, David G. Johnston
>  wrote:
> > On Thu, Jun 15, 2017 at 1:23 PM, Igor Korot  wrote:
> >>
> >> And could you clarify on the first part of this?
> >> From the quote I posted it sounds like this is available only in
> >> command-line
> >> tools. And if someone uses it inside a program it will be ignored.
> >
> >
> > The options you pass from the client via the "options" attribute are
> > interpreted by *the server* as command-line options.  They are not
> options
> > that control libpq itself.
>
> Can you give an example or try to explain it?
> What do you mean by "interpreted by the server as command-line options"?
>
> Does this mean I can just ignore this parameter inside my C{++} program?
> Or can I set some options and pass them to the server through this parameter?
>

​Reading this should help:

​https://www.postgresql.org/docs/current/static/config-setting.html

Note especially 19.1.4 - Parameter Interaction via the Shell

For example:

postgres -c 'search_path=app,public'

is roughly equivalent to:

psql "options='-c search_path=app,public'"

which can be written directly in libpq roughly as (not a C programmer here,
so treat this as a sketch rather than polished code):

const char *keywords[] = {"options", NULL};   /* null-terminated key array */
const char *values[]   = {"-c search_path=app,public", NULL};
PGconn *conn = PQconnectdbParams(keywords, values, /* expand_dbname */ 0);

Equivalent in that the session that ends up being started will already have
a search_path value of "app,public" (though it does depend on where else
search_path defaults are set up).

Note that you can get much the same behavior by doing:

$ psql
> SET search_path TO app, public;

In fact, if you are routing your connections through pg_bouncer, making use
of "options" (-c search_path=app,public) won't work, and you'll want to
either use "SET search_path TO app, public" or possibly one of the "ALTER
DATABASE / ALTER USER" commands to set up your connection defaults.
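
For instance, a sketch of the server-side route (the role and database names
are placeholders):

    ALTER ROLE myuser IN DATABASE mydb SET search_path = app, public;
    -- or, for every role connecting to that database:
    ALTER DATABASE mydb SET search_path = app, public;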

In short, you can choose to forgo using session startup options which are
passed to the server and treated by it as command-line (shell) options.  I
suspect most users do.

32.1.2 lists all of the words that can appear in the "keyarray" above -
these configure the client so it can talk to the server.  If the word
chosen is "options", however, the valid values come from what the server
recognizes, and they instruct the server to start the newly connected
session with the GUC/configuration variables set to the specified values.

David J.


Re: [GENERAL] Connection options

2017-06-15 Thread Adrian Klaver

On 06/15/2017 02:41 PM, Igor Korot wrote:

Hi, David,

On Thu, Jun 15, 2017 at 4:46 PM, David G. Johnston
 wrote:

On Thu, Jun 15, 2017 at 1:23 PM, Igor Korot  wrote:


And could you clarify on the first part of this?
 From the quote I posted it sounds like this is available only in
command-line
tools. And if someone uses it inside a program it will be ignored.



The options you pass from the client via the "options" attribute are
interpreted by *the server* as command-line options.  They are not options
that control libpq itself.


Can you give an example or try to explain it?
What do you mean by "interpreted by the server as command-line options"?

Does this mean I can just ignore this parameter inside my C{++} program?
Or can I set some options and pass them to the server through this parameter?



I can kinda see the confusion here but I'm not sure how to write it more
clearly without being excessively verbose.  I haven't seen this particular
confusion before so I'd say the wording is reasonable and the mailing lists
are doing their job of providing a forum for providing clarity.


Well, for someone who has just started with PostgreSQL and the C interface it
is confusing.


Maybe this will help:

https://www.postgresql.org/docs/9.6/static/libpq-example.html



Thank you.



David J.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Connection options

2017-06-15 Thread Igor Korot
Hi, David,

On Thu, Jun 15, 2017 at 4:46 PM, David G. Johnston
 wrote:
> On Thu, Jun 15, 2017 at 1:23 PM, Igor Korot  wrote:
>>
>> And could you clarify on the first part of this?
>> From the quote I posted it sounds like this is available only in
>> command-line
>> tools. And if someone uses it inside a program it will be ignored.
>
>
> The options you pass from the client via the "options" attribute are
> interpreted by *the server* as command-line options.  They are not options
> that control libpq itself.

Can you give an example or try to explain it?
What do you mean by "interpreted by the server as command-line options"?

Does this mean I can just ignore this parameter inside my C{++} program?
Or can I set some options and pass them to the server through this parameter?

>
> I can kinda see the confusion here but I'm not sure how to write it more
> clearly without being excessively verbose.  I haven't seen this particular
> confusion before so I'd say the wording is reasonable and the mailing lists
> are doing their job of providing a forum for providing clarity.

Well, for someone who has just started with PostgreSQL and the C interface it
is confusing.

Thank you.

>
> David J.
>




Re: [GENERAL] Connection options

2017-06-15 Thread David G. Johnston
On Thu, Jun 15, 2017 at 1:23 PM, Igor Korot  wrote:

> And could you clarify on the first part of this?
> From the quote I posted it sounds like this is available only in
> command-line
> tools. And if someone uses it inside a program it will be ignored.
>

​The options you pass from the client via the "options" attribute are
interpreted by *the server* as command-line options.  They are not options
that control libpq itself.

I can kinda see the confusion here but I'm not sure how to write it more
clearly without being excessively verbose.  I haven't seen this particular
confusion before so I'd say the wording is reasonable and the mailing lists
are doing their job of providing a forum for providing clarity.

David J.


Re: [GENERAL] Connection options

2017-06-15 Thread Adrian Klaver

On 06/15/2017 01:23 PM, Igor Korot wrote:

Hi, Adrian,

On Thu, Jun 15, 2017 at 4:13 PM, Adrian Klaver
 wrote:

On 06/15/2017 12:02 PM, Igor Korot wrote:


Hi, again,

On Thu, Jun 15, 2017 at 2:59 PM, Igor Korot  wrote:


   Hi, ALL,
I'm looking at the documentation at
www.postgresql.org/docs/current/static/libpq-connect.html
section 32.1.2.

The part for "option" reads:

[quote]
Specifies command-line options to send to the server...
[/quote]

Does this mean that if I'm writing a C/C++ program, I shouldn't care
about that?
Or is there a legitimate set of options I can pass to
PQconnectDB()?



In addition:

Where can I get a list of client encodings I can pass on the connection
to the server?



https://www.postgresql.org/docs/9.6/static/multibyte.html


Is it possible to update the libpq-connect page with this link?


Probably something you could bring up on:

https://www.postgresql.org/list/pgsql-docs/



And could you clarify on the first part of this?

From the quote I posted it sounds like this is available only in command-line
tools. And if someone uses it inside a program it will be ignored.


I use tools that use libpq, but I do not use it directly so I do not 
have a definitive answer. I do see this:


"Several libpq functions parse a user-specified string to obtain 
connection parameters. There are two accepted formats for these strings: 
plain keyword = value strings and RFC 3986 URIs."


at the head of the section(32.1.1. Connection Strings) from which you 
quoted. What those functions are I have no idea.





Thank you.







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Connection options

2017-06-15 Thread Igor Korot
Hi, Adrian,

On Thu, Jun 15, 2017 at 4:13 PM, Adrian Klaver
 wrote:
> On 06/15/2017 12:02 PM, Igor Korot wrote:
>>
>> Hi, again,
>>
>> On Thu, Jun 15, 2017 at 2:59 PM, Igor Korot  wrote:
>>>
>>>   Hi, ALL,
>>> I'm looking at the documentation at
>>> www.postgresql.org/docs/current/static/libpq-connect.html
>>> section 32.1.2.
>>>
>>> The part for "option" reads:
>>>
>>> [quote]
>>> Specifies command-line options to send to the server...
>>> [/quote]
>>>
>>> Does this mean that if I'm writing a C/C++ program, I shouldn't care
>>> about that?
>>> Or is there a legitimate set of options I can pass to
>>> PQconnectDB()?
>>
>>
>> In addition:
>>
>> Where can I get a list of client encodings I can pass on the connection
>> to the server?
>
>
> https://www.postgresql.org/docs/9.6/static/multibyte.html

Is it possible to update the libpq-connect page with this link?

And could you clarify on the first part of this?
From the quote I posted it sounds like this is available only in command-line
tools. And if someone uses it inside a program it will be ignored.

Thank you.

>
>>
>> Thank you.
>>
>>>
>>> Thank you.
>>
>>
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com




Re: [GENERAL] Freeing memory in native extension in case of error

2017-06-15 Thread Gabriel Furstenheim Milerud
Great, thanks

On 15 June 2017 at 22:07, Tom Lane  wrote:
> Gabriel Furstenheim Milerud  writes:
>> I've written an extension in C to sum jsonb. For that I use the
>> jsonbiterator defined in
>> https://github.com/postgres/postgres/blob/master/src/backend/utils/adt/jsonb_util.c#L743
>
>> In the comments of JsonbIteratorNext it states: 'Callers in such a
>> scenario, that are particularly sensitive to leaking memory in a
>> long-lived context may walk the ancestral tree from the final iterator
>> we left them with to its oldest ancestor, pfree()ing as they go'
>
>> In the extension that I've written, I sometimes stop in the middle of
>> an iteration, but only to raise an error (if the json contains a not
>> numeric value), with 'ereport(ERROR, ...'
>
>> Do I have to free the memory in that case?
>
> I wouldn't bother, assuming that it's all in the short-lived query
> context (the one that's current when your function is called).
> Error cleanup will flush that context automatically.
>
> Really, for memory in that context, it's unlikely to matter even
> if you are sloppy and leak memory on normal non-error return.
> That context will get reset after each row processed by the
> query anyway.  It's only if you're trying to cache data across
> rows, or in even longer-lived ways, that you really need to be
> careful.
>
> regards, tom lane




Re: [GENERAL] Connection options

2017-06-15 Thread Adrian Klaver

On 06/15/2017 12:02 PM, Igor Korot wrote:

Hi, again,

On Thu, Jun 15, 2017 at 2:59 PM, Igor Korot  wrote:

  Hi, ALL,
I'm looking at the documentation at
www.postgresql.org/docs/current/static/libpq-connect.html
section 32.1.2.

The part for "option" reads:

[quote]
Specifies command-line options to send to the server...
[/quote]

Does this mean that if I'm writing a C/C++ program, I shouldn't care about that?
Or is there a legitimate set of options I can pass to PQconnectDB()?


In addition:

Where can I get a list of client encodings I can pass on the connection
to the server?


https://www.postgresql.org/docs/9.6/static/multibyte.html
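
The names in that chapter's character-set table are what client_encoding
accepts; for example, to set and check it from an open session (illustration
only):

    SET client_encoding TO 'UTF8';
    SHOW client_encoding;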



Thank you.






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Freeing memory in native extension in case of error

2017-06-15 Thread Tom Lane
Gabriel Furstenheim Milerud  writes:
> I've written an extension in C to sum jsonb. For that I use the
> jsonbiterator defined in
> https://github.com/postgres/postgres/blob/master/src/backend/utils/adt/jsonb_util.c#L743

> In the comments of JsonbIteratorNext it states: 'Callers in such a
> scenario, that are particularly sensitive to leaking memory in a
> long-lived context may walk the ancestral tree from the final iterator
> we left them with to its oldest ancestor, pfree()ing as they go'

> In the extension that I've written, I sometimes stop in the middle of
> an iteration, but only to raise an error (if the json contains a not
> numeric value), with 'ereport(ERROR, ...'

> Do I have to free the memory in that case?

I wouldn't bother, assuming that it's all in the short-lived query
context (the one that's current when your function is called).
Error cleanup will flush that context automatically.

Really, for memory in that context, it's unlikely to matter even
if you are sloppy and leak memory on normal non-error return.
That context will get reset after each row processed by the
query anyway.  It's only if you're trying to cache data across
rows, or in even longer-lived ways, that you really need to be
careful.

regards, tom lane




[GENERAL] Freeing memory in native extension in case of error

2017-06-15 Thread Gabriel Furstenheim Milerud
Hi,
I've written an extension in C to sum jsonb. For that I use the
jsonbiterator defined in
https://github.com/postgres/postgres/blob/master/src/backend/utils/adt/jsonb_util.c#L743

In the comments of JsonbIteratorNext it states: 'Callers in such a
scenario, that are particularly sensitive to leaking memory in a
long-lived context may walk the ancestral tree from the final iterator
we left them with to its oldest ancestor, pfree()ing as they go'

In the extension that I've written, I sometimes stop in the middle of
an iteration, but only to raise an error (if the json contains a not
numeric value), with 'ereport(ERROR, ...'

Do I have to free the memory in that case? According to this guide to
write extensions
(http://big-elephants.com/2015-10/writing-postgres-extensions-part-i/)
it wouldn't be necessary because 'Memory allocated by palloc will be
freed automatically at the end of each transaction'. So if I
understand it correctly I wouldn't have to.

Thanks




Re: [GENERAL] Connection options

2017-06-15 Thread Igor Korot
Hi, again,

On Thu, Jun 15, 2017 at 2:59 PM, Igor Korot  wrote:
>  Hi, ALL,
> I'm looking at the documentation at
> www.postgresql.org/docs/current/static/libpq-connect.html
> section 32.1.2.
>
> The part for "option" reads:
>
> [quote]
> Specifies command-line options to send to the server...
> [/quote]
>
> Does this mean that if I'm writing a C/C++ program, I shouldn't care about 
> that?
> Or is there a legitimate set of options I can pass to PQconnectDB()?

In addition:

Where can I get a list of client encodings I can pass on the connection
to the server?

Thank you.

>
> Thank you.




[GENERAL] Connection options

2017-06-15 Thread Igor Korot
 Hi, ALL,
I'm looking at the documentation at
www.postgresql.org/docs/current/static/libpq-connect.html
section 32.1.2.

The part for "option" reads:

[quote]
Specifies command-line options to send to the server...
[/quote]

Does this mean that if I'm writing a C/C++ program, I shouldn't care about that?
Or is there a legitimate set of options I can pass to PQconnectDB()?

Thank you.




Re: [GENERAL] Trigger problems/questions

2017-06-15 Thread Jim Fulton
On Thu, Jun 15, 2017 at 1:56 PM, Tom Lane  wrote:

> Jim Fulton  writes:
> > I have an object database that's mirrored to a table with data in a JSONB
> > column.  Data are organized into "communities".  Community ids aren't
> > stored directly in content but can be found by recursively following
> > __parent__ properties. I want to be able to index content records on
> their
> > community ids.
>
> > (I originally tried to index functions that got ids, but apparently lying
> > about immutability is a bad idea and I suffered the consequences. :-])
>
> > I tried creating a trigger to populate a community_zoid property with a
> > community id when a record is inserted or updated.  The trigger calls a
> > recursive functions to get the community id.
> > ...
> > This scheme succeeds most of the time, but occasionally, it fails.
>
> Since your original idea failed, I suppose that the parent relationships
> are changeable?


Good question.

A few kinds of objects can, rarely, move in the hierarchy, and they never
move between communities, so their community id never changes.

I don't know what went wrong with my indexing attempt.  I could build the
index, then add an object to the tree and it wouldn't be indexed.  This was
in a staging database where there were no other changes.


> What mechanism have you got in place to propagate a
> relationship change back down to the child records?
>

This is a non-issue, at least WRT community ids.  If I were, for example,
to index paths, it would be an issue for some objects, but I'm not at that
point yet.


> Also, this looks to have a race condition: if you search for a record's
> community id at about the same time that someone else is changing the
> parent linkage, you may get the old answer, but by the time you commit the
> record update that answer may be obsolete.  This is a problem because even
> if you had another trigger that was trying (in the someone else's session)
> to propagate new community ids back to affected records, it wouldn't think
> that the record you're working on needs a change, because it would also
> see the old version of that record.
>
> Solutions to the race problem usually involve either SELECT FOR UPDATE
> to lock rows involved in identifying the target record's community ID,
> or use of SERIALIZABLE to cause the whole transaction to fail if its
> results might be inconsistent.  Either one will add some complexity
> to your application code.
>

There's a global lock around all of the updates to the table.  (This isn't
as unreasonable as it sounds :), but anyway, that's outside the scope of
this discussion.)

Even if there was some kind of race, I'd still get a community id set, it
might be wrong, but it would be set.

regards, tom lane
>

Thanks.

Jim

-- 
Jim Fulton
http://jimfulton.info


Re: [GENERAL] Trigger problems/questions

2017-06-15 Thread Tom Lane
Jim Fulton  writes:
> I have an object database that's mirrored to a table with data in a JSONB
> column.  Data are organized into "communities".  Community ids aren't
> stored directly in content but can be found by recursively following
> __parent__ properties. I want to be able to index content records on their
> community ids.

> (I originally tried to index functions that got ids, but apparently lying
> about immutability is a bad idea and I suffered the consequences. :-])

> I tried creating a trigger to populate a community_zoid property with a
> community id when a record is inserted or updated.  The trigger calls a
> recursive functions to get the community id.
> ...
> This scheme succeeds most of the time, but occasionally, it fails.

Since your original idea failed, I suppose that the parent relationships
are changeable?  What mechanism have you got in place to propagate a
relationship change back down to the child records?

Also, this looks to have a race condition: if you search for a record's
community id at about the same time that someone else is changing the
parent linkage, you may get the old answer, but by the time you commit the
record update that answer may be obsolete.  This is a problem because even
if you had another trigger that was trying (in the someone else's session)
to propagate new community ids back to affected records, it wouldn't think
that the record you're working on needs a change, because it would also
see the old version of that record.

Solutions to the race problem usually involve either SELECT FOR UPDATE
to lock rows involved in identifying the target record's community ID,
or use of SERIALIZABLE to cause the whole transaction to fail if its
results might be inconsistent.  Either one will add some complexity
to your application code.
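
A rough sketch of the SERIALIZABLE route (an illustration, not code from
this thread):

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    -- ... the INSERT or UPDATE that fires the trigger runs here ...
    COMMIT;  -- may fail with SQLSTATE 40001 (serialization_failure),
             -- in which case the application retries the whole transaction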

regards, tom lane




Re: [GENERAL] Trigger problems/questions

2017-06-15 Thread Jim Fulton
On Thu, Jun 15, 2017 at 1:44 PM, Adrian Klaver 
wrote:

> On 06/15/2017 10:23 AM, Jim Fulton wrote:
>

...


> I do not pretend to fully understand what the triggers/functions are
> really doing,

but I did notice this:
>
> create or replace function populate_community_zoid_triggerf()
>
> ...
>
> new_zoid := NEW.state ->> 'community_zoid';
> zoid := find_community_zoid(
>             NEW.zoid, NEW.class_name, NEW.state)::text;
> if zoid is null then
>     if new_zoid is not null then
>         NEW.state := NEW.state - 'community_zoid';
>     end if;
> else
>     if new_zoid is null or zoid != new_zoid then
>         NEW.state :=
>             NEW.state || ('{"community_zoid": ' || zoid || '}')::jsonb;
>     end if;
> end if;
>
> ...
>
>
> Not sure what happens if zoid is null and new_zoid is null also?
>

Then no change is needed and nothing happens, which is correct.


> Or if it matters?


Nope.

Thanks for looking.

Jim

-- 
Jim Fulton
http://jimfulton.info


Re: [GENERAL] Trigger problems/questions

2017-06-15 Thread Adrian Klaver

On 06/15/2017 10:23 AM, Jim Fulton wrote:
I have an object database that's mirrored to a table with data in a 
JSONB column.  Data are organized into "communities".  Community ids 
aren't stored directly in content but can be found by recursively 
following __parent__ properties. I want to be able to index content 
records on their community ids.


(I originally tried to index functions that got ids, but apparently 
lying about immutability is a bad idea and I suffered the consequences. :-])


I tried creating a trigger to populate a community_zoid property with a 
community id when a record is inserted or updated.  The trigger calls a 
recursive functions to get the community id.


Trigger: 
https://github.com/karlproject/karl/blob/master/karl/scripts/pgevolve.py#L261 



Trigger procedure: 
https://github.com/karlproject/karl/blob/master/karl/scripts/pgevolve.py#L236 



Function to find a community id: 
https://github.com/karlproject/karl/blob/master/karl/scripts/pgevolve.py#L209 



This scheme succeeds most of the time, but occasionally, it fails.

I can find records where it has failed with a query like:

   select zoid
   from newt
   where find_community_zoid(zoid, class_name, state) is not null
 and not state ? 'community_zoid';

If I update the records where it has failed:

   update newt set class_name=class_name
   where find_community_zoid(zoid, class_name, state) is not null
  and not state ? 'community_zoid';

Then retry the query above, I get 0 rows back. This would seem to 
indicate that the trigger is logically correct.


Questions:

  * Am I doing it wrong?  For example, is there some limitation on
trigger procedures that I'm violating?
  * If a trigger procedure errors, is the transaction aborted? (I would
hope so.)
  * Should I expect triggers to be reliable and rely on them for
database integrity?
  * Any suggestions on how to debug this?


I do not pretend to fully understand what the triggers/functions are
really doing, but I did notice this:


create or replace function populate_community_zoid_triggerf()

...

new_zoid := NEW.state ->> 'community_zoid';
zoid := find_community_zoid(
            NEW.zoid, NEW.class_name, NEW.state)::text;
if zoid is null then
    if new_zoid is not null then
        NEW.state := NEW.state - 'community_zoid';
    end if;
else
    if new_zoid is null or zoid != new_zoid then
        NEW.state :=
            NEW.state || ('{"community_zoid": ' || zoid || '}')::jsonb;
    end if;
end if;

...


Not sure what happens if zoid is null and new_zoid is null also?
Or if it matters?




Jim

--
Jim Fulton
http://jimfulton.info



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] CREATE FOREIGN TABLE difficulties

2017-06-15 Thread Paul Lavoie
The ‘does not actually create’ the table part was the piece I was missing. I
wasn’t sure either way from reading the documentation for the command, but it
makes sense from the perspective of trying to keep the databases independent.

One of the reasons I went down this path was the hope of not needing to
manipulate the table definitions in each of the databases, but rather to make
the definition once and let it propagate to the children as necessary,
courtesy of ‘INHERITS’.  I’ll adjust my plans accordingly.

Thanks for the reply!

- Paul

> On Jun 15, 2017, at 1:27 PM, Adrian Klaver  wrote:
> 
> On 06/15/2017 09:49 AM, Paul Lavoie wrote:
>> Hi,
>> Long time user, new poster…
>> For assorted reasons I’ve decided to shard a database across multiple 
>> instances of postgresql running on the same machine. I’ve set up a lot of 
>> children servers with a ‘fdw’ user to work with the foreign data wrapper and 
>> created the child database along with a schema, and then on the main 
>> database go thru the process of:
>> CREATE SCHEMA myschema;
>> CREATE TABLE mytable (mycol TEXT);
>> CREATE EXTENSION postgres_fdw;
>> CREATE SERVER db001 FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host
>> 'localhost', port '8001', dbname 'db001');
>> CREATE USER MAPPING FOR myuser SERVER db001 OPTIONS (user 'fdw', password
>> 'XXX');
>> CREATE FOREIGN TABLE myschema.mytable () INHERITS (mytable) SERVER db001
>> OPTIONS (schema_name 'myschema', table_name 'mytable');
>> Attempts to SELECT from myschema.mytable then fail with a “relation 
>> “myschema.mytable” does not exist” error, and going into the child database 
>> shows no signs of any tables whatsoever.
> 
> I am assuming you are doing the above on the parent database.
> CREATE FOREIGN TABLE does not actually create the table on the remote(child), 
> it has to exist there already. It creates the table on parent and links it to 
> the table on the remote(child).:
> 
> https://www.postgresql.org/docs/9.6/static/sql-createforeigntable.html
> 
> You will also want to pay attention to:
> 
> "If a schema name is given (for example, CREATE FOREIGN TABLE 
> myschema.mytable ...) then the table is created in the specified schema. 
> Otherwise it is created in the current schema. The name of the foreign table 
> must be distinct from the name of any other foreign table, table, sequence, 
> index, view, or materialized view in the same schema."
> 
> 
> from the above link.
> 
>> This is under Postgresql 9.6.3, with the software being tested on servers 
>> running MacOS, NetBSD/amd64, & Solaris/x86_64 (various combinations). I’ve 
>> tried removing the schema qualifications, redoing the host as properly 
>> remote rather than localhost, removing the port number, all without any 
>> signs of success. Oddly, the inverse of IMPORT FOREIGN SCHEMA appears to 
>> work.
>> I’m particularly curious as to how one would troubleshoot this scenario. I’m 
>> somewhat surprised at the CREATE FOREIGN TABLE returning success when it 
> doesn’t appear to have done all the work - the PostgreSQL instance must be
>> present, but it doesn’t complain if the database, never mind the schema, 
>> doesn’t exist.
>> If this turns out to be a bug, I’ll happily move to the bug mailing list to 
>> discuss further. But under the possibility I’m missing the obvious, I’d 
>> thought I’d try here first.
>> I’m going to go try the 10.0 beta now…
>> Thanks!
>> - Paul
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com





Re: [DOCS] [GENERAL] pg_upgrade --link on Windows

2017-06-15 Thread Bruce Momjian
On Tue, Jun 13, 2017 at 04:07:48PM -0400, Bruce Momjian wrote:
> On Fri, Jun  9, 2017 at 07:55:55AM -0700, Adrian Klaver wrote:
> I apologize for not being smarter on this thread.  When I helped with
> the Windows port, I was told Windows didn't have hard links for use by
> tablespace directories, so I got it into my head that Windows didn't
> have hard links.  Therefore, when I was writing the docs, I called them
> junction points.
> 
> Looking back to Postgres 9.0 where pg_upgrade was added to the tree, I
> see that the code even at that time used hard links on Windows.  I have
> created the attached patch which I will apply to all current Postgres
> versions to fix this error.
> 
> Thanks for the report and the research.  :-)

Patch applied all the way back to 9.3, where the junction point mention
first appeared.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: [GENERAL] CREATE FOREIGN TABLE difficulties

2017-06-15 Thread Adrian Klaver

On 06/15/2017 09:49 AM, Paul Lavoie wrote:

Hi,

Long time user, new poster…

For assorted reasons I’ve decided to shard a database across multiple instances 
of postgresql running on the same machine. I’ve set up a lot of children 
servers with a ‘fdw’ user to work with the foreign data wrapper and created the 
child database along with a schema, and then on the main database go thru the 
process of:

CREATE SCHEMA myschema;
CREATE TABLE mytable (mycol TEXT);

CREATE EXTENSION postgres_fdw;
CREATE SERVER db001 FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host
'localhost', port '8001', dbname 'db001');
CREATE USER MAPPING FOR myuser SERVER db001 OPTIONS (user 'fdw', password
'XXX');
CREATE FOREIGN TABLE myschema.mytable () INHERITS (mytable) SERVER db001
OPTIONS (schema_name 'myschema', table_name 'mytable');

Attempts to SELECT from myschema.mytable then fail with a “relation 
“myschema.mytable” does not exist” error, and going into the child database 
shows no signs of any tables whatsoever.


I am assuming you are doing the above on the parent database.
CREATE FOREIGN TABLE does not actually create the table on the 
remote(child), it has to exist there already. It creates the table on 
parent and links it to the table on the remote(child).:


https://www.postgresql.org/docs/9.6/static/sql-createforeigntable.html

You will also want to pay attention to:

"If a schema name is given (for example, CREATE FOREIGN TABLE 
myschema.mytable ...) then the table is created in the specified schema. 
Otherwise it is created in the current schema. The name of the foreign 
table must be distinct from the name of any other foreign table, table, 
sequence, index, view, or materialized view in the same schema."



from the above link.
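
In other words, the real table has to be created on the child first, and the
foreign table on the parent then merely points at it. A sketch reusing the
names from your example:

    -- on db001 (the child cluster):
    CREATE SCHEMA myschema;
    CREATE TABLE myschema.mytable (mycol TEXT);

    -- then, on the parent, the mapping you already have:
    CREATE FOREIGN TABLE myschema.mytable () INHERITS (mytable) SERVER db001
        OPTIONS (schema_name 'myschema', table_name 'mytable');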



This is under Postgresql 9.6.3, with the software being tested on servers running 
MacOS, NetBSD/amd64, & Solaris/x86_64 (various combinations). I’ve tried 
removing the schema qualifications, redoing the host as properly remote rather than 
localhost, removing the port number, all without any signs of success. Oddly, the 
inverse of IMPORT FOREIGN SCHEMA appears to work.

I’m particularly curious as to how one would troubleshoot this scenario. I’m 
somewhat surprised at the CREATE FOREIGN TABLE returning success when it 
doesn’t appear to have done all the work - the PostgreSQL instance must be
present, but it doesn’t complain if the database, never mind the schema, 
doesn’t exist.

If this turns out to be a bug, I’ll happily move to the bug mailing list to 
discuss further. But under the possibility I’m missing the obvious, I’d thought 
I’d try here first.

I’m going to go try the 10.0 beta now…

Thanks!

- Paul




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Trigger problems/questions

2017-06-15 Thread Jim Fulton
I have an object database that's mirrored to a table with data in a JSONB
column.  Data are organized into "communities".  Community ids aren't
stored directly in content but can be found by recursively following
__parent__ properties. I want to be able to index content records on their
community ids.

(I originally tried to index functions that got ids, but apparently lying
about immutability is a bad idea and I suffered the consequences. :-])

I tried creating a trigger to populate a community_zoid property with a
community id when a record is inserted or updated.  The trigger calls a
recursive functions to get the community id.

Trigger: https://github.com/karlproject/karl/blob/master/karl/scripts/pgevolve.py#L261

Trigger procedure: https://github.com/karlproject/karl/blob/master/karl/scripts/pgevolve.py#L236

Function to find a community id: https://github.com/karlproject/karl/blob/master/karl/scripts/pgevolve.py#L209

This scheme succeeds most of the time, but occasionally, it fails.

I can find records where it has failed with a query like:

  select zoid
  from newt
  where find_community_zoid(zoid, class_name, state) is not null
and not state ? 'community_zoid';

If I update the records where it has failed:

  update newt set class_name=class_name
  where find_community_zoid(zoid, class_name, state) is not null
 and not state ? 'community_zoid';

Then retry the query above, I get 0 rows back. This would seem to indicate
that the trigger is logically correct.

Questions:

   - Am I doing it wrong?  For example, is there some limitation on trigger
   procedures that I'm violating?
   - If a trigger procedure errors, is the transaction aborted? (I would
   hope so.)
   - Should I expect triggers to be reliable and rely on them for database
   integrity?
   - Any suggestions on how to debug this?

Jim

-- 
Jim Fulton
http://jimfulton.info


Re: [GENERAL] Index-only scan on GIN index for COUNT() queries

2017-06-15 Thread Alexander Kuzmenkov

On 15.06.2017 17:43, Seamus Abshere wrote:


Theoretically, could support be added for Index-only scans on GIN
indexes when only a COUNT() is requested?

Hi Seamus,

I am working on a patch that adds this possibility for GIN and any other 
indexes that support bitmap scans.
You can follow the development progress in this thread: 
https://www.postgresql.org/message-id/flat/ca192322-29fd-9e60-3766-fe5e69d1f9af%40postgrespro.ru#ca192322-29fd-9e60-3766-fe5e69d1f...@postgrespro.ru
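
As an aside, a workaround sometimes used until then (it only helps when the
key is fixed in advance, and it is not what the patch does) is a partial
index whose predicate matches the query:

    -- "id" stands in for any small column of mytable
    CREATE INDEX mytable_key1_idx ON mytable (id) WHERE jsonb_col ? 'key1';

    -- SELECT COUNT(*) FROM mytable WHERE jsonb_col ? 'key1'
    -- can then often be satisfied by an index-only scan on the partial
    -- index, provided the visibility map is reasonably up to date.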


--
Alexander Kuzmenkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





[GENERAL] CREATE FOREIGN TABLE difficulties

2017-06-15 Thread Paul Lavoie
Hi,

Long time user, new poster…

For assorted reasons I’ve decided to shard a database across multiple instances 
of postgresql running on the same machine. I’ve set up a lot of children 
servers with a ‘fdw’ user to work with the foreign data wrapper and created the 
child database along with a schema, and then on the main database go thru the 
process of:

CREATE SCHEMA myschema;
CREATE TABLE mytable (mycol TEXT);

CREATE EXTENSION postgres_fdw;
CREATE SERVER db001 FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host
'localhost', port '8001', dbname 'db001');
CREATE USER MAPPING FOR myuser SERVER db001 OPTIONS (user 'fdw', password
'XXX');
CREATE FOREIGN TABLE myschema.mytable () INHERITS (mytable) SERVER db001
OPTIONS (schema_name 'myschema', table_name 'mytable');

Attempts to SELECT from myschema.mytable then fail with a “relation 
“myschema.mytable” does not exist” error, and going into the child database 
shows no signs of any tables whatsoever.

This is under Postgresql 9.6.3, with the software being tested on servers 
running MacOS, NetBSD/amd64, & Solaris/x86_64 (various combinations). I’ve 
tried removing the schema qualifications, redoing the host as properly remote 
rather than localhost, removing the port number, all without any signs of 
success. Oddly, the inverse of IMPORT FOREIGN SCHEMA appears to work.

I’m particularly curious as to how one would troubleshoot this scenario. I’m 
somewhat surprised at the CREATE FOREIGN TABLE returning success when it 
doesn’t appear to have done all the work - the PostgreSQL instance must be
present, but it doesn’t complain if the database, never mind the schema, 
doesn’t exist.

If this turns out to be a bug, I’ll happily move to the bug mailing list to 
discuss further. But under the possibility I’m missing the obvious, I’d thought 
I’d try here first.

I’m going to go try the 10.0 beta now…

Thanks!

- Paul



[GENERAL] Index-only scan on GIN index for COUNT() queries

2017-06-15 Thread Seamus Abshere
hi,

We have a GIN index on jsonb_col. We always get Bitmap Index Scan +
Bitmap Heap Scan when we do things like

SELECT COUNT(*) FROM mytable WHERE jsonb_col ? 'key1'

Theoretically, could support be added for Index-only scans on GIN
indexes when only a COUNT() is requested?

Thanks,
Seamus

PS. Here is the real query analyze:
https://gist.github.com/seamusabshere/b9d72132361fa598f7a431fa1bcb120f

--
Seamus Abshere, SCEA
http://faraday.io
https://github.com/seamusabshere
http://linkedin.com/in/seamusabshere




Re: [GENERAL] Repmgr + pgbouncer - Notification of master promotion to application level ...

2017-06-15 Thread Andreas Kretschmer



Am 15.06.2017 um 11:57 schrieb Martin Goodson:



The issues I think I would have with pgbouncer at the application 
level is ...


1) What if an application server is down when pgbouncer tries to 
update where the database IP is pointing to? When it is brought back 
into service could that create a split-brain scenario if it's still 
pointing to the old master? Since the only STONITH here is 'I've told 
you to talk to server X instead of server Y' what happens if somebody 
doesn't get that message and the old master is still up and about 
(e.g. the failover was caused by a transient error such as a power 
issue or network issue that the 'old master' was able to quickly 
recover from and come back online)? :)


yeah, that's problematic here.

2) We use a non-trivial number of application servers. Could that 
create 'failover lag' due to the amount of time it takes to notify all 
the pgbouncers to stop, change IP, and restart?


possible, yes.

3) Since we use a non-trivial number of application servers, the 
administrative overhead of pushing all user password changes up to 
each pgbouncer could be a bit of a pain (candidate for an automation 
tool like ansible, perhaps?)


use auth_query instead of auth_file.
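
auth_query makes pgbouncer look the password up in the database itself. A
sketch of the server-side helper it is typically pointed at (the function
name and the dedicated "pgbouncer" role are assumptions, adjust to taste):

    CREATE FUNCTION pgbouncer_get_auth(p_user text,
                                       OUT username text, OUT passwd text)
    RETURNS record LANGUAGE sql SECURITY DEFINER AS $$
        SELECT usename, passwd FROM pg_catalog.pg_shadow
        WHERE usename = p_user;
    $$;
    REVOKE ALL ON FUNCTION pgbouncer_get_auth(text) FROM PUBLIC;
    GRANT EXECUTE ON FUNCTION pgbouncer_get_auth(text) TO pgbouncer;

    -- pgbouncer.ini then references it, for example:
    --   auth_user  = pgbouncer
    --   auth_query = SELECT * FROM pgbouncer_get_auth($1)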


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: [GENERAL] Repmgr + pgbouncer - Notification of master promotion to application level ...

2017-06-15 Thread Martin Goodson

On 15/06/2017 05:27, Andreas Kretschmer wrote:



Am 15.06.2017 um 01:18 schrieb Martin Goodson:



I'm just wondering how people may have implemented this. Do people 
setup pgbouncer nodes on the database servers themselves, on 
application servers, in the middle tier between the application and 
database, and so forth, or some combination of the three? I can think 
of some advantages or drawbacks for each. Or do people find that 
repmgr works better with other tools to handle the promotion 
notification outside the database cluster?


Basically I'm new to this, and I'm wondering how folks have handled 
this issue. I'm basically looking for the community's wisdom :)




Usually we recommend to install pgbouncer on the app-servers.

If you have full control of the application you can try to integrate 
the logic into the application (provide a list of servers, the new 
pg10 version of libpq works in a similar way:
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=274bb2b3857cc987cfa21d14775cae9b0dababa5 

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=721f7bd3cbccaf8c07cad2707826b83f84694832 


)

Regards, Andreas



Hello.

Unfortunately, no control of the application layer.

Very interesting feedback so far. Thanks, everyone!

The issues I think I would have with pgbouncer at the application level 
is ...


1) What if an application server is down when pgbouncer tries to update 
where the database IP is pointing to? When it is brought back into 
service could that create a split-brain scenario if it's still pointing 
to the old master? Since the only STONITH here is 'I've told you to talk 
to server X instead of server Y' what happens if somebody doesn't get 
that message and the old master is still up and about (e.g. the failover 
was caused by a transient error such as a power issue or network issue 
that the 'old master' was able to quickly recover from and come back 
online)? :)
2) We use a non-trivial number of application servers. Could that create 
'failover lag' due to the amount of time it takes to notify all the 
pgbouncers to stop, change IP, and restart?
3) Since we use a non-trivial number of application servers, the 
administrative overhead of pushing all user password changes up to each 
pgbouncer could be a bit of a pain (candidate for an automation tool 
like ansible, perhaps?)


How do people deal with that?

Regards,

Martin.

--
Martin Goodson

"Have you thought up some clever plan, Doctor?"
"Yes, Jamie, I believe I have."
"What're you going to do?"
"Bung a rock at it."




Re: [GENERAL] Repmgr + pgbouncer - Notification of master promotion to application level ...

2017-06-15 Thread Andreas Kretschmer



Am 15.06.2017 um 08:26 schrieb Rory Campbell-Lange:

On 15/06/17, Andreas Kretschmer (andr...@a-kretschmer.de) wrote:

Am 15.06.2017 um 01:18 schrieb Martin Goodson:

...Do people setup pgbouncer nodes on the database servers
themselves, on application servers, in the middle tier between the
application and database, and so forth, or some combination of the
three?

Usually we recommend to install pgbouncer on the app-servers.

If you have full control of the application you can try to integrate the
logic into the application (provide a list of servers, the new pg10-version
of libpq works in a similar way:
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=274bb2b3857cc987cfa21d14775cae9b0dababa5
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=721f7bd3cbccaf8c07cad2707826b83f84694832
)

Hi Andreas

The list of servers idea is a cool enhancement. However would pgbouncer
(or another client) be able to detect which of those servers were in slave
mode?


It is possible to detect which one is the master. So, if you know the master
and you have a list of all of them, you also know the standbys ;-)




Otherwise, if there is a temporary glitch in communications with the
master, a client (such as pgbouncer) could move to try inserts on a
slave.


Right. You can play with the keepalives* settings to avoid problems.


Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: [GENERAL] Repmgr + pgbouncer - Notification of master promotion to application level ...

2017-06-15 Thread Rory Campbell-Lange
On 15/06/17, Andreas Kretschmer (andr...@a-kretschmer.de) wrote:
> 
> Am 15.06.2017 um 01:18 schrieb Martin Goodson:
> >
> >...Do people setup pgbouncer nodes on the database servers
> >themselves, on application servers, in the middle tier between the
> >application and database, and so forth, or some combination of the
> >three? 
> 
> Usually we recommend to install pgbouncer on the app-servers.
> 
> If you have full control of the application you can try to integrate the
> logic into the application (provide a list of servers, the new pg10-version
> of libpq works in a similar way:
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=274bb2b3857cc987cfa21d14775cae9b0dababa5
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=721f7bd3cbccaf8c07cad2707826b83f84694832
> )

Hi Andreas

The list of servers idea is a cool enhancement. However would pgbouncer
(or another client) be able to detect which of those servers were in slave
mode?

Otherwise, if there is a temporary glitch in communications with the
master, a client (such as pgbouncer) could move to try inserts on a
slave.

Rory




Re: [GENERAL] Ordering of window functions with no order specified?

2017-06-15 Thread Ken Tanzer
Thanks for the replies!


-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: [GENERAL] Repmgr + pgbouncer - Notification of master promotion to application level ...

2017-06-15 Thread Rory Campbell-Lange
On 15/06/17, Martin Goodson (kaema...@googlemail.com) wrote:
> On 14/06/2017 19:54, Rory Campbell-Lange wrote:
> >On 14/06/17, Martin Goodson (kaema...@googlemail.com) wrote:
> >>The new master's repmgr promote script will execute commands to pause
> >>pgbouncer, reconfigure pgbouncer to point to the new database address, and
> >>then resume.
> >
> >You could just move the service ip address at the new postgresql master
> >to which the pgbouncer instances on each app server could reconnect.
> >
> >I assume, with reference to https://github.com/2ndQuadrant/repmgr, that
> >you could use repmgrd to trigger a script to do just that at
> >"switchover", possibly something you would need to monitor the
> >"repl_events" table to achieve.

> I'm just wondering how people may have implemented this. Do people setup
> pgbouncer nodes on the database servers themselves, on application servers,

We have pgbouncer on the application servers and shift the postgres
master ip address to the promoted master.


