Re: jsonb_set() strictness considered harmful to data

2019-10-22 Thread David G. Johnston
On Tue, Oct 22, 2019 at 3:55 PM Peter J. Holzer  wrote:

> On 2019-10-20 13:20:23 -0700, Steven Pousty wrote:
> > I would think though that raising an exception is better than a default
> > behavior which deletes data.
> > As an app dev I am quite used to all sorts of "APIs" throwing exceptions
> and
> > have learned to deal with them.
> >
> > This is my way of saying that raising an exception is an improvement
> over the
> > current situation. May not be the "best" solution but definitely an
> > improvement.
>
> I somewhat disagree. SQL isn't in general a language which uses
> exceptions a lot. It does have the value NULL to mean "unknown", and
> generally unknown combined with something else results in an unknown
> value again:
>
[...]

>
> Throwing an exception for a pure function seems "un-SQLy" to me. In
> particular, jsonb_set does something similar for json values as replace
> does for strings, so it should behave similarly.
>

Now if only the vast majority of users could have and keep this level of
understanding in mind while writing complex queries so that they remember
to always add protections to compensate for the unique design decision that
SQL has taken here...
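
(As an illustration of the kind of protection I mean, here is a sketch;
new_value stands in for whatever the application supplies:)

-- Guard jsonb_set() against an SQL NULL replacement value, which would
-- otherwise make the whole result (and the stored column) NULL:
UPDATE t
   SET j = jsonb_set(j, '{c}', coalesce(to_jsonb(new_value), 'null'::jsonb))
 WHERE id = 1;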

In this case I would favor a break from the historical to a more safe
design, regardless of its novelty in the codebase, since the usage patterns
and risks involved with typical JSON using code are considerably
different/larger than those for "replace".

Just because it's always been done one way, and we won't change existing
code, doesn't mean we shouldn't apply lessons learned to newer code.  In
the case of JSON maybe it's too late to worry about changing (though moving
to an exception is safe) but a policy choice now could at least pave the way
to avoiding this situation when the next new datatype is implemented.  In many
functions we do provoke exceptions when known invalid input is provided -
supplying a function with a primary/important argument that is undefined
should fall into the same "malformed" category of problematic input.

David J.


Re: jsonb_set() strictness considered harmful to data

2019-10-22 Thread Peter J. Holzer
On 2019-10-22 09:16:05 +1100, raf wrote:
> Steven Pousty wrote:
> > In a perfect world I would agree with you. But often users do not read ALL
> > the documentation before they use the function in their code OR they are
> > not sure that the condition applies to them (until it does).
> 
> I'm well aware of that, hence the statement that this
> information needs to appear at the place in the
> documentation where the user is first going to
> encounter the function (i.e. in the table where its
> examples are).

I think this is a real weakness of the tabular format used for
documenting functions: while it is quite compact, which is nice if you
just want to look up a function's name or parameters, it really
discourages explanations longer than a single paragraph.

Section 9.9 gets around this problem by limiting the in-table
description to a few words and "see Section 9.9.x". So you basically
have to read the text and not just the table. Maybe that would make
sense for the json functions, too?

hp

-- 
   _  | Peter J. Holzer    | we build much bigger, better disasters now
|_|_) |                    | because we have much more sophisticated
| |   | h...@hjp.at        | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson




Re: jsonb_set() strictness considered harmful to data

2019-10-22 Thread Peter J. Holzer
On 2019-10-21 09:39:13 -0700, Steven Pousty wrote:
> Turning a JSON null into an SQL null and thereby "deleting" the data
> is not the path of least surprises.

But it doesn't do that: A JSON null is perfectly fine:

wds=> select jsonb_set('{"a": 1, "b": 2}'::jsonb, '{c}', 'null'::jsonb);
╔═════════════════════════════╗
║          jsonb_set          ║
╟─────────────────────────────╢
║ {"a": 1, "b": 2, "c": null} ║
╚═════════════════════════════╝
(1 row)


What returns SQL NULL is trying to replace a part of the JSON object with
an SQL NULL (i.e. unknown):

wds=> select jsonb_set('{"a": 1, "b": 2}'::jsonb, '{c}', NULL);
╔═══════════╗
║ jsonb_set ║
╟───────────╢
║ (∅)       ║
╚═══════════╝
(1 row)

hp

-- 
   _  | Peter J. Holzer    | we build much bigger, better disasters now
|_|_) |                    | because we have much more sophisticated
| |   | h...@hjp.at        | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson




Re: jsonb_set() strictness considered harmful to data

2019-10-22 Thread Peter J. Holzer
On 2019-10-20 13:20:23 -0700, Steven Pousty wrote:
> I would think though that raising an exception is better than a default
> behavior which deletes data.
> As an app dev I am quite used to all sorts of "APIs" throwing exceptions and
> have learned to deal with them.
> 
> This is my way of saying that raising an exception is an improvement over the
> current situation. May not be the "best" solution but definitely an
> improvement.

I somewhat disagree. SQL isn't in general a language which uses
exceptions a lot. It does have the value NULL to mean "unknown", and
generally unknown combined with something else results in an unknown
value again:

% psql wds
Null display is "(∅)".
Line style is unicode.
Border style is 2.
Unicode border line style is "double".
Timing is on.
Expanded display is used automatically.
psql (11.5 (Ubuntu 11.5-3.pgdg18.04+1))
Type "help" for help.

wds=> select 4 + NULL;
╔══════════╗
║ ?column? ║
╟──────────╢
║      (∅) ║
╚══════════╝
(1 row)

Time: 0.924 ms
wds=> select replace('steven', 'e', NULL);
╔═════════╗
║ replace ║
╟─────────╢
║ (∅)     ║
╚═════════╝
(1 row)

Time: 0.918 ms

Throwing an exception for a pure function seems "un-SQLy" to me. In
particular, jsonb_set does something similar for json values as replace
does for strings, so it should behave similarly.

hp

-- 
   _  | Peter J. Holzer    | we build much bigger, better disasters now
|_|_) |                    | because we have much more sophisticated
| |   | h...@hjp.at        | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson




Re: Upgrade mode will prevent the installer .... (pgAgent)

2019-10-22 Thread Kris Deugau

Boylan, Ross wrote:

I can't interpret this message, or figure out whether I should select the 
option or not.
My main problem is that my intuition is that checking "upgrade mode" will 
perform an upgrade, but the text of the message seems to say that checking upgrade mode 
will NOT perform an upgrade.

Since I don't know if images are OK, I'll try to transcribe the message:
"Upgrade mode will prevent the installer from modifying the database cluster by 
loading pgAgent SQL scripts in preparation for upgrading from earlier releases of 
PostgreSQL using the pg_upgrade tool."


It's a fine example of twisty English that's been compacted down for 
length instead of clarity; it took me a couple of passes to decide this 
is probably what it means:


If you check the box, the installation process of pgAgent will not load 
certain (any?) pgAgent scripts.  These scripts are for pgAgent's own 
internal use, when you upgrade Postgres itself using pg_upgrade.




Context: Running the Windows Installer for v12 with v11 still installed on the 
machine.  After the main install I selected several packages, including 
pgagent, which I also selected for v11. I have a mild preference for treating 
12 as a clean install without migrating anything from 11, since it was just a 
test database.


It sounds like you probably want to leave this unchecked, based on "I 
have a mild preference for treating 12 as a clean install without 
migrating anything from 11".


-kgd




Upgrade mode will prevent the installer .... (pgAgent)

2019-10-22 Thread Boylan, Ross
I can't interpret this message, or figure out whether I should select the 
option or not.
My main problem is that my intuition is that checking "upgrade mode" will 
perform an upgrade, but the text of the message seems to say that checking 
upgrade mode will NOT perform an upgrade.

Since I don't know if images are OK, I'll try to transcribe the message:
"Upgrade mode will prevent the installer from modifying the database cluster by 
loading pgAgent SQL scripts in preparation for upgrading from earlier releases 
of PostgreSQL using the pg_upgrade tool."

Context: Running the Windows Installer for v12 with v11 still installed on the 
machine.  After the main install I selected several packages, including 
pgagent, which I also selected for v11. I have a mild preference for treating 
12 as a clean install without migrating anything from 11, since it was just a 
test database.

I shut down postgres 11 and its pgagent from the Windows services dialogue, and 
then changed them to manual start.  My plan is to uninstall them if things go 
OK with 12.  Both are currently configured to use the same port.

Thanks.
Ross



Re: A question about sequences and backup/restore cycles

2019-10-22 Thread Stan Brown
Oh, it is the one we are working on.

One of my team members brought up this issue from a job where we worked on
a vendor-designed one.

I am convinced we do not have an issue now.

Thanks for your expertise.


On Tue, Oct 22, 2019 at 4:42 PM Adrian Klaver 
wrote:

> On 10/22/19 1:35 PM, stan wrote:
> > On Tue, Oct 22, 2019 at 12:48:41PM -0700, Adrian Klaver wrote:
> >> On 10/22/19 10:48 AM, stan wrote:
> >> Please reply to list also:
> >> Ccing list.
> >>
> >>
> >>>
> >>> Sorry if my description was not clear.
> >>>
> >>> No, we  do not mix test, and production data. Let me try to clarify the
> >>> question. Looking at a pg_dump, I see the following:
> >>>
> >>>
> >>> CREATE SEQUENCE public.customer_key_serial
> >>>   START WITH 1
> >>> INCREMENT BY 1
> >>> NO MINVALUE
> >>> NO MAXVALUE
> >>> CACHE 1;
> >>>
> >>>
> >>> Yet, in the same pg_dump file I have:
> >>>
> >>>
> >>>
> >>>
> >>> COPY public.customer (customer_key, cust_no, name, c_type, location,
> >>> bill_address_1, bill_address_2, bill_city, bill_state, bill_zip,
> >>> bill_country, bill_attention, bill_addressee, ship_address_1,
> >>> ship_address_2, ship_addressee, ship_attention, ship_city, ship_state,
> >>> ship_zip, office_phone_area_code, office_phone_exchange,
> >>> office_phone_number, office_phone_extension, cell_phone_area_code,
> >>> cell_phone_exchange, cell_phone_number, ship_phone_area_code,
> >>> ship_phone_exchange, ship_phone_number, ship_phone_extension,
> >>> fax_phone_area_code, fax_phone_exchange, fax_phone_number, status,
> modtime)
> >>> FROM stdin;
> >>> 1   12  Made Up Inc.INDUSTRIAL_CONVEYOR \N  101
> Nowhere
> >>> Ave.\N  LaGrangeGA  0   \N  \N  \N
> >>> \N  \N  \N  \N  \N  \N  0   \N  \N
>   \N
> >>>  \N  \N  \N  \N  \N  \N  \N  \N  \N
> >>>  \N  \N  ACTIVE  2019-09-30 23:55:04.594203+00
> >>>  2   5   Jimmys Favorite Customer.   PLASTICS\N
> >>>  56 Somewhere St.\N  No Such CitySC  0   \N
> >>>  \N  \N  \N  \N  \N  \N  \N  \N
> 0
> >>>  \N  \N  \N  \N  \N  \N  \N  \N  \N
> >>>  \N  \N  \N  \N  \N  ACTIVE  2019-09-30
> >>>  23:55:04.636827+00
> >>>
> >>> So it appears to me the customer table is going to get (correctly)
> populated
> >>> with the originally generated keys, yet the sequence will want to
> return a 1
> >>> the next time it is called, when a new customer gets inserted.
> >>>
> >>> Am I missing something here?
> >>>
> >>
> >> Yes something like this, in dump file, for non-serial sequence:
> >>
> >> CREATE SEQUENCE public.plant1_p_item_no_seq
> >>  START WITH 1
> >>  INCREMENT BY 1
> >>  NO MINVALUE
> >>  NO MAXVALUE
> >>  CACHE 1;
> >>
> >> SELECT pg_catalog.setval('public.plant1_p_item_no_seq', 5509, true);
> >>
> >> or for serial sequence:
> >>
> >> CREATE SEQUENCE public.avail_headers_line_id_seq
> >>  AS integer
> >>  START WITH 1
> >>  INCREMENT BY 1
> >>  NO MINVALUE
> >>  NO MAXVALUE
> >>  CACHE 1;
> >>
> >> ALTER TABLE ONLY public.avail_headers ALTER COLUMN line_id SET DEFAULT
> >> nextval('public.avail_headers_line_id_seq'::regclass);
> >>
> >>
> >> SELECT pg_catalog.setval('public.avail_headers_line_id_seq', 53, true);
> >>
> >> If you want to see how things are run on a higher level do something
> like:
> >>
> >> pg_dump -Fc -d some_db -f db.out
> >>
> >> pg_restore -l db.out > db_toc.txt
> >>
> >> -l on pg_restore creates a TOC(table of contents) showing the ordering
> of
> >> the schema recreation.
> >>
> >
> > Oh, excellent!
> >
> > Thanks for the patience to teach me about this.
> >
> > Does make me wonder what the vendor did to create our issue on their
> > database.
> >
>
> I thought it was your database you were concerned about?
>
> In any case tracking down the issue would require more information than
> has been provided. Like I said previously I would start with automated
> scripts that did not get the memo about the database changing under them.
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


-- 

UNIX is basically a simple operating system, but you have to be a genius to
understand the simplicity.

Dennis Ritchie 


Re: Primary key definition?

2019-10-22 Thread Michael Lewis
>
> > CREATE TABLE books (
> > id  SERIAL PRIMARY KEY,
> >
> > Which has the advantage of not having to manually create the sequences.
> Will
> > this also enforce that the "internally created sequence" will be
> initialized
> > to a value above the maximum key in use on a pg_restore?
>

I think you will still run into the same issue if your sequence is not
getting set to the proper value of max(id) or max(id) + 1, not sure which is
actually needed. You may get some benefits from using IDENTITY rather than
the pseudo-type of serial, as described in this blog post:

https://www.2ndquadrant.com/en/blog/postgresql-10-identity-columns/
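
(For reference, a minimal sketch of the identity-column form that post
describes; the table and column names are illustrative:)

CREATE TABLE books (
    id    bigint GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    title text NOT NULL
);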

Still, depending on how you are doing the data restore, you may need
something like this to ensure the sequence is updated.

select setval( 'table_id_seq', ( select max(id) + 1 from table ) );
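
(On the max(id) vs. max(id) + 1 question, a sketch may help; my_table is a
stand-in here, since a bare "table" is a reserved word. setval()'s third
argument, is_called, decides whether the passed value counts as already
handed out, so either form works if it matches the flag:)

-- is_called = true (the default): the next nextval() returns max(id) + 1
select setval('table_id_seq', (select max(id) from my_table), true);

-- is_called = false: the next nextval() returns the value itself,
-- so pass max(id) + 1 for the same effect
select setval('table_id_seq', (select max(id) + 1 from my_table), false);

-- (both raise an error on an empty table; wrap max(id) in coalesce() then)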


SQL Error [0A000]: ERROR: nondeterministic collations are not supported for LIKE

2019-10-22 Thread Jeff Lanzarotta
Hello,

I have a question about nondeterministic collations.  I have created a new
collation that is nondeterministic and created several columns which use
this collation.  Querying these columns works great until I use LIKE.  When
I do, I get the following error:

SQL Error [0A000]: ERROR: nondeterministic collations are not supported for
LIKE
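
(For context, a minimal reproduction sketch; the collation name and locale
here are illustrative, not the poster's actual definitions:)

CREATE COLLATION ci (provider = icu, locale = 'und-u-ks-level2', deterministic = false);
CREATE TABLE t (name text COLLATE ci);
SELECT * FROM t WHERE name LIKE 'a%';   -- fails with the error above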

Is there any plan to allow this functionality?

Thanks,


Re: A question about sequences and backup/restore cycles

2019-10-22 Thread Adrian Klaver

On 10/22/19 1:35 PM, stan wrote:

On Tue, Oct 22, 2019 at 12:48:41PM -0700, Adrian Klaver wrote:

On 10/22/19 10:48 AM, stan wrote:
Please reply to list also:
Ccing list.




Sorry if my description was not clear.

No, we  do not mix test, and production data. Let me try to clarify the
question. Looking at a pg_dump, I see the following:


CREATE SEQUENCE public.customer_key_serial
  START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;


Yet, in the same pg_dump file I have:




COPY public.customer (customer_key, cust_no, name, c_type, location,
bill_address_1, bill_address_2, bill_city, bill_state, bill_zip,
bill_country, bill_attention, bill_addressee, ship_address_1,
ship_address_2, ship_addressee, ship_attention, ship_city, ship_state,
ship_zip, office_phone_area_code, office_phone_exchange,
office_phone_number, office_phone_extension, cell_phone_area_code,
cell_phone_exchange, cell_phone_number, ship_phone_area_code,
ship_phone_exchange, ship_phone_number, ship_phone_extension,
fax_phone_area_code, fax_phone_exchange, fax_phone_number, status, modtime)
FROM stdin;
1   12  Made Up Inc.INDUSTRIAL_CONVEYOR \N  101 Nowhere
Ave.\N  LaGrangeGA  0   \N  \N  \N
\N  \N  \N  \N  \N  \N  0   \N  \N  \N
 \N  \N  \N  \N  \N  \N  \N  \N  \N
 \N  \N  ACTIVE  2019-09-30 23:55:04.594203+00
 2   5   Jimmys Favorite Customer.   PLASTICS\N
 56 Somewhere St.\N  No Such CitySC  0   \N
 \N  \N  \N  \N  \N  \N  \N  \N  0
 \N  \N  \N  \N  \N  \N  \N  \N  \N
 \N  \N  \N  \N  \N  ACTIVE  2019-09-30
 23:55:04.636827+00

So it appears to me the customer table is going to get (correctly) populated
with the originally generated keys, yet the sequence will want to return a 1
the next time it is called, when a new customer gets inserted.

Am I missing something here?



Yes something like this, in dump file, for non-serial sequence:

CREATE SEQUENCE public.plant1_p_item_no_seq
 START WITH 1
 INCREMENT BY 1
 NO MINVALUE
 NO MAXVALUE
 CACHE 1;

SELECT pg_catalog.setval('public.plant1_p_item_no_seq', 5509, true);

or for serial sequence:

CREATE SEQUENCE public.avail_headers_line_id_seq
 AS integer
 START WITH 1
 INCREMENT BY 1
 NO MINVALUE
 NO MAXVALUE
 CACHE 1;

ALTER TABLE ONLY public.avail_headers ALTER COLUMN line_id SET DEFAULT
nextval('public.avail_headers_line_id_seq'::regclass);


SELECT pg_catalog.setval('public.avail_headers_line_id_seq', 53, true);

If you want to see how things are run on a higher level do something like:

pg_dump -Fc -d some_db -f db.out

pg_restore -l db.out > db_toc.txt

-l on pg_restore creates a TOC(table of contents) showing the ordering of
the schema recreation.



Oh, excellent!

Thanks for the patience to teach me about this.

Does make me wonder what the vendor did to create our issue on their
database.



I thought it was your database you were concerned about?

In any case tracking down the issue would require more information than 
has been provided. Like I said previously I would start with automated 
scripts that did not get the memo about the database changing under them.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: A question about sequences and backup/restore cycles

2019-10-22 Thread stan
On Tue, Oct 22, 2019 at 12:48:41PM -0700, Adrian Klaver wrote:
> On 10/22/19 10:48 AM, stan wrote:
> Please reply to list also:
> Ccing list.
> 
> 
> > 
> > Sorry if my description was not clear.
> > 
> > No, we  do not mix test, and production data. Let me try to clarify the
> > question. Looking at a pg_dump, I see the following:
> > 
> > 
> > CREATE SEQUENCE public.customer_key_serial
> >  START WITH 1
> > INCREMENT BY 1
> > NO MINVALUE
> > NO MAXVALUE
> > CACHE 1;
> > 
> > 
> > Yet, in the same pg_dump file I have:
> > 
> > 
> > 
> > 
> > COPY public.customer (customer_key, cust_no, name, c_type, location,
> > bill_address_1, bill_address_2, bill_city, bill_state, bill_zip,
> > bill_country, bill_attention, bill_addressee, ship_address_1,
> > ship_address_2, ship_addressee, ship_attention, ship_city, ship_state,
> > ship_zip, office_phone_area_code, office_phone_exchange,
> > office_phone_number, office_phone_extension, cell_phone_area_code,
> > cell_phone_exchange, cell_phone_number, ship_phone_area_code,
> > ship_phone_exchange, ship_phone_number, ship_phone_extension,
> > fax_phone_area_code, fax_phone_exchange, fax_phone_number, status, modtime)
> > FROM stdin;
> > 1   12  Made Up Inc.INDUSTRIAL_CONVEYOR \N  101 Nowhere
> > Ave.\N  LaGrangeGA  0   \N  \N  \N
> > \N  \N  \N  \N  \N  \N  0   \N  \N  \N
> > \N  \N  \N  \N  \N  \N  \N  \N  \N
> > \N  \N  ACTIVE  2019-09-30 23:55:04.594203+00
> > 2   5   Jimmys Favorite Customer.   PLASTICS\N
> > 56 Somewhere St.\N  No Such CitySC  0   \N
> > \N  \N  \N  \N  \N  \N  \N  \N  0
> > \N  \N  \N  \N  \N  \N  \N  \N  \N
> > \N  \N  \N  \N  \N  ACTIVE  2019-09-30
> > 23:55:04.636827+00
> > 
> > So it appears to me the customer table is going to get (correctly) populated
> > with the originally generated keys, yet the sequence will want to return a 1
> > the next time it is called, when a new customer gets inserted.
> > 
> > Am I missing something here?
> > 
> 
> Yes something like this, in dump file, for non-serial sequence:
> 
> CREATE SEQUENCE public.plant1_p_item_no_seq
> START WITH 1
> INCREMENT BY 1
> NO MINVALUE
> NO MAXVALUE
> CACHE 1;
> 
> SELECT pg_catalog.setval('public.plant1_p_item_no_seq', 5509, true);
> 
> or for serial sequence:
> 
> CREATE SEQUENCE public.avail_headers_line_id_seq
> AS integer
> START WITH 1
> INCREMENT BY 1
> NO MINVALUE
> NO MAXVALUE
> CACHE 1;
> 
> ALTER TABLE ONLY public.avail_headers ALTER COLUMN line_id SET DEFAULT
> nextval('public.avail_headers_line_id_seq'::regclass);
> 
> 
> SELECT pg_catalog.setval('public.avail_headers_line_id_seq', 53, true);
> 
> If you want to see how things are run on a higher level do something like:
> 
> pg_dump -Fc -d some_db -f db.out
> 
> pg_restore -l db.out > db_toc.txt
> 
> -l on pg_restore creates a TOC(table of contents) showing the ordering of
> the schema recreation.
> 

Oh, excellent!

Thanks for the patience to teach me about this.

Does make me wonder what the vendor did to create our issue on their
database.

-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: Primary key definition?

2019-10-22 Thread Ron

On 10/22/19 2:47 PM, stan wrote:

I  started doing the following to define my primary keys long ago and in a
universe far away:

CREATE TABLE employee (
employee_key   integer DEFAULT nextval('employee_key_serial')
PRIMARY KEY ,

We ran into a scenario, after a total db restore on a project, where we got
errors inserting new data because the keys were duplicated. Looking at a
pg_dump, it appears to me that I now understand why. Although the values of
the keys, and the data structures that reference them look like they will
get restored correctly, it appears to me that the sequences get recreated with
an initial value of 1, which means that on the next insert we will get 1 for
a key, which likely is already used. Looks like this is a different way of
defining this:


CREATE TABLE books (
id  SERIAL PRIMARY KEY,

Which has the advantage of not having to manually create the sequences. Will
this also enforce that the "internally created sequence" will be initialized
to a value above the maximum key in use on a pg_restore?


When I restored an 8.4 database to 9.6 using pg_dump/pg_restore, all of the 
sequences were restored using the sequence values at the time the pg_dump ran.



--
Angular momentum makes the world go 'round.




Re: Primary key definition?

2019-10-22 Thread Adrian Klaver

On 10/22/19 12:47 PM, stan wrote:

I  started doing the following to define my primary keys long ago and in a
universe far away:

CREATE TABLE employee (
employee_key   integer DEFAULT nextval('employee_key_serial')
PRIMARY KEY ,

We ran into a scenario, after a total db restore on a project, where we got
errors inserting new data because the keys were duplicated. Looking at a
pg_dump, it appears to me that I now understand why. Although the values of
the keys, and the data structures that reference them look like they will
get restored correctly, it appears to me that the sequences get recreated with
an initial value of 1, which means that on the next insert we will get 1 for
a key, which likely is already used. Looks like this is a different way of
defining this:


CREATE TABLE books (
id  SERIAL PRIMARY KEY,

Which has the advantage of not having to manually create the sequences. Will
this also enforce that the "internally created sequence" will be initialized
to a value above the maximum key in use on a pg_restore?


See my reply to your off-list post for why this is not correct. My guess 
is you got the duplicate key errors because the new data you inserted 
after the restore was using PK values that overlapped the old data. That 
is probably down to some script not starting at a value > max(PK).









--
Adrian Klaver
adrian.kla...@aklaver.com




Re: A question about sequences and backup/restore cycles

2019-10-22 Thread Adrian Klaver

On 10/22/19 10:48 AM, stan wrote:
Please reply to list also:
Ccing list.




Sorry if my description was not clear.

No, we  do not mix test, and production data. Let me try to clarify the
question. Looking at a pg_dump, I see the following:


CREATE SEQUENCE public.customer_key_serial
 START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;


Yet, in the same pg_dump file I have:




COPY public.customer (customer_key, cust_no, name, c_type, location,
bill_address_1, bill_address_2, bill_city, bill_state, bill_zip,
bill_country, bill_attention, bill_addressee, ship_address_1,
ship_address_2, ship_addressee, ship_attention, ship_city, ship_state,
ship_zip, office_phone_area_code, office_phone_exchange,
office_phone_number, office_phone_extension, cell_phone_area_code,
cell_phone_exchange, cell_phone_number, ship_phone_area_code,
ship_phone_exchange, ship_phone_number, ship_phone_extension,
fax_phone_area_code, fax_phone_exchange, fax_phone_number, status, modtime)
FROM stdin;
1   12  Made Up Inc.INDUSTRIAL_CONVEYOR \N  101 Nowhere
Ave.\N  LaGrangeGA  0   \N  \N  \N
\N  \N  \N  \N  \N  \N  0   \N  \N  \N
\N  \N  \N  \N  \N  \N  \N  \N  \N
\N  \N  ACTIVE  2019-09-30 23:55:04.594203+00
2   5   Jimmys Favorite Customer.   PLASTICS\N
56 Somewhere St.\N  No Such CitySC  0   \N
\N  \N  \N  \N  \N  \N  \N  \N  0
\N  \N  \N  \N  \N  \N  \N  \N  \N
\N  \N  \N  \N  \N  ACTIVE  2019-09-30
23:55:04.636827+00

So it appears to me the customer table is going to get (correctly) populated
with the originally generated keys, yet the sequence will want to return a 1
the next time it is called, when a new customer gets inserted.

Am I missing something here?



Yes something like this, in dump file, for non-serial sequence:

CREATE SEQUENCE public.plant1_p_item_no_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;

SELECT pg_catalog.setval('public.plant1_p_item_no_seq', 5509, true);

or for serial sequence:

CREATE SEQUENCE public.avail_headers_line_id_seq
AS integer
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;

ALTER TABLE ONLY public.avail_headers ALTER COLUMN line_id SET DEFAULT 
nextval('public.avail_headers_line_id_seq'::regclass);



SELECT pg_catalog.setval('public.avail_headers_line_id_seq', 53, true);

If you want to see how things are run on a higher level do something like:

pg_dump -Fc -d some_db -f db.out

pg_restore -l db.out > db_toc.txt

-l on pg_restore creates a TOC(table of contents) showing the ordering 
of the schema recreation.



--
Adrian Klaver
adrian.kla...@aklaver.com




Primary key definition?

2019-10-22 Thread stan
I  started doing the following to define my primary keys long ago and in a
universe far away:

CREATE TABLE employee (
employee_key   integer DEFAULT nextval('employee_key_serial')
PRIMARY KEY ,

We ran into a scenario, after a total db restore on a project, where we got
errors inserting new data because the keys were duplicated. Looking at a
pg_dump, it appears to me that I now understand why. Although the values of
the keys, and the data structures that reference them look like they will
get restored correctly, it appears to me that the sequences get recreated with
an initial value of 1, which means that on the next insert we will get 1 for
a key, which likely is already used. Looks like this is a different way of
defining this:


CREATE TABLE books (
id  SERIAL PRIMARY KEY,

Which has the advantage of not having to manually create the sequences. Will
this also enforce that the "internally created sequence" will be initialized
to a value above the maximum key in use on a pg_restore?
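
(If it behaves like the serial sequences shown elsewhere in these threads,
the dump should carry the counter along; a sketch of the line to look for
in the dump file, with an illustrative sequence name and value:)

SELECT pg_catalog.setval('public.books_id_seq', 42, true);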



-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: Recovering disk space

2019-10-22 Thread Jeff Janes
On Thu, Oct 10, 2019 at 3:57 AM stan  wrote:

> However, my 50G disk is still 96% full. How can I recover the disk space I
> seem to have used up doing this?
>

The bloated storage is likely not under PostgreSQL's control.  Use the
tools provided by your OS to figure out what is using the space.
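
(One way to confirm that, a sketch: ask PostgreSQL how much space it thinks
it is using, and compare that with what the OS reports:)

SELECT pg_size_pretty(sum(pg_database_size(datname))) FROM pg_database;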

Cheers,

Jeff


Re: Regarding db dump with Fc taking very long time to completion

2019-10-22 Thread Jeff Janes
On Fri, Aug 30, 2019 at 5:51 AM Durgamahesh Manne 
wrote:

> Hi
> To respected international postgresql team
>
> I am using postgresql version 11.4
> I have a scheduled logical dump job which runs once daily at the db level
> There is one table that has write-intensive activity every 40 seconds
> in the db
> The size of the table is about 88GB
>  A logical dump of that table is taking more than 7 hours to complete
>

That seems very slow.  I get about 2 GB per minute on my low-end laptop.
Does your schema use obscure data types which might be hard to process?

Using your system tools (like "top" for linux), what is going on?  Is time
spent in pg_dump itself, or in postgres doing the COPY?  Is it CPU bound or
IO bound?  Can you use "perf top" to see where it is spending its time?

How long does it take if you turn off compression, and stream the output
into the void rather than saving it?

time pg_dump -p  -Fc -Z0 | wc -c

Alternatively, can you use physical rather than logical backups?
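
(A sketch of the physical route, if your setup allows it; the target
directory is illustrative, and you may need -h/-p/-U connection options:)

pg_basebackup -D /backups/base -Ft -z -P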

Cheers,

Jeff


An issue installing an extension

2019-10-22 Thread stan


I have 3 machines that _should_ be nearly identical. On 2 of them, I have
no issues installing an extension I compiled. On one I get the following
error message:

icadb=# CREATE EXTENSION IF NOT EXISTS pg_libphonenumber;
ERROR:  could not load library
"/usr/lib/postgresql/11/lib/pg_libphonenumber.so": libphonenumber.so.7:
cannot open shared object file: No such file or directory
icadb=# ls -l /usr/lib/postgresql/11/lib/pg_libphonenumber.so


Yet, 

root@ica-db:/home/stan/BUILD/pg-libphonenumber-master# ls -l 
/usr/lib/postgresql/11/lib/pg_libphonenumber.so
-rwxr-xr-x 1 root root 182976 Oct 22 16:11 
/usr/lib/postgresql/11/lib/pg_libphonenumber.so

This appears to be identical to the other machines, where this works:

root@smokey:/home/stan/BUILD/pg-libphonenumber-master# logout
stan@smokey:~/BUILD/pg-libphonenumber-master$ ls -l 
/usr/lib/postgresql/11/lib/pg_libphonenumber.so
-rwxr-xr-x 1 root root 182976 Oct 22 09:47 
/usr/lib/postgresql/11/lib/pg_libphonenumber.so


What else could be wrong?


-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: An issue installing an extension

2019-10-22 Thread Tom Lane
stan  writes:
> I have 3 machines that _should_ be nearly identical. On 2 of them, I have
> no issues installing an extension I compiled. On one I get the following
> error message:

> icadb=# CREATE EXTENSION IF NOT EXISTS pg_libphonenumber;
> ERROR:  could not load library
> "/usr/lib/postgresql/11/lib/pg_libphonenumber.so": libphonenumber.so.7:
> cannot open shared object file: No such file or directory
> icadb=# ls -l /usr/lib/postgresql/11/lib/pg_libphonenumber.so

This seems to be complaining about another library "libphonenumber.so.7"
that pg_libphonenumber.so depends on.  I think you forgot to install
that one on this machine.
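
(A quick way to check, sketched here: ldd lists the shared libraries a .so
depends on and marks any it cannot find:)

ldd /usr/lib/postgresql/11/lib/pg_libphonenumber.so | grep "not found"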

regards, tom lane




Lookup Primary Key of Foreign Server's Table

2019-10-22 Thread Chris Morris
I'm looking for a system query that will look up the primary key column on an
fdw table. It's possible we need to declare that part of the foreign
table's schema in the local (is that the right term?) database?

Here's the foreign table - I don't see anything showing a primary key, so
my hunch is we need to declare it in the local schema?

=> \d sidecar_link.actions
                              Foreign table "sidecar_link.actions"
     Column      |            Type             | Collation | Nullable |                     Default                      | FDW options
-----------------+-----------------------------+-----------+----------+--------------------------------------------------+-------------
 id              | bigint                      |           | not null | nextval('sidecar_link.actions_id_seq'::regclass) |
 user_session_id | bigint                      |           | not null |                                                  |
 user_id         | bigint                      |           | not null |                                                  |
 created_at      | timestamp without time zone |           | not null | now()                                            |
 occurred_at     | timestamp without time zone |           | not null | now()                                            |
 thing_id        | integer                     |           |          |                                                  |
 parent_thing_id | integer                     |           |          |                                                  |
 viewing_id      | integer                     |           |          |                                                  |
 origin          | origin                      |           |          | 'mysteryscience'::origin                         |
 scope           | text                        |           | not null |                                                  |
 name            | text                        |           | not null |                                                  |
 details         | text                        |           |          |                                                  |
 request_path    | text                        |           |          |                                                  |
Server: pg_mysterysci_sidecar
FDW options: (schema_name 'public', table_name 'actions')
Not really related question, but a curiosity: why does this table not show
in the list of foreign tables?

=> \det
   List of foreign tables
 Schema | Table | Server
--------+-------+--------
(0 rows)


Re: existing dblinks

2019-10-22 Thread Adrian Klaver

On 10/21/19 3:26 PM, Julie Nishimura wrote:
Hello, is there any way to find out if there are any dblinks on the 9.6 
postgresql server?


Are you looking for whether the extension was installed?

If so then in psql:

\dx

will tell you.

Or are you looking for code that uses dblink?

AFAIK dblink creates connections on demand, not from information in a 
persistent store. So there is no table you can query to get the 
information. You would have to search your code (internal and external) 
for references to it.


If there are open connections you can use:

https://www.postgresql.org/docs/11/contrib-dblink-get-connections.html

dblink_get_connections — returns the names of all open named dblink 
connections
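
(A sketch of what that looks like in practice:)

SELECT dblink_get_connections();
-- returns a text[] of open named connections (NULL/empty when none are open)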




Thanks



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Having more than one constraint trigger on a table

2019-10-22 Thread Andreas Joseph Krogh

On Tuesday 22 October 2019 at 17:12:59, Adrian Klaver <
adrian.kla...@aklaver.com > wrote: 
[snip]
 No.
 When I sort the triggers I get:

 test=# create table trg_str(fld_1 varchar);
 CREATE TABLE
 test=# insert into trg_str values ('trigger_1_update_fts'),
 ('trigger_2'), ('trigger_3'), ('trigger_1_check_nocycle');
 INSERT 0 4
 test=# select * from trg_test order by fld_1 ;
  id | fld_1
 ----+-------
 (0 rows)

 test=# select * from trg_str order by fld_1 ;
           fld_1
 -------------------------
  trigger_1_check_nocycle
  trigger_1_update_fts
  trigger_2
  trigger_3

 Is this how you want them to fire, as it does not match what you say above?: 

(I know they were not declared in that order, but..) 
Yes, all "trigger_1_*" triggers are the "actual triggers triggering the logic"; 
trigger_2 and trigger_3 are only there as part of the "make constraint-triggers 
fire only once"-mechanism, in which the function in the first trigger is the 
function performing the actual logic. 
So, since I want 2 "logical chunks" to happen, I have two "trigger_1"-triggers 
(there is no established terminology for this AFAIK), each calling a function 
performing the logic which is to happen only once (per row). 


"The first "main" trigger-function is update_company_fts_tf() ... The
 second "main" trigger-function is company_parent_no_cycle()"

 It might be easier to understand if you sketch out a schematic version of
 what you are trying to achieve. 

The point is: I want two functions to be called 

- update_company_fts_tf() 
- company_parent_no_cycle() 

, each only once, as constraint-triggers on the same table. So they are called 
by the "level 1 triggers" which must fire first. 

Is it clearer now what I'm trying to achieve? 


--
 Andreas Joseph Krogh

Re: Having more than one constraint trigger on a table

2019-10-22 Thread Adrian Klaver

On 10/22/19 7:54 AM, Andreas Joseph Krogh wrote:

Hi.
I have the following schema (question at bottom):
==

CREATE TABLE company(
    id          SERIAL PRIMARY KEY,
    parent_id   INTEGER REFERENCES company(id) DEFERRABLE INITIALLY DEFERRED,
    name        VARCHAR NOT NULL,
    duns_number VARCHAR,
    fts_all     tsvector,
    t_updated   BOOLEAN
);

CREATE OR REPLACE FUNCTION update_company_fts(p_company_id integer) RETURNS VOID AS $$
BEGIN
    UPDATE company comp
       SET fts_all = to_tsvector('simple',
                                 comp.name || ' ' || coalesce(comp.duns_number, ''))
     WHERE comp.id = p_company_id;

    RAISE NOTICE 'Running update of %', p_company_id;
END;
$$ LANGUAGE plpgsql;

-- re-index all:
CREATE OR REPLACE FUNCTION index_company() RETURNS VOID AS $$
DECLARE
    v_company_id INTEGER;
BEGIN
    FOR v_company_id IN (SELECT id FROM company)
    LOOP
        PERFORM update_company_fts(v_company_id);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION update_company_fts_tf() RETURNS TRIGGER AS $$
DECLARE
    v_company_id INTEGER;
BEGIN
    v_company_id := NEW.id;
    PERFORM update_company_fts(v_company_id);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- General cleanup functions for constraint triggers
CREATE OR REPLACE FUNCTION trigger_function_set_updated() RETURNS TRIGGER AS $$
BEGIN
    UPDATE company SET t_updated = TRUE WHERE id = NEW.id;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION trigger_function_clear_updated() RETURNS TRIGGER AS $$
BEGIN
    UPDATE company SET t_updated = NULL WHERE id = NEW.id;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER trigger_1_update_fts
    AFTER INSERT OR UPDATE OF name, duns_number
    ON company DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW WHEN (NEW.t_updated IS NULL)
    EXECUTE PROCEDURE update_company_fts_tf();

CREATE CONSTRAINT TRIGGER trigger_2
    AFTER INSERT OR UPDATE OF name, duns_number, parent_id
    ON company -- NOT DEFERRED
    FOR EACH ROW WHEN (NEW.t_updated IS NULL)
    EXECUTE PROCEDURE trigger_function_set_updated();

CREATE CONSTRAINT TRIGGER trigger_3
    AFTER INSERT OR UPDATE OF t_updated
    ON company DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW WHEN (NEW.t_updated)
    EXECUTE PROCEDURE trigger_function_clear_updated();

CREATE OR REPLACE FUNCTION company_parent_no_cycle() RETURNS TRIGGER AS $$
BEGIN
    IF (WITH RECURSIVE tr (id, parent_id, all_ids, cycle) AS (
            SELECT id, parent_id, ARRAY[id], false
              FROM company tr
             WHERE id = NEW.id
            UNION ALL
            SELECT t.id, t.parent_id, all_ids || t.id, t.id = ANY (all_ids)
              FROM company t
              JOIN tr ON t.parent_id = tr.id AND NOT cycle)
        SELECT count(*)
          FROM tr
         WHERE cycle = true) > 0
    THEN
        RAISE EXCEPTION 'Cannot have cyclic parent relations for company'
            USING SCHEMA = TG_TABLE_SCHEMA, TABLE = TG_TABLE_NAME,
                  CONSTRAINT = TG_NAME, ERRCODE = '23514' /*check_violation*/,
                  COLUMN = 'parent_id';
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER trigger_1_check_nocycle
    AFTER INSERT OR UPDATE OF parent_id
    ON company DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW WHEN (NEW.t_updated IS NULL)
    EXECUTE PROCEDURE company_parent_no_cycle();

==
What I'm after is to have 2 "logical constraint-triggers" perform logic 
/only once (each)/ on the "company"-table.
To make constraint-triggers fire only once (in PostgreSQL) a common 
method is to have a schema with 3 triggers, and a "magic" t_updated 
column, and they must be named so they (the triggers, not the 
trigger-functions) are fired in lexical order (alphabetically).  And 
it's important that the 2nd. trigger (here "trigger_2") is NOT deferred.
In my schema above I have 2 "logical chunks" which each perform some 
stuff and shall only do it once per row at commit-time.
The first "main" trigger-function is /update_company_fts_tf()/ and it 
updates a column (fts_all) of type tsvector. This is done in a trigger 
so that it may add stuff (customer-number etc.) from other tables as 
needed (which is not possible with PG-12's new STORED-columns).
The second "main" trigger-function is /company_parent_no_cycle()/ 
and  assures there are no parent/child-cycles.

Question:
1. I have "re-used" trigger_2 and trigger_3, and trigger_2's "...OR 
UPDATE OF"-list is the sum of all columns updated(used) in the 2 
main-triggers, that is "name", "duns_number" and parent_id. trigger_3 
only checks t_updated.

Is this correct usage, can I assume this will work correctly?
2. If I need a 3rd "logical trigger", is it enough to add another 
trigger named accordingly, for instance "trigger_1_someotherstuff", and 
add its column to the "UPDATE OF"-list of "trigger_2" (if it uses a 
column not already listed there)?

3. Is there some easier way to do this?
Is it clear what I'm asking about? :-)


No.
When I sort the triggers I get:

test=# create table trg_str(fld_1 varchar);
CREATE TABLE
test=# insert into trg_str values 

Having more than one constraint trigger on a table

2019-10-22 Thread Andreas Joseph Krogh

Hi. 

I have the following schema (question at bottom): 
== 
CREATE TABLE company(
    id          SERIAL PRIMARY KEY,
    parent_id   INTEGER REFERENCES company(id) DEFERRABLE INITIALLY DEFERRED,
    name        VARCHAR NOT NULL,
    duns_number VARCHAR,
    fts_all     tsvector,
    t_updated   BOOLEAN
);

CREATE OR REPLACE FUNCTION update_company_fts(p_company_id integer) RETURNS VOID AS $$
BEGIN
    UPDATE company comp
       SET fts_all = to_tsvector('simple',
                                 comp.name || ' ' || coalesce(comp.duns_number, ''))
     WHERE comp.id = p_company_id;

    RAISE NOTICE 'Running update of %', p_company_id;
END;
$$ LANGUAGE plpgsql;

-- re-index all:
CREATE OR REPLACE FUNCTION index_company() RETURNS VOID AS $$
DECLARE
    v_company_id INTEGER;
BEGIN
    FOR v_company_id IN (SELECT id FROM company)
    LOOP
        PERFORM update_company_fts(v_company_id);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION update_company_fts_tf() RETURNS TRIGGER AS $$
DECLARE
    v_company_id INTEGER;
BEGIN
    v_company_id := NEW.id;
    PERFORM update_company_fts(v_company_id);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- General cleanup functions for constraint triggers
CREATE OR REPLACE FUNCTION trigger_function_set_updated() RETURNS TRIGGER AS $$
BEGIN
    UPDATE company SET t_updated = TRUE WHERE id = NEW.id;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION trigger_function_clear_updated() RETURNS TRIGGER AS $$
BEGIN
    UPDATE company SET t_updated = NULL WHERE id = NEW.id;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER trigger_1_update_fts
    AFTER INSERT OR UPDATE OF name, duns_number
    ON company DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW WHEN (NEW.t_updated IS NULL)
    EXECUTE PROCEDURE update_company_fts_tf();

CREATE CONSTRAINT TRIGGER trigger_2
    AFTER INSERT OR UPDATE OF name, duns_number, parent_id
    ON company -- NOT DEFERRED
    FOR EACH ROW WHEN (NEW.t_updated IS NULL)
    EXECUTE PROCEDURE trigger_function_set_updated();

CREATE CONSTRAINT TRIGGER trigger_3
    AFTER INSERT OR UPDATE OF t_updated
    ON company DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW WHEN (NEW.t_updated)
    EXECUTE PROCEDURE trigger_function_clear_updated();

CREATE OR REPLACE FUNCTION company_parent_no_cycle() RETURNS TRIGGER AS $$
BEGIN
    IF (WITH RECURSIVE tr (id, parent_id, all_ids, cycle) AS (
            SELECT id, parent_id, ARRAY[id], false
              FROM company tr
             WHERE id = NEW.id
            UNION ALL
            SELECT t.id, t.parent_id, all_ids || t.id, t.id = ANY (all_ids)
              FROM company t
              JOIN tr ON t.parent_id = tr.id AND NOT cycle)
        SELECT count(*)
          FROM tr
         WHERE cycle = true) > 0
    THEN
        RAISE EXCEPTION 'Cannot have cyclic parent relations for company'
            USING SCHEMA = TG_TABLE_SCHEMA, TABLE = TG_TABLE_NAME,
                  CONSTRAINT = TG_NAME, ERRCODE = '23514' /*check_violation*/,
                  COLUMN = 'parent_id';
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER trigger_1_check_nocycle
    AFTER INSERT OR UPDATE OF parent_id
    ON company DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW WHEN (NEW.t_updated IS NULL)
    EXECUTE PROCEDURE company_parent_no_cycle();
== 

What I'm after is to have 2 "logical constraint-triggers" perform logic only 
once (each) on the "company"-table. 
To make constraint-triggers fire only once (in PostgreSQL) a common method is 
to have a schema with 3 triggers, and a "magic" t_updated column, and they must 
be named so they (the triggers, not the trigger-functions) are fired in lexical 
order (alphabetically). And it's important that the 2nd. trigger (here 
"trigger_2") is NOT deferred. 

In my schema above I have 2 "logical chunks" which each perform some stuff 
and shall only do it once per row at commit-time. 
The first "main" trigger-function is update_company_fts_tf() and it updates a 
column (fts_all) of type tsvector. This is done in a trigger so that it may add 
stuff (customer-number etc.) from other tables as needed (which is not possible 
with PG-12's new STORED-columns). 
The second "main" trigger-function is company_parent_no_cycle() and assures 
there are no parent/child-cycles. 

Question: 
1. I have "re-used" trigger_2 and trigger_3, and trigger_2's "...OR UPDATE 
OF"-list is the sum of all columns updated(used) in the 2 main-triggers, that 
is "name", "duns_number" and parent_id. trigger_3 only checks t_updated. 
Is this correct usage, can I assume this will work correctly? 
2. If I need a 3rd "logical trigger", is it enough to add another trigger 
named accordingly, for instance "trigger_1_someotherstuff", and add its column 
to the "UPDATE OF"-list of "trigger_2" (if it uses a column not already listed 
there)? 
3. Is there some easier way to do this? 

Is it clear what I'm asking about? :-) 

Thanks. 


--
 Andreas Joseph Krogh 

Re: FW: Re: A question about building pg-libphonenumber

2019-10-22 Thread Josef Šimánek
Hello.

You can take a look at the GitHub guides -
https://guides.github.com/activities/forking/#making-a-pull-request. Anyway,
if you would like to just contribute this one-timer, feel free to attach the
patch here and I'll open a pull request mentioning you as the author and
linking this thread on GitHub.

Alternatively, I think you can try to contact the author directly (you can find
some contact info at https://github.com/blm768).

On Tue, 22 Oct 2019 at 15:57, stan  wrote:

> - Forwarded message from Pavel Stehule  -
>
> Date: Tue, 22 Oct 2019 14:43:17 +0200
> From: Pavel Stehule 
> To: stan 
> Cc: pgsql-general 
> Subject: Re: A question about building pg-libphonenumber
> List-Id: 
>
> On Tue, 22 Oct 2019 at 14:37, stan  wrote:
>
> > On Tue, Oct 22, 2019 at 02:19:15PM +0200, Pavel Stehule wrote:
> > > Hi
> > >
> > > On Tue, 22 Oct 2019 at 14:15, stan  wrote:
> > >
> > > > OK, this seems a stupid question, but I do not see the answer.
> > > >
> > > >
> > > > I downloaded pg-libphonenumber, and am going to try to build it for
> PG
> > > > version 11. Looks like the last build was against version 9.6.
> > > >
> > > > It builds fine, the issue comes in when I try to do a make install.
> It
> > > > wants to install in the appropriate directory tree for 9.6. Should
> be a
> > > > simple thing to change, right? But grepping around, I cannot seem to
> > find
> > > > where this is defined. It is NOT in the Makefile.
> > > >
> > > > Can someone tell me where to change this to try it against the
> version
> > of
> > > > the DB engine I am running?
> > > >
> > >
> > > The build and installation is controlled by small application -
> pg_config
> > >
> > > [pavel@nemesis libvterm]$ pg_config --libdir
> > > /usr/local/pgsql/lib
> > > [pavel@nemesis libvterm]$ pg_config --sharedir
> > > /usr/local/pgsql/share
> > >
> > > if you have more versions on your comp, it is important what pg_config
> is
> > > executed - depends on PATH
> > >
> >
> > Thanks, yes turns out I had the wrong development package installed (I am
> > on Ubuntu). I removed the wrong one, and installed the correct one.
> >
> > BUT, now I am getting this error at the "make install" step:
> >
> > cd '/usr/lib/postgresql/11/lib/bitcode' && /usr/lib/llvm-6.0/bin/llvm-lto
> > -thinlto -thinlto-action=thinlink -o pg_libphonenumber.index.bc
> > pg_libphonenumber/src/error_handling.bc
> > pg_libphonenumber/src/pg_libphonenumber.bc
> > pg_libphonenumber/src/packed_phone_number.bc
> > error: can't create module summary index for buffer: Expected a single
> > module
> > LLVM ERROR: ThinLink didn't create an index
> > /usr/lib/postgresql/11/lib/pgxs/src/makefiles/pgxs.mk:229: recipe for
> > target 'install' failed
> > m
> >
> > Any thoughts on this?
> >
>
> I have no idea what it is, but I found this message
>
> https://github.com/PierreSenellart/provsql/issues/9
>
> and there is a patch
> https://github.com/rdkit/rdkit/issues/2192#issuecomment-445579020
>
> Probably it is a problem with some JIT feature flags on Postgres 11 and
> higher
>
> Pavel
>
> --
> > "They that would give up essential liberty for temporary safety deserve
> > neither liberty nor safety."
> > -- Benjamin Franklin
> >
>
> I did manage to get this package to compile and pass the minimal regression
> tests that are included with it.
>
> I would like to submit the modified Makefile that gets it to compile, but I
> have never done this with GitHub. Can someone point me to a HOWTO to do
> this, please?
>
>
>


FW: Re: A question about building pg-libphonenumber

2019-10-22 Thread stan
- Forwarded message from Pavel Stehule  -

Date: Tue, 22 Oct 2019 14:43:17 +0200
From: Pavel Stehule 
To: stan 
Cc: pgsql-general 
Subject: Re: A question about building pg-libphonenumber
List-Id: 

On Tue, 22 Oct 2019 at 14:37, stan  wrote:

> On Tue, Oct 22, 2019 at 02:19:15PM +0200, Pavel Stehule wrote:
> > Hi
> >
> > On Tue, 22 Oct 2019 at 14:15, stan  wrote:
> >
> > > OK, this seems a stupid question, but I do not see the answer.
> > >
> > >
> > > I downloaded pg-libphonenumber, and am going to try to build it for PG
> > > version 11. Looks like the last build was against version 9.6.
> > >
> > > It builds fine, the issue comes in when I try to do a make install. It
> > > wants to install in the appropriate directory tree for 9.6. Should be a
> > > simple thing to change, right? But grepping around, I cannot seem to
> find
> > > where this is defined. It is NOT in the Makefile.
> > >
> > > Can someone tell me where to change this to try it against the version
> of
> > > the DB engine I am running?
> > >
> >
> > The build and installation is controlled by small application - pg_config
> >
> > [pavel@nemesis libvterm]$ pg_config --libdir
> > /usr/local/pgsql/lib
> > [pavel@nemesis libvterm]$ pg_config --sharedir
> > /usr/local/pgsql/share
> >
> > if you have more versions on your comp, it is important what pg_config is
> > executed - depends on PATH
> >
>
> Thanks, yes turns out I had the wrong development package installed (I am
> on Ubuntu). I removed the wrong one, and installed the correct one.
>
> BUT, now I am getting this error at the "make install" step:
>
> cd '/usr/lib/postgresql/11/lib/bitcode' && /usr/lib/llvm-6.0/bin/llvm-lto
> -thinlto -thinlto-action=thinlink -o pg_libphonenumber.index.bc
> pg_libphonenumber/src/error_handling.bc
> pg_libphonenumber/src/pg_libphonenumber.bc
> pg_libphonenumber/src/packed_phone_number.bc
> error: can't create module summary index for buffer: Expected a single
> module
> LLVM ERROR: ThinLink didn't create an index
> /usr/lib/postgresql/11/lib/pgxs/src/makefiles/pgxs.mk:229: recipe for
> target 'install' failed
> m
>
> Any thoughts on this?
>

I have no idea what it is, but I found this message

https://github.com/PierreSenellart/provsql/issues/9

and there is a patch
https://github.com/rdkit/rdkit/issues/2192#issuecomment-445579020

Probably it is a problem with some JIT feature flags on Postgres 11 and higher

Pavel

-- 
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>

I did manage to get this package to compile and pass the minimal regression
tests that are included with it. 

I would like to submit the modified Makefile that gets it to compile, but I
have never done this with GitHub. Can someone point me to a HOWTO to do
this, please?




Re: pg_hba & ldap

2019-10-22 Thread Stephen Frost
Greetings,

* Diego (mrstephenam...@gmail.com) wrote:
> I have a problem with ldap authentication, I have a ldap string like this:
> 
> host all all 0.0.0.0/0 ldap ldapserver="10.20.90.251
> 10.20.90.252 10.10.90.251 10.10.90.252" ldapport=389...
> 
> Is it correct? If the first server is down, will pg go to the next one to
> continue authenticating?

Yes, that looks like it should work- is it not?

> It's a pg11 and ldap is an ipa server

Note that with an IPA setup, similar to if you were running Active
Directory, you have Kerberos and a KDC available, which is a much better
authentication mechanism that removes the need for the database server to
reach out to another system to handle the authentication, and avoids
having the user's password sent to the database server.  You might want
to consider using that (which is called 'gssapi' in PostgreSQL, which is
basically generalized Kerberos) instead of LDAP.
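
(A sketch of what that pg_hba.conf line might look like; the realm is
illustrative and depends on your IPA setup:)

host all all 0.0.0.0/0 gss include_realm=0 krb_realm=EXAMPLE.COM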

Thanks,

Stephen




Re: A question about building pg-libphonenumber

2019-10-22 Thread Pavel Stehule
út 22. 10. 2019 v 14:37 odesílatel stan  napsal:

> On Tue, Oct 22, 2019 at 02:19:15PM +0200, Pavel Stehule wrote:
> > Hi
> >
> > On Tue, 22 Oct 2019 at 14:15, stan  wrote:
> >
> > > OK, this seems a stupid question, but I do not see the answer.
> > >
> > >
> > > I downloaded pg-libphonenumber, and am going to try to build it for PG
> > > version 11. Looks like the last build was against version 9.6.
> > >
> > > It builds fine, the issue comes in when I try to do a make install. It
> > > wants to install in the appropriate directory tree for 9.6. Should be a
> > > simple thing to change, right? But grepping around, I cannot seem to
> find
> > > where this is defined. It is NOT in the Makefile.
> > >
> > > Can someone tell me where to change this to try it against the version
> of
> > > the DB engine I am running?
> > >
> >
> > The build and installation is controlled by small application - pg_config
> >
> > [pavel@nemesis libvterm]$ pg_config --libdir
> > /usr/local/pgsql/lib
> > [pavel@nemesis libvterm]$ pg_config --sharedir
> > /usr/local/pgsql/share
> >
> > if you have more versions on your comp, it is important what pg_config is
> > executed - depends on PATH
> >
>
> Thanks, yes turns out I had the wrong development package installed (I am
> on Ubuntu). I removed the wrong one, and installed the correct one.
>
> BUT, now I am getting this error at the "make install" step:
>
> cd '/usr/lib/postgresql/11/lib/bitcode' && /usr/lib/llvm-6.0/bin/llvm-lto
> -thinlto -thinlto-action=thinlink -o pg_libphonenumber.index.bc
> pg_libphonenumber/src/error_handling.bc
> pg_libphonenumber/src/pg_libphonenumber.bc
> pg_libphonenumber/src/packed_phone_number.bc
> error: can't create module summary index for buffer: Expected a single
> module
> LLVM ERROR: ThinLink didn't create an index
> /usr/lib/postgresql/11/lib/pgxs/src/makefiles/pgxs.mk:229: recipe for
> target 'install' failed
> m
>
> Any thoughts on this?
>

I have no idea what it is, but I found this message

https://github.com/PierreSenellart/provsql/issues/9

and there is a patch
https://github.com/rdkit/rdkit/issues/2192#issuecomment-445579020

Probably it is a problem with some JIT feature flags on Postgres 11 and higher

Pavel

-- 
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>


Re: A question about building pg-libphonenumber

2019-10-22 Thread stan
On Tue, Oct 22, 2019 at 02:19:15PM +0200, Pavel Stehule wrote:
> Hi
> 
> On Tue, 22 Oct 2019 at 14:15, stan  wrote:
> 
> > OK, this seems a stupid question, but I do not see the answer.
> >
> >
> > I downloaded pg-libphonenumber, and am going to try to build it for PG
> > version 11. Looks like the last build was against version 9.6.
> >
> > It builds fine, the issue comes in when I try to do a make install. It
> > wants to install in the appropriate directory tree for 9.6. Should be a
> > > simple thing to change, right? But grepping around, I cannot seem to find
> > > where this is defined. It is NOT in the Makefile.
> >
> > Can someone tell me where to change this to try it against the version of
> > the DB engine I am running?
> >
> 
> The build and installation is controlled by small application - pg_config
> 
> [pavel@nemesis libvterm]$ pg_config --libdir
> /usr/local/pgsql/lib
> [pavel@nemesis libvterm]$ pg_config --sharedir
> /usr/local/pgsql/share
> 
> if you have more versions on your comp, it is important what pg_config is
> executed - depends on PATH
> 

Thanks, yes turns out I had the wrong development package installed (I am
on Ubuntu). I removed the wrong one, and installed the correct one. 

BUT, now I am getting this error at the "make install" step:

cd '/usr/lib/postgresql/11/lib/bitcode' && /usr/lib/llvm-6.0/bin/llvm-lto
-thinlto -thinlto-action=thinlink -o pg_libphonenumber.index.bc
pg_libphonenumber/src/error_handling.bc
pg_libphonenumber/src/pg_libphonenumber.bc
pg_libphonenumber/src/packed_phone_number.bc
error: can't create module summary index for buffer: Expected a single
module
LLVM ERROR: ThinLink didn't create an index
/usr/lib/postgresql/11/lib/pgxs/src/makefiles/pgxs.mk:229: recipe for
target 'install' failed
m

Any thoughts on this?
-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: A question about building pg-libphonenumber

2019-10-22 Thread Pavel Stehule
Hi

út 22. 10. 2019 v 14:15 odesílatel stan  napsal:

> OK, this seems a stupid question, but I do not see the answer.
>
>
> I downloaded pg-libphonenumber, and am going to try to build it for PG
> version 11. Looks like the last build was against version 9.6.
>
> It builds fine, the issue comes in when I try to do a make install. It
> wants to install in the appropriate directory tree for 9.6. Should be a
> > simple thing to change, right? But grepping around, I cannot seem to find
> > where this is defined. It is NOT in the Makefile.
>
> Can someone tell me where to change this to try it against the version of
> the DB engine I am running?
>

The build and installation are controlled by a small application, pg_config:

[pavel@nemesis libvterm]$ pg_config --libdir
/usr/local/pgsql/lib
[pavel@nemesis libvterm]$ pg_config --sharedir
/usr/local/pgsql/share

If you have more versions on your machine, it is important which pg_config is
executed - it depends on PATH.

[pavel@nemesis libvterm]$ pg_config --version
PostgreSQL 13devel
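
With PGXS-based extensions you can usually also pick the target version
explicitly instead of editing PATH; a sketch (assuming the extension's
Makefile uses the standard PGXS include, and the usual Debian/Ubuntu path
for the 11 binary):

make PG_CONFIG=/usr/lib/postgresql/11/bin/pg_config
make PG_CONFIG=/usr/lib/postgresql/11/bin/pg_config install

Then the install paths come from that pg_config, not from the first one
found on PATH.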

Regards

Pavel


>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>


A question about building pg-libphonenumber

2019-10-22 Thread stan
OK, this seems a stupid question, but I do not see the answer.


I downloaded pg-libphonenumber, and am going to try to build it for PG
version 11. Looks like the last build was against version 9.6. 

It builds fine; the issue comes in when I try to do a make install. It
wants to install in the appropriate directory tree for 9.6. Should be a
simple thing to change, right? But grepping around, I cannot seem to find
where this is defined. It is NOT in the Makefile.

Can someone tell me where to change this to try it against the version of
the DB engine I am running?

-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




pg_hba & ldap

2019-10-22 Thread Diego

Hi all!

I have a problem with LDAP authentication. I have an ldap line like this:

host all all 0.0.0.0/0 ldap 
ldapserver="10.20.90.251 10.20.90.252 10.10.90.251 10.10.90.252" 
ldapport=389...


Is it correct? If the first server is down, will pg go to the next one to
continue authenticating?


It's pg 11, and the LDAP server is an IPA server.

thanks a lot,

Diego



A question about sequences and pg_restore

2019-10-22 Thread stan
On Mon, Oct 21, 2019 at 05:24:37PM -0700, Adrian Klaver wrote:
> On 10/21/19 5:13 PM, stan wrote:
> > 
> > I typically design a system with primary keys defined, like this:
> > 
> > 
> > CREATE TABLE employee (
> > employee_key   integer DEFAULT nextval('employee_key_serial')
> > PRIMARY KEY ,
> > 
> > I use scripts to build the database structures and load the data. I am
> > careful to get the dependencies in the correct order, so that the keys
> > that later structures depend on already exist.
> > 
> > Today I was going over the design for the current project with a friend
> > whose expertise I respect. He said that he had issues, after a
> > backup/restore cycle, on a system designed by an OEM that had
> > dependencies on keys generated from sequences.
> > 
> > Will I potentially have these issues? If so, what can I do different to
> > avoid this being an issue?
> 
> It is not clear to me what you are doing:
> 
> 1) Are you using pg_dump/pg_restore to populate a database?
> If so it will take care of the dependencies.
> 
> 2) Are you using a home built method to populate the database?
> In that case you take responsibility for dependencies.
> 

OK, let me elaborate.

During the development/test cycle, I routinely drop all objects in the
database for the project we are in the process of working on. This is my
standard way of doing this, and I understand the dependencies and keep them
working correctly with the scripts that recreate the database. This works
fine, as long as all that is in the database is test data.

At a point in time, we start putting real data in the db for continuing
testing, and later it becomes a production database with ONLY real data in
it.

At the first of those two points, we stop dropping/recreating the database
and start making backups using pg_dump.

My co-worker says that he had issues restoring a database designed by an
OEM, because the design of that database depended on matching keys generated
by sequences in primary tables with the keys stored in dependent tables. I
think I can see how this could happen if the keys assigned by the sequences
in the primary tables were regenerated during the restore. This would
result in the keys restored in the dependent tables pointing to older, now
incorrect keys.
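
For what it's worth, from a quick look at a pg_dump script it appears to
restore the sequence state explicitly rather than regenerating keys;
something like this (illustrative output, names from my schema, the value
made up):

CREATE SEQUENCE public.employee_key_serial;
SELECT pg_catalog.setval('public.employee_key_serial', 1042, true);

If I am reading that correctly, restored rows keep their original key values
and the sequence simply resumes past the highest value already handed out.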


-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin






Re: Too many SET TimeZone and Application_name queries

2019-10-22 Thread Amarendra Konda
Hi Adrian,

Thanks a lot for the right pointer. Setting -Duser.timezone=UTC has solved
the problem. Now we don't see any more queries related to SET TimeZone.
Thanks again for your time and valuable suggestion.
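
(In case it helps anyone else: we pass the flag in through Tomcat's startup
options. A minimal sketch, assuming a stock Tomcat layout where catalina.sh
sources bin/setenv.sh:

# bin/setenv.sh - created if absent; read by catalina.sh at startup
export CATALINA_OPTS="$CATALINA_OPTS -Duser.timezone=UTC"

That is what made the SET TimeZone statements disappear from our logs.)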

@Jeff: These queries were sent by the latest JDBC driver changes, nothing to
do with the Tomcat server. On the test server, we were seeing around 45K+
such queries under very minimal load.

Regards, Amarendra


On Mon, Oct 14, 2019 at 7:45 PM Adrian Klaver 
wrote:

> On 10/13/19 10:24 PM, Amarendra Konda wrote:
> > Hi Adrian,
> >
> > Thanks a lot for the response.
> >
> > We are using JDBC Driver 42.2.8 along with the Tomcat server on Java 8.
> > As part of the application code, we are NOT setting the timezone or
> > application names. One observation was that the application was querying
> > columns of the datatype "timestamp without time zone".
>
> Well something is explicitly setting the TimeZone. Per this:
>
>
> https://stackoverflow.com/questions/18447995/postgresql-9-2-jdbc-driver-uses-client-time-zone
>
> I would start with the JDBC driver. You might also try the Postgres JDBC
> list:
>
> https://www.postgresql.org/list/pgsql-jdbc/
>
>
> Re: application_name. I do not see a SET for this when I connect using
> application_name as part of the connection string:
>
> psql "host=localhost dbname=postgres user=postgres
> application_name=psql_client"
>
> [unknown]-[unknown]-2019-10-14 07:06:35.508 PDT-0LOG:  connection
> received: host=::1 port=46246
> [unknown]-postgres-2019-10-14 07:06:35.530 PDT-0LOG:  connection
> authorized: user=postgres database=postgres SSL enabled
> (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, bits=256,
> compression=off)
>
> So I believe this is being explicitly SET by something. Since
> 'PostgreSQL JDBC Driver' is the Postgres JDBC driver's name, I would start
> there.
>
> >
> > Regards, Amarendra
> >
> >
> > On Fri, Oct 11, 2019 at 7:33 PM Adrian Klaver  > > wrote:
> >
> > On 10/11/19 4:49 AM, Amarendra Konda wrote:
> >  > Hi,
> >  >
> >  > In our test environment, it was observed that too many queries were
> >  > getting fired at the database server, even though they are not part
> >  > of the SQL query execution.
> >  >
> >  > And the number of queries coming to the server is very high. Can you
> >  > please suggest how to avoid these queries reaching the database
> >  > server?
> >
> > My guess is that your application server/framework is setting the below.
> > What are you using for the above?
> >
> >  >
> >  >
> >  > 2019-10-10 13:37:25 UTC:172.31.77.194(36920):user1@new_unity_green1:[2549]:LOG:  duration: 0.081 ms  statement: SET application_name='PostgreSQL JDBC Driver';
> >  > 2019-10-10 13:37:25 UTC:172.31.69.112(45682):user1@new_unity_green0:[3545]:LOG:  duration: 0.036 ms  statement: SET TimeZone='UTC';
> >  > 2019-10-10 13:37:25 UTC:172.31.77.194(36902):user1@new_unity_green1:[2112]:LOG:  duration: 0.177 ms  statement: SET TimeZone='Etc/UTC';SET application_name='PostgreSQL JDBC Driver';
> >  >
> >  >
> >  > Environment:
> >  >
> >  >   * PGBouncer 1.9
> >  >   * JDBC Driver 42.2.8
> >  >   * Java 1.8
> >  >   * PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc
> > (GCC) 4.8.3
> >  > 20140911 (Red Hat 4.8.3-9), 64-bit
> >  >
> >  >
> >  > Application Server, pgBouncer and database server are all
> configured
> >  > with UTC only.
> >  >
> >  > =>show timezone;
> >  >   TimeZone
> >  > --
> >  >   UTC
> >  >
> >  > Thanks in advance,
> >  >
> >  > Regards, Amarendra
> >  >
> >  >
> >  >
> >  >
> >  >
> >  >
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com 
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


EXPLAIN BUFFERS and I/O timing accounting questions

2019-10-22 Thread Maciek Sakrejda
Hello,

I ran across an EXPLAIN plan and had some questions about its details. The
BUFFERS docs say:

> The number of blocks shown for an upper-level node includes those used by
> all its child nodes.

I initially assumed this would be cumulative, but I realized it's probably
not because some of the blocks affected by each child will actually
overlap. But this particular plan has a Shared Hit Blocks value at the root
(an Aggregate) that is smaller than those of some of its children (three
ModifyTables and a CTE Scan). This seems to contradict the documentation
(since even if children overlapped fully in their buffer usage, the parent
should still show a count equal to the costliest child's)--any idea what's
up? I can send the whole plan (attached? inline? it's ~15kb) if that helps.

I also noticed the I/O Read Time (from track_io_timing) of two children in
this plan is equal to the I/O Read Time in the root. Is I/O time
potentially fully parallelized across children? There are no parallel
workers according to the plan, so I'm surprised at this and would like to
understand better.

Also, a tangential question: why is the top-level structure of a JSON plan
an array? I've only ever seen one root node with a Plan key there.
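
For concreteness, what I always see is something like this (hand-trimmed,
key names as EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) prints them, the values
made up):

[
  {
    "Plan": {
      "Node Type": "Aggregate",
      "Shared Hit Blocks": 123,
      "Plans": [ ... ]
    }
  }
]

i.e. an array whose only element holds the single root node.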

Thanks,
Maciek