Re: [GENERAL] How to stop array_to_json from interpolating column names that weren't there

2017-07-19 Thread David G. Johnston
On Wed, Jul 19, 2017 at 8:53 PM, Guyren Howe  wrote:

> Thanks. Seeking greater understanding, why is json_agg(*) not equivalent?
>

Are you referring to the fact that this provokes an error?

"select json_agg(*) from schemata;"

The json_agg(expression) function has an arity of 1 (i.e., one input
argument only).  If you write:

select schemata from schemata;

you get one column in the output, while:

select * from schemata

results in an output relation having three columns - the "*" expands the
composite type in the FROM clause into its component columns in the
select-list.

json_agg(*) fails since it is not expecting 3 columns (though oddly the
error I see in 9.6 indicates it's actually looking for a 0-arity function -
"function json_agg() does not exist" - which implies that, in general, one
cannot place a * in a function call even when it would resolve to a single
column.  This may even be documented, but I haven't the desire to look
right now...)
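
For illustration, a quick sketch (from a scratch session, so treat the
details as approximate):

select json_agg(*) from schemata;     -- ERROR:  function json_agg() does not exist
select json_agg(s) from schemata s;   -- works: one composite-typed argument
select json_agg(s.*) from schemata s; -- also works: in a function call, s.*
                                      -- is the whole-row value, not an expansion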

David J.


Re: [GENERAL] How to stop array_to_json from interpolating column names that weren't there

2017-07-19 Thread Guyren Howe
On Jul 19, 2017, at 20:47, Paul Jungwirth  wrote:
> 
>> which is great. I have an array of perfect JSON objects. Now I just need
>> to turn that into a single JSON object.
> 
> I think you're saying you want it as a single JSON *array*, right? An object 
> of objects doesn't make sense. Assuming that's right, this seems to work:
> 
> db1=# select json_agg(schemata) from schemata;
>                                         json_agg
> --------------------------------------------------------------------------------
> [{"catalog_name":"db1","schema_name":"information_schema","schema_name_address":"/information_schema"},

Thanks. Seeking greater understanding, why is json_agg(*) not equivalent?



Re: [GENERAL] How to stop array_to_json from interpolating column names that weren't there

2017-07-19 Thread Paul Jungwirth

> which is great. I have an array of perfect JSON objects. Now I just need
> to turn that into a single JSON object.


I think you're saying you want it as a single JSON *array*, right? An 
object of objects doesn't make sense. Assuming that's right, this seems 
to work:


db1=# select json_agg(schemata) from schemata;
                                        json_agg
--------------------------------------------------------------------------------
[{"catalog_name":"db1","schema_name":"information_schema","schema_name_address":"/information_schema"},
{"catalog_name":"db1","schema_name":"pg_catalog","schema_name_address":"/pg_catalog"},
{"catalog_name":"db1","schema_name":"pg_temp_1","schema_name_address":"/pg_temp_1"},
{"catalog_name":"db1","schema_name":"pg_toast","schema_name_address":"/pg_toast"},
{"catalog_name":"db1","schema_name":"pg_toast_temp_1","schema_name_address":"/pg_toast_temp_1"},
{"catalog_name":"db1","schema_name":"public","schema_name_address":"/public"}]
(1 row)

Paul




Re: [GENERAL]

2017-07-19 Thread Igor Korot
Hi, ALL

On Thu, May 11, 2017 at 6:47 PM, Adrian Klaver
 wrote:
> On 05/11/2017 07:26 AM, Igor Korot wrote:
>>
>> Adrian et al,
>>
>> On Thu, May 11, 2017 at 9:54 AM, Adrian Klaver
>>  wrote:
>>>
>>> On 05/11/2017 06:24 AM, Igor Korot wrote:
>
>
>> Thank you.
>> Will take a look and modify to use in my program.
>>
>> I presume I'm allowed to do that, right?
>
>
> Yes.

I'm posting this hoping it will save someone else some time.

Here is the query I came up with:

SELECT DISTINCT ordinal_position, column_name, data_type,
       character_maximum_length, character_octet_length, numeric_precision,
       numeric_precision_radix, numeric_scale, is_nullable, column_default,
       CASE WHEN column_name IN
            (SELECT ccu.column_name
             FROM information_schema.constraint_column_usage ccu,
                  information_schema.table_constraints tc
             WHERE ccu.constraint_name = tc.constraint_name
               AND tc.constraint_type = 'PRIMARY KEY'
               AND ccu.table_name = $2)
       THEN 'YES' ELSE 'NO' END AS is_pk
FROM information_schema.columns col, information_schema.table_constraints tc
WHERE tc.table_schema = col.table_schema
  AND tc.table_name = col.table_name
  AND col.table_schema = $1
  AND col.table_name = $2
ORDER BY ordinal_position;

Is there a better way to do that? I don't mind using a Postgres-only solution.
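
One pg_catalog-based candidate for the primary-key part, as a sketch I
haven't fully verified ('mytable' here is just a placeholder):

SELECT a.attname
FROM pg_index i
JOIN pg_attribute a ON a.attrelid = i.indrelid
                   AND a.attnum = ANY (i.indkey)
WHERE i.indrelid = 'mytable'::regclass
  AND i.indisprimary;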

Thank you.

BTW, is there a difference between querying a real table and querying a view?

> --
> Adrian Klaver
> adrian.kla...@aklaver.com




Re: [GENERAL] How to stop array_to_json from interpolating column names that weren't there

2017-07-19 Thread Tom Lane
Guyren Howe  writes:
> I have an array of perfect JSON objects. Now I just need to turn that into a 
> single JSON object. But when I do the obvious:

> SELECT array_to_json(
>   SELECT
>   ARRAY_AGG(foo) AS foos
>   FROM (
>   SELECT
>   row_to_json(schemata_)
>   FROM
>   schemata_)
>   AS
>   foo
> )

> I get a syntax error.

Well, yeah, you're missing parentheses around the scalar sub-select.
(Whether fixing that will give the behavior you want is unclear,
but the syntax error is clear.)
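
That is, something along these lines (untested):

SELECT array_to_json(
  (SELECT ARRAY_AGG(foo) AS foos
   FROM (SELECT row_to_json(schemata_) FROM schemata_) AS foo)
);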

regards, tom lane




Re: [GENERAL] Is it possible to define a constraint based on the values in other rows in the current table?

2017-07-19 Thread Glen Huang


> On 20 Jul 2017, at 5:26 AM, Hannes Erven  wrote:
> 
> Hi Glen,
> 
> 
>> I'd like to enforce that in a transaction, after a couple inserts &
>> updates, a particular column has continuous values like 1, 2, 3, and
>> never any gaps. Is it possible to do?
>>
>> I gave a concrete example here:
>> https://stackoverflow.com/questions/45187113 but didn't get any answers yet.
> I've posted a comment to SO; basically what I'm saying is yes, use a 
> transaction constraint trigger to enforce your requirement.
> 
> This previous question has an example of such a trigger:
> 
> https://stackoverflow.com/a/37277421/1980653
> 
> 
> HTH & best regards
> 
>   -hannes

Using a trigger sounds like the right approach. I’ll try it out. Thanks guys.

Hannes, thanks for posting an answer to SO, I’ve accepted it. :)
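
For the archives, the rough shape of such a deferred constraint trigger (a
sketch only, assuming a table items whose pos column is unique and 1-based;
adapted from the linked pattern, untested here):

CREATE OR REPLACE FUNCTION check_positions() RETURNS trigger AS $$
BEGIN
  -- gap-free and 1-based means max(pos) must equal the row count
  IF (SELECT coalesce(max(pos), 0) FROM items)
     <> (SELECT count(*) FROM items) THEN
    RAISE EXCEPTION 'pos values must be continuous';
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER positions_continuous
  AFTER INSERT OR UPDATE OR DELETE ON items
  DEFERRABLE INITIALLY DEFERRED
  FOR EACH ROW EXECUTE PROCEDURE check_positions();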



[GENERAL] How to stop array_to_json from interpolating column names that weren't there

2017-07-19 Thread Guyren Howe
9.6

I’ve a view that shows information about schemas:

 SELECT schemata.catalog_name,
schemata.schema_name,
('/'::text || (schemata.schema_name)::text) AS schema_name_address
   FROM information_schema.schemata
  ORDER BY schemata.catalog_name, schemata.schema_name

Fine. I now want to turn that result set into a JSON array. I can do this:

SELECT
ARRAY_AGG(foo) AS foos
FROM (
SELECT
row_to_json(schemata_)
FROM
schemata_)
AS
foo
and I get this:

{"(\"{\"\"catalog_name\"\":\"\"ds2_development\"\",\"\"schema_name\"\":\"\"admin\"\",\"\"schema_name_address\"\":\"\"/admin\"\"}\")","(\"
…

which is great. I have an array of perfect JSON objects. Now I just need to 
turn that into a single JSON object. But when I do the obvious:

SELECT array_to_json(
SELECT
ARRAY_AGG(foo) AS foos
FROM (
SELECT
row_to_json(schemata_)
FROM
schemata_)
AS
foo
)

I get a syntax error. And when I do:

SELECT TO_JSON(foos) FROM (
SELECT
ARRAY_AGG(foo) AS foos
FROM (
SELECT
row_to_json(schemata_)
FROM
schemata_)
AS
foo
) AS bar

Postgres tries to be helpful by interpolating a column name I don’t want (here, 
row_to_json):

[{"row_to_json":{"catalog_name":"ds2_development","schema_name":"admin","schema_name_address":"/admin"}},{"row_to_json":{"catalog_name":"ds2_development","schema_name":"anon","schema_name_address":"/anon"}},{"row_to_json":
…

I could fix this in PLV8 a tad inefficiently, but I feel like I ought to be 
able to do this in pure SQL and there’s some heuristic or Yet Another Weird 
Corner Case I’m as yet unaware of.

So: can I do this in SQL? How?


Re: [GENERAL] Support for \u0000?

2017-07-19 Thread Tom Lane
Matthew Byrne  writes:
> Would a more feasible approach be to introduce new types (say, TEXT2 and
> JSONB2 - or something better-sounding) which are the same as the old ones
> but add support for \u0000 and UTF code point 0?  This would isolate
> nul-containing byte arrays to the implementations of those types and keep
> backward compatibility by leaving TEXT and JSONB alone.

The problem is not inside those datatypes; either text or jsonb could
trivially store \0 bytes.  The problem is passing such values through
APIs that don't support it.  Changing those APIs would affect *all*
datatypes.
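
For illustration, the same restriction is visible on the text side (error
message quoted from memory; the exact wording may vary by version):

select chr(0);
ERROR:  null character not permitted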

regards, tom lane




Re: [GENERAL] UPDATE column without FK fires other FK triggers constraint check

2017-07-19 Thread Tom Lane
Luca Looz  writes:
> Thanks for the explanation!
> Can these checks be implemented, or is the needed data not there, so that
> adding it would only add overhead for the majority of use cases?

It's hard to see how to do much better than we're doing without storing
more data on-disk than is there now.  Whether that would be a good
tradeoff is dubious.

regards, tom lane




Re: [GENERAL] Why would log_lock_waits affect a query plan?

2017-07-19 Thread Evan Martin

On 19/07/2017 11:52 PM, Tom Lane wrote:

> Evan Martin  writes:
>> I have an application that imports a lot of data and then does some
>> queries on it to build some caches in the database, all in one long
>> transaction. One of those cache updates repeatedly calls a plpgsql
>> function, which internally does some SQL queries. Sometimes this is
>> much, much slower than usual: 3-7 hours instead of 12-15 minutes. It was
>> totally reproducible when it happened, though (running on the same
>> machine, same input data).
>>
>> It turns out that the problem only happens when the "log_lock_waits"
>> setting was OFF! Many machines had it ON (to troubleshoot a different
>> problem), so they never experienced it.
>>
>> I eventually tracked it down to the query plan chosen for one particular
>> query in the plpgsql function: using a Nested Loop makes it fast and
>> using a Hash Join makes it very slow. Running an ANALYZE on one of the
>> tables involved fixes the problem - the fast query plan is chosen all
>> the time. This itself is a bit strange, because I was already running
>> ANALYZE on all tables after the data import - it seems that I needed to
>> run it a second time?
>
> Are you using the problematic function earlier in the process?  I wonder
> if it's cached a bad plan that dates from when there was much less data
> in the table.  I also wonder if maybe the specific table is being updated
> after the ANALYZEs.  In the situation you've got here, you can't expect
> any help from auto-analyze; only your own manual ANALYZEs are going to be
> able to see the uncommitted data in the tables.

No, that function is not used earlier and the table is not updated after
the ANALYZE. I've also tried running DISCARD PLANS right after the
ANALYZE, which made no difference.

All the data import is done first, then ANALYZE, then the cache updates.

>> But what I'd really like to understand is: why did
>> setting log_lock_waits to ON always change the query plan to use a
>> Nested Loop? It's just not something I'd ever expect to affect a query plan.
>
> TBH, I don't believe it.  There are a lot of moving parts here, but
> I don't see how that could be relevant.

I agree, it's totally bizarre, which is why I'm posting here, but I've
tried it many times now and the results are very clear (at least on my
machine, yet to confirm on others). I change nothing except
log_lock_waits, re-import the same data and see different query plans
and very different speeds for that function. (For testing purposes I
don't commit the transaction at the end and just start with a clean DB
each time.)





Re: [GENERAL] Why would log_lock_waits affect a query plan?

2017-07-19 Thread Tom Lane
Evan Martin  writes:
> I have an application that imports a lot of data and then does some 
> queries on it to build some caches in the database, all in one long 
> transaction. One of those cache updates repeatedly calls a plpgsql 
> function, which internally does some SQL queries. Sometimes this is 
> much, much slower than usual: 3-7 hours instead of 12-15 minutes. It was 
> totally reproducible when it happened, though (running on the same 
> machine, same input data).

> It turns out that the problem only happens when the "log_lock_waits" 
> setting was OFF! Many machines had it ON (to troubleshoot a different 
> problem), so they never experienced it.

> I eventually tracked it down to the query plan chosen for one particular 
> query in the plpgsql function: using a Nested Loop makes it fast and 
> using a Hash Join makes it very slow. Running an ANALYZE on one of the 
> tables involved fixes the problem - the fast query plan is chosen all 
> the time. This itself is a bit strange, because I was already running 
> ANALYZE on all tables after the data import - it seems that I needed to 
> run it a second time?

Are you using the problematic function earlier in the process?  I wonder
if it's cached a bad plan that dates from when there was much less data
in the table.  I also wonder if maybe the specific table is being updated
after the ANALYZEs.  In the situation you've got here, you can't expect
any help from auto-analyze; only your own manual ANALYZEs are going to be
able to see the uncommitted data in the tables.

> But what I'd really like to understand is: why did 
> setting log_lock_waits to ON always change the query plan to use a 
> Nested Loop? It's just not something I'd ever expect to affect a query plan.

TBH, I don't believe it.  There are a lot of moving parts here, but
I don't see how that could be relevant.

regards, tom lane




[GENERAL] Why would log_lock_waits affect a query plan?

2017-07-19 Thread Evan Martin
I have an application that imports a lot of data and then does some 
queries on it to build some caches in the database, all in one long 
transaction. One of those cache updates repeatedly calls a plpgsql 
function, which internally does some SQL queries. Sometimes this is 
much, much slower than usual: 3-7 hours instead of 12-15 minutes. It was 
totally reproducible when it happened, though (running on the same 
machine, same input data).


It turns out that the problem only happens when the "log_lock_waits" 
setting was OFF! Many machines had it ON (to troubleshoot a different 
problem), so they never experienced it.


I eventually tracked it down to the query plan chosen for one particular 
query in the plpgsql function: using a Nested Loop makes it fast and 
using a Hash Join makes it very slow. Running an ANALYZE on one of the 
tables involved fixes the problem - the fast query plan is chosen all 
the time. This itself is a bit strange, because I was already running 
ANALYZE on all tables after the data import - it seems that I needed to 
run it a second time? But what I'd really like to understand is: why did 
setting log_lock_waits to ON always change the query plan to use a 
Nested Loop? It's just not something I'd ever expect to affect a query plan.


By the way, I also found that the problem does not occur if I commit 
before the cache updates. This was with PostgreSQL 9.6.3 running on 
Windows x64, if that matters.






Re: [GENERAL] ALTER COLUMN, trading a DOMAIN for its base datatype

2017-07-19 Thread Tom Lane
C GG  writes:
> ... Is PostgreSQL smart enough to not have to rewrite the table and simply
> shed the domain for the underlying datatype?

Yes, in recent versions ... don't remember how far back exactly.
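
One way to check on any given version, as a sketch: compare the table's
relfilenode before and after the ALTER; if it is unchanged, no rewrite
happened.

SELECT relfilenode FROM pg_class WHERE relname = 'foo';
ALTER TABLE foo ALTER COLUMN my_uuid TYPE uuid;
SELECT relfilenode FROM pg_class WHERE relname = 'foo';  -- same value => no rewrite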

regards, tom lane




[GENERAL] ALTER COLUMN, trading a DOMAIN for its base datatype

2017-07-19 Thread C GG
I'm (still) trying to transition out from under the uniqueidentifier
contrib module to the native uuid datatype. I seem to be able to create a
domain for uniqueidentifier:

CREATE DOMAIN public.uniqueidentifier AS uuid;

and then use pg_restore to reload my data without the contrib module.

I'd like for my final step to be to

ALTER TABLE foo ALTER COLUMN my_uuid TYPE uuid;

... Is PostgreSQL smart enough to not have to rewrite the table and simply
shed the domain for the underlying datatype?


Re: [GENERAL] ~/.psqlrc file is ignored

2017-07-19 Thread David G. Johnston
On Wed, Jul 19, 2017 at 12:12 PM, vstuart  wrote:

> Hi David: I see what you are saying; sorry for the confusion. This is how
> postgres operates on my system:
>

None of that is surprising or unique.  If you ask specific questions I'd
be willing to answer them but I'm not going to explain everything that is
going on line-by-line.

David J.


Re: [GENERAL] ~/.psqlrc file is ignored

2017-07-19 Thread vstuart
Hi David: I see what you are saying; sorry for the confusion. This is how 
postgres operates on my system:

[victoria@victoria ~]$ echo $HOME
  /home/victoria

[victoria@victoria ~]$ which postgres
  /usr/bin/postgres

[victoria@victoria ~]$ postgres
  postgres does not know where to find the server configuration file.
  You must specify the --config-file or -D invocation option or set the PGDATA 
environment variable.

[victoria@victoria ~]$ psql
  psql: FATAL:  database "victoria" does not exist

[victoria@victoria ~]$ sudo -u postgres -i

[postgres@victoria ~]$ echo $HOME
  /var/lib/postgres

[postgres@victoria ~]$ psql
  psql (9.6.3)
  Type "help" for help.

postgres=# \q

[postgres@victoria ~]$ exit
  logout

[victoria@victoria ~]$

... That postgres has a different $HOME than mine was not apparent to me when
I posted this question.

> What does "your" ~/.psqlrc have to do with any of this?  If you are
> executing psql while masquerading as the postgres user, it's the postgres
> user's ~/.psqlrc file that will be looked for.

> Typically one doesn't execute psql as the postgres user.  They configure
> the system so that they can execute psql as a regular user.  You should
> probably do that and then everything should work like you are thinking.





Re: [GENERAL] ~/.psqlrc file is ignored

2017-07-19 Thread David G. Johnston
On Wed, Jul 19, 2017 at 11:49 AM, vstuart  wrote:

> As a simple solution, I can sudo symlink MY ~/.psqlrc to that directory
> (/var/...; changing ownership also to postgres), but there appears to be
> some underlying issue, as Pg should find ~/.psqlrc, correct?
>

What does "your" ~/.psqlrc have to do with any of this?  If you are
executing psql while masquerading as the postgres user, it's the postgres
user's ~/.psqlrc file that will be looked for.

Typically one doesn't execute psql as the postgres user.  They configure
the system so that they can execute psql as a regular user.  You should
probably do that and then everything should work like you are thinking.
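
Concretely, that is usually a one-time setup done as the postgres superuser
(a sketch, assuming the OS user is victoria):

CREATE ROLE victoria LOGIN;
CREATE DATABASE victoria OWNER victoria;

After that, a plain "psql" run as victoria connects to the victoria database
and reads /home/victoria/.psqlrc.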

David J.


Re: [GENERAL] ~/.psqlrc file is ignored

2017-07-19 Thread vstuart
Hi Thom: thank you for your reply; I can't figure this one out!  Per your reply:

[victoria@victoria ~]$ pg
  [sudo -u postgres -i]
  [sudo] password for victoria:

[postgres@victoria ~]$ psql -a
  psql (9.6.3)
  Type "help" for help.

postgres=# \q

[postgres@victoria ~]$ psql -af ~/.psqlrc
  /var/lib/postgres/.psqlrc: No such file or directory

[postgres@victoria ~]$ exit
  logout

[victoria@victoria ~]$

So, it appears that postgres is looking for the .psqlrc file there (which does
not exist) and ignoring my ~/.psqlrc file, right?

Edit: from that, it appears that postgres thinks that my home directory is 
/var/lib/postgres/.  ... Confirmed:

[postgres@victoria ~]$ echo $HOME
  /var/lib/postgres

As a simple solution, I can sudo symlink MY ~/.psqlrc to that directory 
(/var/...; changing ownership also to postgres), but there appears to be some 
underlying issue, as Pg should find ~/.psqlrc, correct?

FYI, I installed postgresql (Arch Linux) using the "pacman" package manager; 
the only significant change I made is to define a custom data directory, on my 
home partition.


From: "Thom Brown-2 [via PostgreSQL]" 
Subject: Re: ~/.psqlrc file is ignored
Date: Wed, 19 Jul 2017 11:31:49 -0700 (MST)

Do you get anything with "psql -a"?

If not, what do you get when you use "psql -af ~/.psqlrc" ?

Thom





Re: [GENERAL] UPDATE column without FK fires other FK triggers constraint check

2017-07-19 Thread Tom Lane
Luca Looz  writes:
> After some tests, it seems this happens when the same row is covered by
> more than one update in the same transaction, even without any change.
> Is this expected behavior? Why does it happen?

Yes, see comment in RI_FKey_fk_upd_check_required:

 * If the original row was inserted by our own transaction, we
 * must fire the trigger whether or not the keys are equal.  This
 * is because our UPDATE will invalidate the INSERT so that the
 * INSERT RI trigger will not do anything; so we had better do the
 * UPDATE check.  (We could skip this if we knew the INSERT
 * trigger already fired, but there is no easy way to know that.)

Although this is talking about the BEGIN; INSERT; UPDATE; COMMIT case,
the code has no way to tell that apart from BEGIN; UPDATE; UPDATE; COMMIT.
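
A minimal way to see it (a sketch, assuming fk_table references pk_table and
has a non-key column note):

BEGIN;
UPDATE fk_table SET note = 'a' WHERE id = 1;  -- first update of the row
UPDATE fk_table SET note = 'b' WHERE id = 1;  -- same row again: RI check fires,
COMMIT;                                       -- though the FK column never changed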

regards, tom lane




Re: [GENERAL] ~/.psqlrc file is ignored

2017-07-19 Thread Thom Brown
On 18 July 2017 at 19:02, vstuart  wrote:
> My ~/.psqlrc file is ignored by my PostgreSQL installation (v.9.6.3; Arch
> Linux x86_64 platform).
>
> Suggestions?

Do you get anything with "psql -a"?

If not, what do you get when you use "psql -af ~/.psqlrc" ?

Thom




Re: [GENERAL] Two-phase commit case studies

2017-07-19 Thread Jerry Sievers
Gavin Wahl  writes:

> Hello,
>
> I've read the documentation for two-phase commit and it sounds interesting, but
> I'm having trouble thinking of how to actually put it into production. How are
> people using it? Do you use an XA transaction manager or something custom built?
> How does the reconciliation work when a component crashes between PREPARE and
> COMMIT? Is anyone using it in the open-source world where transaction managers
> are unheard of?
>
> Looking forward to hearing about how two-phase commit has helped you.

I use it for holding an advisory lock for the duration of any of our several
environment-mangling frameworks.

We have fairly complex 1-touch provisioning, decom,
upgrade... rename/relocate/snapshot frameworks.

The lock is used to cooperatively enforce that no one else launches any of
those systems concurrently, due to some interesting potential conflicts.
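
A rough sketch of that pattern (my paraphrase, untested; lock key 42 is
arbitrary, and this relies on a prepared transaction continuing to hold its
transaction-level advisory lock):

BEGIN;
SELECT pg_advisory_xact_lock(42);       -- blocks until no other run holds it
PREPARE TRANSACTION 'env_maintenance';  -- the lock now outlives this session

-- ... run the provisioning/decom/upgrade framework ...

COMMIT PREPARED 'env_maintenance';      -- releases the lock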

>
> Thanks

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800




[GENERAL] Planner statistics usage for composite type

2017-07-19 Thread dilaz03 .
Hello.

I am trying to find workaround for cross-column statistics. For example, I
have tags with similarity:
select version();
                            version
---------------------------------------------------------------
 PostgreSQL 10beta2 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit

CREATE TABLE tags(
   t varchar(30) NOT NULL,
   s double precision
);
INSERT INTO tags (SELECT 'tag1'::text as t, 0.7 as s from
generate_series(0, 10000));

I think I can create an index for cross-column statistics:
CREATE TYPE tag_sim AS (
   t varchar(30),
   s double precision
);
CREATE INDEX tags_composite ON tags USING btree ((ROW(t, s)::tag_sim));
ANALYZE tags;
SELECT * FROM pg_stats WHERE tablename = 'tags_composite';
-[ RECORD 1 ]----------+---------------
schemaname             | public
tablename              | tags_composite
attname                | row
inherited              | f
null_frac              | 0
avg_width              | 40
n_distinct             | 1
most_common_vals       | {"(tag1,0.7)"}
most_common_freqs      | {1}
histogram_bounds       | (null)
correlation            | 1
most_common_elems      | (null)
most_common_elem_freqs | (null)
elem_count_histogram   | (null)

OK, I have statistics. Search:
EXPLAIN SELECT * FROM tags
WHERE ROW(t, s)::tag_sim >= ROW('tag1', 0.9)::tag_sim
  AND ROW(t, s)::tag_sim <= ROW('tag1', 1.0)::tag_sim;

                                  QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using tags_composite on tags  (cost=0.29..9.29 rows=50 width=13)
   Index Cond: ((ROW(t, s)::tag_sim >= ROW('tag1', 0.9)) AND (ROW(t, s)::tag_sim <= ROW('tag1', 1.0)))
(2 rows)

Wrong estimate. The planner doesn't use the statistics. In the code I see
usage of the function scalargtsel, which returns a default selectivity
because ROW('tag1', 0.9)::tag_sim is not a Const.

Maybe someone knows how to fix this issue?

Thanks.


Re: [GENERAL] Support for \u0000?

2017-07-19 Thread Tom Lane
Matthew Byrne  writes:
> Are there any plans to support \u in JSONB and, relatedly, UTF code
> point 0 in TEXT?

No.  It's basically never going to happen because of the widespread use
of C strings (nul-terminated strings) inside the backend.  Making \0 a
legal member of strings would break all those internal APIs, requiring
touching far more code than anyone would want to do.  It'd likely break
a great deal of client-side code as well.

regards, tom lane




[GENERAL] ~/.psqlrc file is ignored

2017-07-19 Thread vstuart
My ~/.psqlrc file is ignored by my PostgreSQL installation (v.9.6.3; Arch
Linux x86_64 platform). 

Suggestions?





Re: [GENERAL] Manage slot in logical/pglogical replication

2017-07-19 Thread dpat
Yes, I have estimated pg_xlog, but the server sometimes writes a lot of WAL
files. Is there a robust alternative for replicating a partial database in
PostgreSQL over a WAN? Or, what's the best way to realign a pglogical
replica - pg_dump/restore? I need to find the best way to replicate only
some tables over the internet to a remote database.
Thanks





Re: [GENERAL] log_filename

2017-07-19 Thread basti
I want to create a generic own_postgres.conf and include that in
postgresql.conf.

I asked a bit wrong: is there a variable to include the postgres version in
log_filename?
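
What I have in mind, as a sketch (as far as I can tell, log_filename only
understands strftime-style %-escapes, so the version string itself would have
to be written into the generated file per cluster):

# in postgresql.conf
include 'own_postgres.conf'

# in own_postgres.conf, generated for each major version
log_filename = 'postgresql-9.5-%Y-%m-%d_%H%M%S.log'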

Best regards
basti

Am 18.07.2017 um 10:40 schrieb Achilleas Mantzios:
> On 18/07/2017 11:36, Walter Nordmann wrote:
>> Maybe I'm blind: what did you really do?  Changed postgresql.conf?
>>
> He hardcoded it in the postgresql.conf .
>> regards
>> walter
>>
>>
>> Am 18.07.2017 um 10:31 schrieb Ahtesham Karajgi:
>>> Add the version in the log_filename itself.
>>>
>>> Below is the test case.
>>>
>>> naveed=# show log_filename ;
>>>           log_filename
>>> --------------------------------
>>>  postgresql-%Y-%m-%d_%H%M%S.log
>>> (1 row)
>>>
>>> naveed=# select pg_reload_conf();
>>>  pg_reload_conf
>>> ----------------
>>>  t
>>> (1 row)
>>>
>>> naveed=# show log_filename ;
>>>             log_filename
>>> -------------------------------------
>>>  postgresql-9.5-%Y-%m-%d_%H%M%S.log
>>> (1 row)
>>>
>>> test=# \q
>>> [postgres@localhost ~]$
>>> [postgres@localhost ~]$ ls -lrth /usr/local/pgsql/data/pg_log | tail -2
>>> -rw---. 1 postgres postgres 3.3K Jul 18 01:25
>>> postgresql-2017-07-18_00.log
>>> -rw---. 1 postgres postgres 4.1K Jul 18 01:27
>>> postgresql-9.5-2017-07-18_012530.log
>>> [postgres@localhost ~]$
>>>
>>
>>
> 




[GENERAL] UPDATE column without FK fires other FK triggers constraint check

2017-07-19 Thread Luca Looz
I was analyzing an update function and I noticed "Trigger for
constraint" entries for foreign keys that I wasn't using or referencing.
After some tests, it seems this happens when the same row is covered by
more than one update in the same transaction, even without any change.

Here is a dbfiddle example:
http://dbfiddle.uk/?rdbms=postgres_9.6=368289a7338031b8a7b7a90f0fd25d7c

Is this expected behavior? Why does it happen?

I initially asked this on dba.stackexchange:
https://dba.stackexchange.com/questions/180012/


[GENERAL] Support for \u0000?

2017-07-19 Thread Matthew Byrne
Are there any plans to support \u0000 in JSONB and, relatedly, UTF code
point 0 in TEXT?  To the best of my knowledge \u0000 is valid in JSON and
code point 0 is valid in UTF-8, but Postgres rejects both, which severely
limits its usefulness in many cases.

I am currently working around the issue by using the JSON type, which
allows \u0000 to be stored, but this is far from ideal because it can't be
cast to TEXT or JSONB and can't even be accessed:

mydb=# select '{"thing":"\u0000"}'::json->>'thing';
ERROR:  unsupported Unicode escape sequence
DETAIL:  \u0000 cannot be converted to text.
CONTEXT:  JSON data, line 1: {"thing":...

Regards,

Matt


Re: [GENERAL] Two-phase commit case studies

2017-07-19 Thread Michael Paquier
On Wed, Jul 19, 2017 at 7:10 PM, Gavin Wahl  wrote:
> I've read the documentation for two-phase commit and it sounds interesting, but
> I'm having trouble thinking of how to actually put it into production. How are
> people using it? Do you use an XA transaction manager or something custom built?
> How does the reconciliation work when a component crashes between PREPARE and
> COMMIT? Is anyone using it in the open-source world where transaction managers
> are unheard of?
>
> Looking forward to hearing about how two-phase commit has helped you.

One direct use of 2PC, the one it was originally designed for, is to ensure
the consistency of a transaction commit across multiple servers. One thing
that uses 2PC extensively is, for example, Postgres-XL (formerly Postgres-XC,
which introduced the concept): it commits a transaction across nodes when
the transaction involves writes to multiple nodes.

Postgres JDBC has XA support by the way:
https://jdbc.postgresql.org/documentation/faq.html#xa-support
-- 
Michael




[GENERAL] Two-phase commit case studies

2017-07-19 Thread Gavin Wahl
Hello,

I've read the documentation for two-phase commit and it sounds interesting, but
I'm having trouble thinking of how to actually put it into production. How are
people using it? Do you use an XA transaction manager or something custom built?
How does the reconciliation work when a component crashes between PREPARE and
COMMIT? Is anyone using it in the open-source world where transaction managers
are unheard of?

Looking forward to hearing about how two-phase commit has helped you.

Thanks




Re: [GENERAL] Planner statistics usage for composite type

2017-07-19 Thread Tom Lane
Stephen Frost  writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> Yeah, that's because eval_const_expressions doesn't know how to fold
>> a constant RowExpr to a simple Const.  I have a patch laying about
>> someplace to improve that, but I keep forgetting about it until
>> we're in beta phase :-(.  In the meantime you could perhaps do the
>> folding by hand:

> Put it into the CF..?

Actually, I did already:
https://commitfest.postgresql.org/14/1136/

regards, tom lane




Re: [GENERAL] Planner statistics usage for composite type

2017-07-19 Thread Stephen Frost
Tom,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Dmitry Lazurkin  writes:
> > I am trying to find workaround for cross-column statistics.
> > ...
> > Wrong estimate. The planner doesn't use the statistics. In the code I see
> > usage of the function scalargtsel, which returns a default selectivity
> > because ROW('tag1', 0.9)::tag_sim is not a Const.
> 
> Yeah, that's because eval_const_expressions doesn't know how to fold
> a constant RowExpr to a simple Const.  I have a patch laying about
> someplace to improve that, but I keep forgetting about it until
> we're in beta phase :-(.  In the meantime you could perhaps do the
> folding by hand:

Put it into the CF..?

Thanks!

Stephen




Re: [GENERAL] Planner statistics usage for composite type

2017-07-19 Thread Tom Lane
Dmitry Lazurkin  writes:
> I am trying to find workaround for cross-column statistics.
> ...
> Wrong estimate. The planner doesn't use the statistics. In the code I see
> usage of the function scalargtsel, which returns a default selectivity
> because ROW('tag1', 0.9)::tag_sim is not a Const.

Yeah, that's because eval_const_expressions doesn't know how to fold
a constant RowExpr to a simple Const.  I have a patch laying about
someplace to improve that, but I keep forgetting about it until
we're in beta phase :-(.  In the meantime you could perhaps do the
folding by hand:

regression=# EXPLAIN ANALYZE SELECT * FROM tags
             WHERE ROW(t, s)::tag_sim >= '(tag1,0.9)'::tag_sim
               AND ROW(t, s)::tag_sim <= '(tag1,1.0)'::tag_sim;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using tags_composite on tags  (cost=0.29..8.30 rows=1 width=13) (actual time=0.051..0.051 rows=0 loops=1)
   Index Cond: ((ROW(t, s)::tag_sim >= '(tag1,0.9)'::tag_sim) AND (ROW(t, s)::tag_sim <= '(tag1,1)'::tag_sim))
 Planning time: 0.230 ms
 Execution time: 0.110 ms
(4 rows)

regards, tom lane




[GENERAL] Planner statistics usage for composite type

2017-07-19 Thread Dmitry Lazurkin
Hello.

I am trying to find workaround for cross-column statistics. For example,
I have tags with similarity:
select version();
                            version
---------------------------------------------------------------
 PostgreSQL 10beta2 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit

CREATE TABLE tags(
   t varchar(30) NOT NULL,
   s double precision
);
INSERT INTO tags (SELECT 'tag1'::text as t, 0.7 as s from
generate_series(0, 10000));

I think I can create an index for cross-column statistics:
CREATE TYPE tag_sim AS (
   t varchar(30),
   s double precision
);
CREATE INDEX tags_composite ON tags USING btree ((ROW(t, s)::tag_sim));
ANALYZE tags;
SELECT * FROM pg_stats WHERE tablename = 'tags_composite';
-[ RECORD 1 ]----------+---------------
schemaname             | public
tablename              | tags_composite
attname                | row
inherited              | f
null_frac              | 0
avg_width              | 40
n_distinct             | 1
most_common_vals       | {"(tag1,0.7)"}
most_common_freqs      | {1}
histogram_bounds       | (null)
correlation            | 1
most_common_elems      | (null)
most_common_elem_freqs | (null)
elem_count_histogram   | (null)

OK, I have statistics. Search:
EXPLAIN SELECT * FROM tags
WHERE ROW(t, s)::tag_sim >= ROW('tag1', 0.9)::tag_sim
  AND ROW(t, s)::tag_sim <= ROW('tag1', 1.0)::tag_sim;

                                  QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using tags_composite on tags  (cost=0.29..9.29 rows=50 width=13)
   Index Cond: ((ROW(t, s)::tag_sim >= ROW('tag1', 0.9)) AND (ROW(t, s)::tag_sim <= ROW('tag1', 1.0)))
(2 rows)

Wrong estimate. The planner doesn't use the statistics. In the code I see
usage of the function scalargtsel, which returns a default selectivity
because ROW('tag1', 0.9)::tag_sim is not a Const.

Maybe someone knows how to fix this issue?

Thanks.





Re: [GENERAL] Is it possible to define a constraint based on the values in other rows in the current table?

2017-07-19 Thread vinny

On 2017-07-19 13:37, Glen Huang wrote:

Hi,

> I'd like to enforce that in a transaction, after a couple inserts &
> updates, a particular column has continuous values like 1, 2, 3, and
> never any gaps. Is it possible to do?
>
> I gave a concrete example here:
> https://stackoverflow.com/questions/45187113 but didn't get any answers yet.
>
> Am I looking in the wrong direction? Should such a feature be
> implemented with constraints?
>
> Thanks.


If the value is only used for sorting then the exact value doesn't matter,
only that there are no duplicates: 4,5,6 orders the exact same way as
1,2,3 or 500,540,615.

You are guaranteed to get gaps anyway when you remove a record.

Personally I'd sooner create a trigger that generates a new value on INSERT,
and that (if you *really* feel a need to) can fix gaps on DELETE and UPDATE.




[GENERAL] Is it possible to define a constraint based on the values in other rows in the current table?

2017-07-19 Thread Glen Huang
Hi,

I'd like to enforce that in a transaction, after a couple inserts & updates, a 
particular column has continuous values like 1, 2, 3, and never any gaps. Is it 
possible to do?

I gave a concrete example here: https://stackoverflow.com/questions/45187113
but didn't get any answers yet.

Am I looking in the wrong direction? Should such a feature be implemented
with constraints?

Thanks.