User-defined function with anyrange[]

2018-07-05 Thread Paul A Jungwirth
Hello,

I want to make an aggregate function range_agg(anyrange) that returns
anyrange[]. But when I try to define it, Postgres tells me it doesn't
know what an anyrange[] is. I get this error:

ERROR:  type anyrange[] does not exist

I also tried taking an anyrange and returning an anyarray, which does
let me define the function, but upon calling it I get an error. For
example:

paul=# CREATE OR REPLACE FUNCTION range_agg4(anyrange)
RETURNS anyarray
AS $$
BEGIN
  RETURN ARRAY[$1];
END;
$$
LANGUAGE plpgsql;

paul=# select range_agg4(daterange('2016-05-04', '2016-07-01'));
ERROR:  invalid input syntax for type date: "[2016-05-04,2016-07-01)"
CONTEXT:  PL/pgSQL function range_agg4(anyrange) while casting
return value to function's return type

So I guess it thinks that daterange in means date[] out (not daterange[] out).

The best I can think of is to define range_agg(anyelement) that
returns anyarray. That actually works how I hope:

paul=# CREATE OR REPLACE FUNCTION range_agg3(anyelement)
RETURNS anyarray
AS $$
BEGIN
  RETURN ARRAY[$1];
END;
$$
LANGUAGE plpgsql;

paul=# select range_agg3(daterange('2016-05-04', '2016-07-01'));
         range_agg3
-----------------------------
 {"[2016-05-04,2016-07-01)"}

But of course that is not as restricted as I'd like. Are there any
better options?

Thanks,
Paul
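A concrete sketch of that anyelement workaround as an aggregate, built on the classic array_append-based definition from the PostgreSQL docs (names here are illustrative):

```sql
-- Sketch: an aggregate using the anyelement/anyarray trick above.
-- Like range_agg3, it accepts any element type, not just ranges.
CREATE AGGREGATE range_agg(anyelement) (
    sfunc    = array_append,   -- built-in: (anyarray, anyelement) -> anyarray
    stype    = anyarray,
    initcond = '{}'
);

-- SELECT range_agg(r)
--   FROM (VALUES (daterange('2016-05-04','2016-07-01')),
--                (daterange('2016-08-01','2016-09-01'))) AS t(r);
-- should yield {"[2016-05-04,2016-07-01)","[2016-08-01,2016-09-01)"}
```

(Much later, PostgreSQL 14 added a built-in range_agg that returns a multirange, which addresses the original question directly.)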



Re: FK v.s unique indexes

2018-07-05 Thread David G. Johnston
On Thu, Jul 5, 2018 at 2:36 PM, Rafal Pietrak  wrote:

>
>
> W dniu 05.07.2018 o 23:04, David G. Johnston pisze:
> > On Thu, Jul 5, 2018 at 1:45 PM, Rafal Pietrak  > >wrote:
> >
> > I was thinking, that when "add constraint" cannot choose appropriate
> > index, may be some explicit help (like ... using ;)
> would be
> > due.
> >
> >
> > ​Basically all the FK trigger does is:
> >
> > SELECT EXISTS(SELECT 1 FROM pk_table WHERE pk_col1 = val1 AND pk_col2 =
> > val2)
> > ​
> > And fails if query returns false.  The planner is still free to use the
> > index or not to execute the query just as if you had written it by
> > hand.  For a small table fully in memory it likely would prefer a
> > sequential scan and it would be perfectly within its rights to do so.
> >
>
> I must assume that the above "WHERE pk_col1 = val1..." is not the same
> place as the one you've mentioned earlier, where "FKs don't have WHERE
> clauses".
>

​The FK definition doesn't have a WHERE clause so the only (and all)
columns used in the trigger are those defined by the constraint itself.
So, if a partial index for the above was:

UNIQUE INDEX (pk_col1, pk_col2) WHERE pktblcol3 = false;

and

FK (col1, col2) REFERENCES pk_table (pk_col1, pk_col2)​

​There is no place on the FK to reference "pktblcol3​" so that the
effective trigger query would become:

WHERE pk_col1 = val2 AND pk_col2 = val2 AND pktblcol3 = ???

And without pktblcol3 more than one row could be returned (so, not really
EXISTS...)


>
> Thus, the bulk of code that maintains FK consistency "does not use WHERE
> clauses" and would break on partial indexes. But the above SELECT,
> with a partial index, will never fail. So it is not the
> blocker... on the other hand, I would rather rewrite it along the lines of:
> SELECT 1 = (SELECT count(1) FROM pk_table WHERE pk_col1 = val1 AND
> pk_col2 = val2)
>

Yeah, that's closer to reality.

>
> Naturally, if the planner chooses to do a seq scan, everything would
> break down - inappropriate rows would get hit; but that's a different
> story:
> 1. one may think, doing a "SELECT * FROM pk_table WHERE... INTO current"
> and making all constraint trigger functions use that "current" instead
> of making more lookups, could solve the problem.
> 2. or with some syntax help during FK creation (aka: ... add constraint
> ... using ) one could force the planner to always use
> indicated index.
>

This comes back to the fact that constraints don't directly target indexes,
even though indexes are used in the implementation (doing so would be a
violation of scope); constraints target other constraints at a logical
level.  Whether or not the code is written exactly like that, absent
evidence to the contrary it provides a sound mental model to operate from.

I think we might get away from this restriction with INSERT ... ON CONFLICT,
but I'm not that well versed nor have time to look into it right now.

David J.
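The restriction is easy to reproduce; a sketch with hypothetical names matching the discussion:

```sql
-- A partial unique index cannot serve as an FK target (sketch):
CREATE TABLE pk_table (pk_col1 int, pk_col2 int, pktblcol3 boolean);
CREATE UNIQUE INDEX ON pk_table (pk_col1, pk_col2) WHERE pktblcol3 = false;

CREATE TABLE fk_table (
    col1 int,
    col2 int,
    FOREIGN KEY (col1, col2) REFERENCES pk_table (pk_col1, pk_col2)
);
-- ERROR:  there is no unique constraint matching given keys
--         for referenced table "pk_table"
```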
​


Re: FK v.s unique indexes

2018-07-05 Thread Rafal Pietrak



W dniu 05.07.2018 o 23:04, David G. Johnston pisze:
> On Thu, Jul 5, 2018 at 1:45 PM, Rafal Pietrak  >wrote:
> 
> I was thinking, that when "add constraint" cannot choose appropriate
> index, may be some explicit help (like ... using ;) would be
> due.
> 
> 
> ​Basically all the FK trigger does is:
> 
> SELECT EXISTS(SELECT 1 FROM pk_table WHERE pk_col1 = val1 AND pk_col2 =
> val2)
> ​
> And fails if query returns false.  The planner is still free to use the
> index or not to execute the query just as if you had written it by
> hand.  For a small table fully in memory it likely would prefer a
> sequential scan and it would be perfectly within its rights to do so.
> 

I must assume that the above "WHERE pk_col1 = val1..." is not the same
place as the one you've mentioned earlier, where "FKs don't have WHERE
clauses".

Thus, the bulk of code that maintains FK consistency "does not use WHERE
clauses" and would break on partial indexes. But the above SELECT,
with a partial index, will never fail. So it is not the
blocker... on the other hand, I would rather rewrite it along the lines of:
SELECT 1 = (SELECT count(1) FROM pk_table WHERE pk_col1 = val1 AND
pk_col2 = val2)

Naturally, if the planner chooses to do a seq scan, everything would
break down - inappropriate rows would get hit; but that's a different story:
1. one may think, doing a "SELECT * FROM pk_table WHERE... INTO current"
and making all constraint trigger functions use that "current" instead
of making more lookups, could solve the problem.
2. or with some syntax help during FK creation (aka: ... add constraint
... using ) one could force the planner to always use
indicated index.

Still, whichever way to go, it is well beyond my level.

-R



Re: FK v.s unique indexes

2018-07-05 Thread David G. Johnston
On Thu, Jul 5, 2018 at 1:45 PM, Rafal Pietrak  wrote:

> I was thinking, that when "add constraint" cannot choose appropriate
> index, may be some explicit help (like ... using ;) would be
> due.
>

​Basically all the FK trigger does is:

SELECT EXISTS(SELECT 1 FROM pk_table WHERE pk_col1 = val1 AND pk_col2 =
val2)
​
And fails if query returns false.  The planner is still free to use the
index or not to execute the query just as if you had written it by hand.
For a small table fully in memory it likely would prefer a sequential scan
and it would be perfectly within its rights to do so.

David J.


Re: FK v.s unique indexes

2018-07-05 Thread Rafal Pietrak



W dniu 05.07.2018 o 10:11, Rob Sargent pisze:
[---]
>>
>> Pls consider in real life: load (a person) can have either a (a kind of
>> breast cancer) or b (a kind of prostate cancer) - this is only a cooked
>> example attempting to illustrate that one may need to put additional
>> constraints on the entire dataset.
>>
>  
> It’s difficult enough to define a unique person (without mother and father)
> and certainly this week's definition of burden is not likely to help matters.
> If your main worry is data consistency you might be better off normalizing
> your structure - either with separate tables per cancer type (person id,
> cancer specifics; unique on person) or in a single table one per line (person
> id, cancer type, cancer description; unique on person). You can reconstitute
> person,breast,prostate from either of those.  We won’t quibble on one person
> having both (though remotely possible, men do get breast cancer).
> 

No, no.

This was just a cooked example; my reality is messages between parties,
whose identity is stored in columns A and B, with column C keeping a flag
indicating who was the sender. I've just used the gender example to limit
the discussion of whether A can be linked to something, or if B cannot
be linked to something else. It cannot; the constraints I described are
as real as prostate cancer never happening to a woman. I tried to
eliminate from the discussion all but the mathematical relations between
values of those columns. Which are:

(load,a,b,c) is unique
(load,a,true) is unique
(load,b,false) is unique

Nothing else matters.

And I need FK to such dataset. Partial.

There is one table with data only referencing (load,a,true)... and I'd
prefer that table NOT NEED to have the spurious column (c) containing
the value "true" for every row in that table.

Same goes for (load,b,false).

Now it occurred to me, I can do:
create table (load text, a int, b int, c int);
instead of...
create table (load text, a int, b int, c bool);

With the new layout I'll just do:
insert (load,a,b,c) values (, , , ); for the cases of
"true", and...
insert (load,a,b,c) values (, , , ); for other cases

(load,c) will be unique over the entire dataset.

Now I can FK to (a,c) from wherever I want. A computed column (d bool) =
(a = c) could be helpful, but I don't think it'll be indispensable.

Sorry to mislead you.

-R
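A sketch of that layout with hypothetical table names; c holds a copy of a (for the old "true" case) or of b (otherwise), so (load, c) is unique over the entire dataset and is a legal FK target:

```sql
CREATE TABLE messages (
    load text,
    a    int,
    b    int,
    c    int,          -- copy of a for the "true" case, copy of b otherwise
    UNIQUE (load, a, b, c),
    UNIQUE (load, c)   -- total (non-partial) uniqueness, so FKs can target it
);

-- A table that previously referenced (load, a, true):
CREATE TABLE sender_refs (
    load   text,
    sender int,
    FOREIGN KEY (load, sender) REFERENCES messages (load, c)
);
```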



Re: FK v.s unique indexes

2018-07-05 Thread Rafal Pietrak



W dniu 05.07.2018 o 15:18, David G. Johnston pisze:
> On Thursday, July 5, 2018, Rafal Pietrak  > wrote:
> 
> W dniu 04.07.2018 o 00:55, David G. Johnston pisze:
> > On Tuesday, July 3, 2018, Rafal Pietrak  
> > >> wrote:
> >
> >
> >     ERROR:  there is no unique constraint matching given keys for
> referenced
> >     table "test2"
> >     
> >
> >     I cannot see any reasons why this functionality is blocked.
> >
> >     In particular, contrary to what the ERROR says, the target
> table *does
> >     have* a "unique constraint matching given keys", admittedly only
> >     partial.
> >
> >
> > You are making the common error of confusing the distinct concepts of
> > constraints and indexes.  Table constraints cannot be partial by
> > definition, and are a logical concept constraining the data model.
> 
> Hmmm..
> 
> This does not match "my reality". Naturally I may be wrong, but the
> example I've posted reflects my actual data I'm putting into the RDBMS.
> That is:
> 1. the data has unique constraint on (load,a,b,c)
> 2. and the data have additional unique constraints on (load,a), provided
> c is true, and (load,b) whenever c is false.
> 
> 
> I probably generalized too much, especially since exclusion constraints
> can be partial, so I'll be specific here.  Foreign Keys don't have where
> clauses and thus cannot target partial constraints.  If you want to
> overcome that limitation you can write a custom trigger.

Actually, it looks like I'm a really, really slow learner :(

Only now did I realize how I should code the scenario in question. For
those interested, I'm currently implementing it like this: instead of
the bool column "C", I'm putting there a column which will keep a *copy* of
the value from column A or B, depending on the "traditional value"
(true/false) of the earlier column C. Now I can have a plain ordinary
unique index over (load,C), and have it as an FK target for other tables.
Win-win.

I hadn't realized it for years (this is how long the design has stayed
with me to this day).

> 
> I'm sure at least some of this is simply due to desirability as opposed
> to some fundamental limitation, but in the end that is how the system
> works today.  Inter-table FK relationships are defined over the entire
> PK table, not a subset.

In my simplistic view of postgresql internals, I was thinking that the
engine (e.g. the set of internal triggers maintaining all FK consistency
as laid down by the application schema), having a row of data (freshly
inserted or updated), just looks up an index it has associated with that
particular FK, and uses it to see if the other end "is compliant", or
"has to change, too", or whatever else.

So I was thinking that when an FK has an index to use, it shouldn't
matter whether it's complete or partial.

I was thinking, that when "add constraint" cannot choose appropriate
index, may be some explicit help (like ... using ;) would be
due.

But from the above explanation I fear that there is significantly more
to the full picture than I thought. I only kind of hoped those
"fundamental limitations" would be something I could grasp.

Anyway, although indirectly, this thread brought me a solution. This is
good.

thanks,

-R




Re: How to watch for schema changes

2018-07-05 Thread David G. Johnston
On Thu, Jul 5, 2018 at 10:07 AM, Igor Korot  wrote:

> On Thu, Jul 5, 2018 at 11:49 AM, David G. Johnston
>  wrote:
>
> > You can, and depending on how often you intend to execute said code, it
> is
> > probably the better way.  It also requires pl/pgsql while CREATE OR
> REPLACE
> > "just works" as a single SQL command.  It seems easier to give you the
> > simple answer than to work out the details for the more complex one.
>
> Is it the same from the DB server POV? Meaning it is also the same 1/2
> hit depending on the existence? Also performance-wise, are querying and
> this method the same?
>

If you care about performance here I'd suggest you measure it.  The absence
of catalog bloat is the more meaningful benefit IMO.

If the function doesn't exist CREATE FUNCTION likely wins hands-down.  If
the function name/signature does exist you are starting to compare apples
and oranges.

David J.


Re: How to watch for schema changes

2018-07-05 Thread Adrian Klaver

On 07/05/2018 08:40 AM, Igor Korot wrote:

Hi, David,

On Tue, Jul 3, 2018 at 1:46 PM, David G. Johnston
 wrote:

On Tue, Jul 3, 2018 at 11:41 AM, Igor Korot  wrote:



I presume there is a query which checks for the function/trigger
existence? Something like:

IF NOT EXIST(SELECT * FROM ) CREATE OR REPLACE FUNCTION;



CREATE OR REPLACE is how you re-create a function whose
name/signature might already exist; plain CREATE assumes one doesn't
exist.


Why do I need to re-create a function with exactly the same name and body


If you use CREATE OR REPLACE FUNCTION it will do just that each time you 
call it. Seems like overkill to me. See below for another way.



Can't I just check if such function exists?


Something like this:

SELECT
count(*)
FROM
pg_proc AS proc
JOIN
pg_namespace AS ns
ON
proc.pronamespace = ns.oid
WHERE
ns.nspname='public' -- Function schema
AND
proname = 'tag_rcv_undo' -- Function name
;



Thank you.



David J.




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: As a table owner, can I grant "grant" ?

2018-07-05 Thread David G. Johnston
On Thu, Jul 5, 2018 at 9:58 AM, David Gauthier 
wrote:

> Postgres 9.5.2 on linux
>
> Given that I am not superuser, but DO have createrole...
> Can I grant some other role the ability to grant access to a table I
> created ?
>
> For Example:  I create a table called foo.  As the creator/owner of this
> table, I seem to have the ability to grant select,insert,update,delete
> (etc...) to other roles.  I would like to grant some other role the ability
> to grant access to my table.
>
>
​https://www.postgresql.org/docs/10/static/sql-grant.html

WITH GRANT OPTION

Basically, yes, but only for the specific permissions granted to them
with the "with grant option" attribute.

David J.
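A minimal sketch with hypothetical role names:

```sql
-- As owner of foo, grant SELECT along with the right to re-grant it:
GRANT SELECT ON foo TO trusted_role WITH GRANT OPTION;

-- trusted_role may now run:
--   GRANT SELECT ON foo TO other_role;
-- but only for privileges it itself holds WITH GRANT OPTION.
```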


As a table owner, can I grant "grant" ?

2018-07-05 Thread David Gauthier
Postgres 9.5.2 on linux

Given that I am not superuser, but DO have createrole...
Can I grant some other role the ability to grant access to a table I
created ?

For Example:  I create a table called foo.  As the creator/owner of this
table, I seem to have the ability to grant select,insert,update,delete
(etc...) to other roles.  I would like to grant some other role the ability
to grant access to my table.

Thanks in Advance !


Re: How to watch for schema changes

2018-07-05 Thread David G. Johnston
On Thu, Jul 5, 2018 at 8:40 AM, Igor Korot  wrote:

> Hi, David,
>
> On Tue, Jul 3, 2018 at 1:46 PM, David G. Johnston
>  wrote:
> > On Tue, Jul 3, 2018 at 11:41 AM, Igor Korot  wrote:
> >>
> >>
> >> I presume there is a query which checks for the function/trigger
> >> existence? Something like:
> >>
> >> IF NOT EXIST(SELECT * FROM ) CREATE OR REPLACE FUNCTION;
> >
> >
> > CREATE OR REPLACE is how you re-create a function whose
> > name/signature might already exist; plain CREATE assumes one doesn't
> > exist.
>
> Why do I need to re-create a function with exactly the same name and body?
> Can't I just check if such function exists?
>

You can, and depending on how often you intend to execute said code, it is
probably the better way.  It also requires pl/pgsql while CREATE OR REPLACE
"just works" as a single SQL command.  It seems easier to give you the
simple answer than to work out the details for the more complex one.

David J.
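For completeness, the check-first variant needs a pl/pgsql DO block, along these lines (a sketch; it matches on name only, ignoring argument signatures, and the function name is hypothetical):

```sql
DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1
          FROM pg_proc p
          JOIN pg_namespace n ON p.pronamespace = n.oid
         WHERE n.nspname = 'public'
           AND p.proname = 'my_func'   -- hypothetical function name
    ) THEN
        CREATE FUNCTION public.my_func() RETURNS int
            AS 'SELECT 1' LANGUAGE sql;
    END IF;
END
$$;
```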


Re: How to watch for schema changes

2018-07-05 Thread Melvin Davidson
>As I said I'm writing the client application with libpq/ODBC. How will I
>get the results?

Igor,
You DO NOT need libpq/ODBC. Just use the count option of grep (-c).
If it is greater than zero, then send a mail to yourself and/or attach the
log for review.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!
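A sketch of this approach with a stand-in log file and pattern; a real deployment would point at the actual server log and send mail instead of echoing:

```shell
# Create a stand-in for the postgresql log (assumption: real path differs):
LOG=/tmp/pg_sample.log
printf 'LOG:  statement: CREATE TABLE t(x int)\nLOG:  statement: SELECT 1\n' > "$LOG"

# grep -c counts matching lines; alert when greater than zero:
COUNT=$(grep -c 'CREATE TABLE' "$LOG")
if [ "$COUNT" -gt 0 ]; then
    echo "schema changes detected: $COUNT"   # real setup: mail -s ... < "$LOG"
fi
```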


Re: Split daterange into sub periods

2018-07-05 Thread Hellmuth Vargas
Hi

select ($$[$$|| to_char(min(n.dato),'-MM-DD') || $$,$$ ||
to_char(max(n.dato),'-MM-DD') || $$]$$)::daterange,
daterange(min(n.dato)::date,max(n.dato)::date)
from (
select u.dato,anterior,(u.dato-anterior)::interval,sum(case when anterior
is null or (u.dato -anterior)::interval='1 day'::interval  then 0 else 1
end) over(order by u.dato) as grupo
from (
select u.dato, lag(u.dato) over( order by u.dato) as anterior, lead(u.dato)
over( order by u.dato)
from (
select * from generate_series(lower('[2018-01-01,
2018-01-31]'::daterange),upper('[2018-01-01, 2018-01-31]'::daterange),'1
day'::interval) as a(dato)
except
(
select generate_series(lower(a.dato),upper(a.dato),'1 day'::interval) from
(values('[2018-01-04,2018-01-06]'::daterange),('[2018-01-09,2018-01-12]'::daterange),('[2018-01-18,2018-01-19]'::daterange))
as a(dato)

)
) as u order by u.dato
) as u
) as n
group by grupo
order by 1



        daterange        |        daterange
-------------------------+-------------------------
 [2018-01-01,2018-01-04) | [2018-01-01,2018-01-03)
 [2018-01-08,2018-01-09) | empty
 [2018-01-14,2018-01-18) | [2018-01-14,2018-01-17)
 [2018-01-21,2018-02-02) | [2018-01-21,2018-02-01)
(4 rows)




El jue., 5 de jul. de 2018 a la(s) 10:39, Andreas Kretschmer (
andr...@a-kretschmer.de) escribió:

>
>
> On 05.07.2018 15:49, hmidi slim wrote:
>
> Hi,
> I'm looking for splitting a daterange into many subperiods following this
> example:
>
> Base Date: [2018-01-01, 2018-01-31]
> overlapped_periods:
> 1- [ 2018-01-04, 2018-01-06]
> 2- [ 2018-01-09, 2018-01-12]
> 3- [ 2018-01-18, 2018-01-19]
>
> I try to get such a result:
> 1- [ 2018-01-01, 2018-01-03]
> 2- [ 2018-01-07, 2018-01-08]
> 3- [ 2018-01-13, 2018-01-17]
> 4- [ 2018-01-20, 2018-01-31]
>
> The operator '-' does not support this :
>
> SELECT daterange('2018-01-01', '2018-01-31', '[]') - daterange('2018-01-04', 
> '2018-01-06', '[]');
>
>
> I got this error:
>
> ERROR:  result of range difference would not be contiguous
>
>
> Is there any operators to make the split of daterange?
>
> andreas@[local]:5432/test# \d hmidi
> Table "public.hmidi"
>  Column |   Type    | Collation | Nullable | Default
> --------+-----------+-----------+----------+---------
>  id | integer   |   | not null |
>  d  | daterange |   |  |
> Indexes:
> "hmidi_pkey" PRIMARY KEY, btree (id)
>
> andreas@[local]:5432/test# insert into hmidi values
> (1,'[2018-01-04,2018-01-06]');INSERT 0 1
> andreas@[local]:5432/test# insert into hmidi values
> (2,'[2018-01-09,2018-01-12]');INSERT 0 1
> andreas@[local]:5432/test# insert into hmidi values
> (3,'[2018-01-18,2018-01-19]');INSERT 0 1
> andreas@[local]:5432/test# with month as (select s::date from
> generate_series('2018-01-01'::date, '2018-01-31'::date,'1day'::interval)
> s), tmp as ( select month.s, case when hmidi.d @> month.s then 1 else NULL
> end as covered from month left join hmidi on month.s <@ hmidi.d),tmp2 as (
> select *, coalesce((sum(case when covered = 1 then 1 else NULL end) over
> (order by s))+1,1) as p from tmp) select p, min(s), max(s) from tmp2 where
> covered is null group by p order by p;
>  p  |    min     |    max
> ----+------------+------------
>   1 | 2018-01-01 | 2018-01-03
>   4 | 2018-01-07 | 2018-01-08
>   8 | 2018-01-13 | 2018-01-17
>  10 | 2018-01-20 | 2018-01-31
> (4 rows)
>
>
> Regards, Andreas
> --
> 2ndQuadrant Deutschland
>


-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate


Re: How to watch for schema changes

2018-07-05 Thread Igor Korot
Hi, David,

On Tue, Jul 3, 2018 at 1:46 PM, David G. Johnston
 wrote:
> On Tue, Jul 3, 2018 at 11:41 AM, Igor Korot  wrote:
>>
>>
>> I presume there is a query which checks for the function/trigger
>> existence? Something like:
>>
>> IF NOT EXIST(SELECT * FROM ) CREATE OR REPLACE FUNCTION;
>
>
> CREATE OR REPLACE is how you re-create a function whose
> name/signature might already exist; plain CREATE assumes one doesn't
> exist.

Why do I need to re-create a function with exactly the same name and body?
Can't I just check if such function exists?

Thank you.

>
> David J.
>



Re: Split daterange into sub periods

2018-07-05 Thread Andreas Kretschmer



On 05.07.2018 15:49, hmidi slim wrote:

Hi,
I'm looking for splitting a daterange into many subperiods following 
this example:


Base Date: [2018-01-01, 2018-01-31]
overlapped_periods:
1- [ 2018-01-04, 2018-01-06]
2- [ 2018-01-09, 2018-01-12]
3- [ 2018-01-18, 2018-01-19]

I try to get such a result:
1- [ 2018-01-01, 2018-01-03]
2- [ 2018-01-07, 2018-01-08]
3- [ 2018-01-13, 2018-01-17]
4- [ 2018-01-20, 2018-01-31]

The operator '-' does not support this :
SELECT daterange('2018-01-01', '2018-01-31', '[]') - 
daterange('2018-01-04', '2018-01-06', '[]');

I got this error:
ERROR:  result of range difference would not be contiguous
Is there any operators to make the split of daterange?

andreas@[local]:5432/test# \d hmidi
    Table "public.hmidi"
 Column |   Type    | Collation | Nullable | Default
--------+-----------+-----------+----------+---------
 id | integer   |   | not null |
 d  | daterange |   |  |
Indexes:
    "hmidi_pkey" PRIMARY KEY, btree (id)

andreas@[local]:5432/test# insert into hmidi values 
(1,'[2018-01-04,2018-01-06]');INSERT 0 1
andreas@[local]:5432/test# insert into hmidi values 
(2,'[2018-01-09,2018-01-12]');INSERT 0 1
andreas@[local]:5432/test# insert into hmidi values 
(3,'[2018-01-18,2018-01-19]');INSERT 0 1
andreas@[local]:5432/test# with month as (select s::date from 
generate_series('2018-01-01'::date, '2018-01-31'::date,'1day'::interval) 
s), tmp as ( select month.s, case when hmidi.d @> month.s then 1 else 
NULL end as covered from month left join hmidi on month.s <@ 
hmidi.d),tmp2 as ( select *, coalesce((sum(case when covered = 1 then 1 
else NULL end) over (order by s))+1,1) as p from tmp) select p, min(s), 
max(s) from tmp2 where covered is null group by p order by p;

 p  |    min     |    max
----+------------+------------
  1 | 2018-01-01 | 2018-01-03
  4 | 2018-01-07 | 2018-01-08
  8 | 2018-01-13 | 2018-01-17
 10 | 2018-01-20 | 2018-01-31
(4 rows)


Regards, Andreas
--
2ndQuadrant Deutschland


Re: How to watch for schema changes

2018-07-05 Thread Igor Korot
Hi, Melvin,

On Tue, Jul 3, 2018 at 6:48 PM, Melvin Davidson  wrote:
>
>>I'm writing a client in C++ with libpq. So I will have to do a lot of
>> polling .
> Can't you just run a cron job?

And what?
As I said I'm writing the client application with libpq/ODBC. How will I get
the results?

Thank you.

>
> --
> Melvin Davidson
> Maj. Database & Exploration Specialist
> Universe Exploration Command – UXC
> Employment by invitation only!



Re: Cloning schemas

2018-07-05 Thread Łukasz Jarych
Melvin,

Thank you once again! Yes, and this is working like a charm;
I love your function and file!

Best,
Jacek

czw., 5 lip 2018 o 16:53 Melvin Davidson  napisał(a):

>
>
>
> On Thu, Jul 5, 2018 at 10:38 AM, Łukasz Jarych  wrote:
>
>> You gave me working example.
>>
>> the function from here is not working:
>>
>>
>> https://www.postgresql.org/message-id/CANu8FiyJtt-0q%3DbkUxyra66tHi6FFzgU8TqVR2aahseCBDDntA%40mail.gmail.com
>>
>> Best,
>> Jacek
>>
>> czw., 5 lip 2018 o 16:02 Melvin Davidson 
>> napisał(a):
>>
>>>
>>>
>>> On Thu, Jul 5, 2018 at 1:49 AM, Łukasz Jarych 
>>> wrote:
>>>
 From link function is not working.

>>>
>>> There is no "From link" in PostgreSQL.
>>> Would you please be more specific? Please provide a working example.
>>>
>>>
> >the function from here is not working:
>
>
> >https://www.postgresql.org/message-id/CANu8FiyJtt-0q%3DbkUxyra66tHi6FFzgU8TqVR2aahseCBDDntA%40mail.gmail.com
> 
>
>
> Jacek,
>
> That is NOT the version for PostgreSQL 10. It is for 9.6 and below only!
> I attached the working function for 10 and it is attached again to this
> response.
>
>
> --
> *Melvin Davidson*
> *Maj. Database & Exploration Specialist*
> *Universe Exploration Command – UXC*
> Employment by invitation only!
>


Re: Cloning schemas

2018-07-05 Thread Melvin Davidson
On Thu, Jul 5, 2018 at 10:38 AM, Łukasz Jarych  wrote:

> You gave me working example.
>
> the function from here is not working:
>
> https://www.postgresql.org/message-id/CANu8FiyJtt-0q%
> 3DbkUxyra66tHi6FFzgU8TqVR2aahseCBDDntA%40mail.gmail.com
>
> Best,
> Jacek
>
> czw., 5 lip 2018 o 16:02 Melvin Davidson 
> napisał(a):
>
>>
>>
>> On Thu, Jul 5, 2018 at 1:49 AM, Łukasz Jarych  wrote:
>>
>>> From link function is not working.
>>>
>>
>> There is no "From link" in PostgreSQL.
>> Would you please be more specific? Please provide a working example.
>>
>>
>the function from here is not working:

>https://www.postgresql.org/message-id/CANu8FiyJtt-0q%
3DbkUxyra66tHi6FFzgU8TqVR2aahseCBDDntA%40mail.gmail.com



Jacek,

That is NOT the version for PostgreSQL 10. It is for 9.6 and below only!
I attached the working function for 10 and it is attached again to this
response.


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!

-- Function: public.clone_schema(text, text, boolean)

-- DROP FUNCTION public.clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION public.clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2018-07-03 by Melvin Davidson
--  Added ' OVERRIDING SYSTEM VALUE' for INSERT records
--
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid  oid;
  tbl_oid  oid;
  seq_oid  oid;
  func_oid oid;
  owner_id oid;
  con_oid  oid;
  v_path   text;
  v_func   text;
  v_args   text;
  v_conname        text;
  v_rule   text;
  v_trig   text;
  object   text;
  buffer   text;
  srctbl   text;
  default_ text;
  v_column text;
  qry  text;
  dest_qry text;
  v_def            text;
  v_stat   integer;
  seqval   bigint;
  seq_name name;
  sq_type  oid;
  sq_last_value    bigint;
  sq_max_value bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value bigint;
  sq_cache_value   bigint;
  sq_log_cnt   bigint;
--  sq_is_called boolean;
  sq_is_cycled boolean;
  sq_typname   name;
  seq_cycled   text;
  seq_owner        name;
--  sq_cycledchar(10);
  
BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
   
  IF NOT FOUND
THEN 
  RAISE NOTICE 'source schema % does not exist!', source_schema;
  RETURN ;
  END IF;

  -- Check that dest_schema does not yet exist
  PERFORM nspname 
FROM pg_namespace
   WHERE nspname = quote_ident(dest_schema);
   
  IF FOUND
THEN 
  RAISE NOTICE 'dest schema % already exists!', dest_schema;
  RETURN ;
  END IF;

  EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;
   
  -- Add schema comment
  SELECT description INTO v_def
FROM pg_description
   WHERE objoid = src_oid
 AND objsubid = 0;

  IF FOUND 
THEN 
  EXECUTE 'COMMENT ON SCHEMA ' || quote_ident(dest_schema) || ' IS ' || 
quote_literal(v_def);
  END IF;

  SET search_path TO dest_schema, source_schema, 'public', pg_catalog;  
  SELECT current_schemas(TRUE) INTO v_path;
  RAISE NOTICE 'search path = %', v_path;

  -- Create sequences
  -- TODO: Find a way to make the sequence's owner the correct table.
  FOR object IN
 SELECT sequence_name::text 
  FROM information_schema.sequences
 WHERE sequence_schema = quote_ident(source_schema)

  LOOP

SELECT oid, relowner into seq_oid, owner_id
  FROM pg_class 
 WHERE relname = quote_ident(object)
   AND relkind = 'S';

SELECT seqtypid,
   seqstart,
   seqincrement,
   seqmax,
   seqmin,
   seqcache,
   seqcycle
  INTO sq_type, 
   sq_start_value,
   sq_increment_by,
   sq_max_value,
   sq_min_value,
   sq_cache_value,
   sq_is_cycled
  FROM pg_sequence
 WHERE seqrelid = seq_oid;
 
SELECT typname INTO sq_typname
  FROM pg_type 
 WHERE oid = sq_type;

SELECT rolname INTO seq_owner
  FROM pg_authid WHERE oid = owner_id;

--EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(object);
--srctbl := quote_ident(source_schema) || '.' || quote_ident(object);

IF sq_is_cycled 
  THEN 
seq_cycled = ' 

Re: Split daterange into sub periods

2018-07-05 Thread Francisco Olarte
On Thu, Jul 5, 2018 at 4:16 PM, hmidi slim  wrote:
> In fact I'm trying to split a period in sub periods. Following this example
> :
> If I have a period =[2018-01-01, 2018-01-31] and two other periods
> [2018-01-04, 2018-01-06] and [2018-01-08, 2018-01-08].
> If I split the base period '[2018-01-01, 2018-01-31]' by the other two
> periods '[2018-01-04, 2018-01-06]' and '[2018-01-08, 2018-01-08]', I will get
> such a result:
> [2018-01-01, 2018-01-03]
> [2018-01-07, 2018-01-07]
> [2018-01-09, 2018-01-31].

Your example may be fine to illustrate a definition, but it is totally
inadequate to describe what you want.

You'll, at least, need to refine your question. Something like "I have
a big period and a set of small periods, and want to calculate the set
of subperiods of the big one not covered by the small ones", and also
some conditions ( i.e., can the small periods intersect? are they
completely covered by the big one? can they include the endpoints of
the big one? Can they be empty? Does the result need to be minimal (
in the number of result periods ) ? ).

Even then, this is not trivial and, as said above, you will probably
need a function for it.


Francisco Olarte.
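Assuming the small periods are sorted, disjoint, and strictly inside the big one, such a function could look like this (a sketch under those assumptions, not a definitive implementation; it relies on daterange's canonical [) form):

```sql
CREATE OR REPLACE FUNCTION split_range(base daterange, holes daterange[])
RETURNS SETOF daterange AS $$
DECLARE
    cur date := lower(base);
    h   daterange;
BEGIN
    -- holes are assumed sorted and non-overlapping
    FOREACH h IN ARRAY holes LOOP
        IF cur < lower(h) THEN
            RETURN NEXT daterange(cur, lower(h));
        END IF;
        cur := upper(h);
    END LOOP;
    IF cur < upper(base) THEN
        RETURN NEXT daterange(cur, upper(base));
    END IF;
END;
$$ LANGUAGE plpgsql;

-- SELECT * FROM split_range(
--     daterange('2018-01-01', '2018-01-31', '[]'),
--     ARRAY[daterange('2018-01-04', '2018-01-06', '[]'),
--           daterange('2018-01-09', '2018-01-12', '[]')]);
```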



Re: Split daterange into sub periods

2018-07-05 Thread hmidi slim
In fact I'm trying to split a period into sub periods, following this
example:
If I have a period =[2018-01-01, 2018-01-31] and two other periods
[2018-01-04, 2018-01-06] and [2018-01-08, 2018-01-08].
If I split the base period '[2018-01-01, 2018-01-31]' by the other two
periods '[2018-01-04, 2018-01-06]' and '[2018-01-08, 2018-01-08]', I will
get such a result:
[2018-01-01, 2018-01-03]
[2018-01-07, 2018-01-07]
[2018-01-09, 2018-01-31].


Split daterange into sub periods

2018-07-05 Thread David G. Johnston
On Thursday, July 5, 2018, hmidi slim  wrote:
>
> I got this error:
>
>
>
> ERROR:  result of range difference would not be contiguous
>
> Is there any operators to make the split of daterange?

To refine what Adrian said, operators cannot return a setof result, so this
is basically impossible.  The interface for such a behavior would have to
be a function.  You can use one someone already wrote elsewhere or you can
write your own.  There isn't one in core that I see.

David J.


Re: Split daterange into sub periods

2018-07-05 Thread Adrian Klaver

On 07/05/2018 06:49 AM, hmidi slim wrote:

Hi,
I'm looking for splitting a daterange into many subperiods following 
this example:


Base Date: [2018-01-01, 2018-01-31]
overlapped_periods:
1- [ 2018-01-04, 2018-01-06]
2- [ 2018-01-09, 2018-01-12]
3- [ 2018-01-18, 2018-01-19]


Overlapping what?
They are not overlapping each other.



I try to get such a result:
1- [ 2018-01-01, 2018-01-03]
2- [ 2018-01-07, 2018-01-08]
3- [ 2018-01-13, 2018-01-17]
4- [ 2018-01-20, 2018-01-31]


Not sure what the above represents.
Are you looking for the dates in the range [2018-01-01, 2018-01-31] that 
are not in the 3 sub-ranges at the top of the post?




The operator '-' does not support this:

SELECT daterange('2018-01-01', '2018-01-31', '[]') - 
daterange('2018-01-04', '2018-01-06', '[]');


I got this error:

ERROR: result of range difference would not be contiguous


That is expected:

https://www.postgresql.org/docs/10/static/functions-range.html

"The union and difference operators will fail if the resulting range 
would need to contain two disjoint sub-ranges, as such a range cannot be 
represented."
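To illustrate the docs' point, the same operator does succeed when the
result stays contiguous, e.g. when the subtracted range abuts one end of
the base range:

```sql
-- Subtracting a range that reaches one end of the base range keeps the
-- result contiguous, so '-' succeeds:
SELECT daterange('2018-01-01', '2018-01-31', '[]')
     - daterange('2018-01-20', '2018-01-31', '[]');
-- → [2018-01-01,2018-01-20)   (daterange canonicalizes to '[)' form)
```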




Is there any operator to make the split of a daterange?

Not that I know of.

--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Cloning schemas

2018-07-05 Thread Melvin Davidson
On Thu, Jul 5, 2018 at 1:49 AM, Łukasz Jarych  wrote:

> From link function is not working.
>

There is no "From link" in PostgreSQL.
Would you please be more specific? Please provide a working example.


Split daterange into sub periods

2018-07-05 Thread hmidi slim
Hi,
I'm looking to split a daterange into many sub-periods, following this
example:

Base Date: [2018-01-01, 2018-01-31]
overlapped_periods:
1- [ 2018-01-04, 2018-01-06]
2- [ 2018-01-09, 2018-01-12]
3- [ 2018-01-18, 2018-01-19]

I'm trying to get a result like this:
1- [ 2018-01-01, 2018-01-03]
2- [ 2018-01-07, 2018-01-08]
3- [ 2018-01-13, 2018-01-17]
4- [ 2018-01-20, 2018-01-31]

The operator '-' does not support this:

SELECT daterange('2018-01-01', '2018-01-31', '[]') -
daterange('2018-01-04', '2018-01-06', '[]');

I got this error:

ERROR: result of range difference would not be contiguous

Are there any operators to split a daterange?


Re: FK v.s unique indexes

2018-07-05 Thread David G. Johnston
On Thursday, July 5, 2018, Rafal Pietrak  wrote:
>
> W dniu 04.07.2018 o 00:55, David G. Johnston pisze:
> > On Tuesday, July 3, 2018, Rafal Pietrak wrote:
> >
> >
> > ERROR:  there is no unique constraint matching given keys for
> referenced
> > table "test2"
> > 
> >
> > I cannot see any reasons why this functionality is blocked.
> >
> > In particular, contrary to what the ERROR says, the target table
> *does
> > have* a "unique constraint matching given keys", admittedly only
> > partial.
> >
> >
> > You are making the common error of confusing the distinct concepts of
> > constraints and indexes.  Table constraints cannot be partial by
> > definition, and are a logical concept constraining the data model.
>
> Hmmm..
>
> This does not match "my reality". Naturally I may be wrong, but the
> example I've posted reflects my actual data I'm putting into the RDBMS.
> That is:
> 1. the data has unique constraint on (load,a,b,c)
> 2. and the data have additional unique constraints on (load,a), provided
> c is true, and (load,b) whenever c is false.
>

I probably generalized too much, especially since exclusion constraints can
be partial, so I'll be specific here.  Foreign keys don't have WHERE
clauses and thus cannot target partial constraints.  If you want to
overcome that limitation you can write a custom trigger.
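A minimal sketch of such a trigger, using table and column names from the
earlier example in this thread — note this is only an illustration, not a
full FK replacement (it does not handle deletes or updates on the
referenced table, nor concurrency):

```sql
-- Hypothetical referencing table "test1" pointing at the partially
-- unique columns (load, a) of "test2" where c is true.  The trigger
-- rejects inserts/updates that have no matching parent row.
CREATE OR REPLACE FUNCTION test1_check_fk() RETURNS trigger AS $$
BEGIN
  PERFORM 1 FROM test2
   WHERE load = NEW.load AND a = NEW.a AND c;
  IF NOT FOUND THEN
    RAISE foreign_key_violation
      USING MESSAGE = format('no row in test2 with (load, a) = (%s, %s) and c',
                             NEW.load, NEW.a);
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test1_fk_check
  BEFORE INSERT OR UPDATE ON test1
  FOR EACH ROW EXECUTE PROCEDURE test1_check_fk();
```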

I'm sure at least some of this is simply due to desirability as opposed to
some fundamental limitation, but in the end that is how the system works
today.  Inter-table FK relationships are defined over the entire PK
table, not a subset.

David J.


Inefficient plan on 10.4

2018-07-05 Thread Peter J. Holzer
I have this table:

                                 Table "public.relation"
      Column       |            Type             | Collation | Nullable |               Default
-------------------+-----------------------------+-----------+----------+--------------------------------------
 parent            | integer                     |           | not null |
 child             | integer                     |           | not null |
 type              | character varying           |           |          |
 sortorder         | integer                     |           |          |
 valid_from        | timestamp without time zone |           | not null | now()
 from_job_queue_id | integer                     |           |          |
 id                | integer                     |           | not null | nextval('relation_id_seq'::regclass)
Indexes:
    "relation_pkey" PRIMARY KEY, btree (id)
    "relation_child_idx" btree (child)
    "relation_parent_idx" btree (parent)
Foreign-key constraints:
    "relation_child_fkey" FOREIGN KEY (child) REFERENCES concept(id) DEFERRABLE
    "relation_parent_fkey" FOREIGN KEY (parent) REFERENCES concept(id) DEFERRABLE

which has about 150 million rows:

wdsah=> select count(*) from relation;
   count
-----------
 147810590
(1 row)

I'm trying to get the siblings of a node (including itself):

select r2.parent, r2.type, r2.child
from relation r1, relation r2
where r1.child=643541 and r2.parent=r1.parent
order by r2.type

This worked fine on 9.5, but on 10.4 it takes several seconds:

                                                                        QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather Merge  (cost=12032221.68..25195781.12 rows=112822632 width=15) (actual time=4086.255..4086.257 rows=2 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Sort  (cost=12031221.66..12172249.95 rows=56411316 width=15) (actual time=4080.862..4080.862 rows=1 loops=3)
         Sort Key: r2.type
         Sort Method: quicksort  Memory: 25kB
         ->  Nested Loop  (cost=0.57..1876228.04 rows=56411316 width=15) (actual time=3036.547..4080.826 rows=1 loops=3)
               ->  Parallel Seq Scan on relation r1  (cost=0.00..1856722.83 rows=1 width=4) (actual time=3036.525..4080.802 rows=0 loops=3)
                     Filter: (child = 643541)
                     Rows Removed by Filter: 49270196
               ->  Index Scan using relation_parent_idx on relation r2  (cost=0.57..17041.69 rows=246351 width=15) (actual time=0.053..0.055 rows=2 loops=1)
                     Index Cond: (parent = r1.parent)
 Planning time: 0.418 ms
 Execution time: 4090.442 ms

If I disable sequential scans, it works fine again:

                                                                     QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=25246497.92..25584965.81 rows=135387158 width=15) (actual time=0.119..0.119 rows=2 loops=1)
   Sort Key: r2.type
   Sort Method: quicksort  Memory: 25kB
   ->  Nested Loop  (cost=1.14..19513.79 rows=135387158 width=15) (actual time=0.101..0.104 rows=2 loops=1)
         ->  Index Scan using relation_child_idx on relation r1  (cost=0.57..8.59 rows=1 width=4) (actual time=0.079..0.080 rows=1 loops=1)
               Index Cond: (child = 643541)
         ->  Index Scan using relation_parent_idx on relation r2  (cost=0.57..17041.69 rows=246351 width=15) (actual time=0.018..0.020 rows=2 loops=1)
               Index Cond: (parent = r1.parent)
 Planning time: 0.446 ms
 Execution time: 0.165 ms

There are several points about these plans that I don't understand:

* Why does the nested loop expect 56E6 or even 135E6 rows? It expects 1
  row for the outer table and then 246351 rows in the inner table for
  each of them. 1 * 246351 == 246351. So it should expect 246351 rows.
  (246351 itself is way too high, but the table has a very skewed
  distribution, and I've already set the statistics target to the
  maximum of 10000, so there's not much I can do about that)

* Why does the Parallel Seq Scan report actual rows=0? It did return 1
  row (or is that the average per worker? That would be < 1, and
  probably rounded down to 0)
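For reference, the statistics-target change mentioned above is a per-column
setting; a sketch of what that looks like (assuming the skewed column is
parent):

```sql
-- Raise the per-column statistics target (10000 is the maximum) so
-- ANALYZE samples more rows and builds larger MCV lists/histograms.
ALTER TABLE relation ALTER COLUMN parent SET STATISTICS 10000;
ANALYZE relation;
```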

hp

PS: The obvious workaround is to remove "order by r2.type". I can easily
get the required partial order in the application. But I'd like to
understand what the optimizer is doing here.


-- 
   _  | Peter J. Holzer | we build much bigger, better disasters now
|_|_) |                 | because we have much more sophisticated
| |   | h...@hjp.at     | management tools.
__/

Re: FK v.s unique indexes

2018-07-05 Thread pinker
David Rowley-3 wrote
> I don't think there were any actual roadblocks, it was more of just
> not enough time in the cycle to make it work due to a few technical
> details that required extra effort to make work.
> 
> Alvaro managed to simplify the problem and allow foreign keys to be
> defined on partitioned tables and get that into PG11.
> 
> So it was a case of 50% is better than 0%, which I very agree with.

That's really great news. I've been waiting for this feature for many years.
Finally! Thank you guys!



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: FK v.s unique indexes

2018-07-05 Thread Rob Sargent



> On Jul 5, 2018, at 1:30 AM, Rafal Pietrak  wrote:
> 
> 
> 
> W dniu 04.07.2018 o 00:55, David G. Johnston pisze:
>> On Tuesday, July 3, 2018, Rafal Pietrak wrote:
>> 
>> 
>>ERROR:  there is no unique constraint matching given keys for referenced
>>table "test2"
>>
>> 
>>I cannot see any reasons why this functionality is blocked.
>> 
>>In particular, contrary to what the ERROR says, the target table *does
>>have* a "unique constraint matching given keys", admittedly only
>>partial.
>> 
>> 
>> You are making the common error of confusing the distinct concepts of
>> constraints and indexes.  Table constraints cannot be partial by
>> definition, and are a logical concept constraining the data model. 
> 
> Hmmm..
> 
> This does not match "my reality". Naturally I may be wrong, but the
> example I've posted reflects my actual data I'm putting into the RDBMS.
> That is:
> 1. the data has unique constraint on (load,a,b,c)
> 2. and the data have additional unique constraints on (load,a), provided
> c is true, and (load,b) whenever c is false.
> 
> Pls consider in real life: load (a person), can have either a (a kind of
> breast cancer); or b (a kind of prostate) - this is only a cooked
> example attempting to illustrate, that one may need to put additional
> constraints on the entire dataset.
> 
 
It’s difficult enough to define a unique person (without mother and father) and
certainly this week's definition of burden is not likely to help matters.  If
your main worry is data consistency you might be better off normalizing your
structure - either with separate tables per cancer type (person id, cancer
specifics; unique on person) or in a single table one per line (person id,
cancer type, cancer description; unique on person). You can reconstitute
person,breast,prostate from either of those.  We won’t quibble on one person
having both (though remotely possible, men do get breast cancer).
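A sketch of the single-table variant, with all names invented for
illustration (a person table with an id column is assumed to exist):

```sql
-- One diagnosis row per person; the plain UNIQUE constraint replaces
-- the partial unique indexes from the original design, so an ordinary
-- foreign key can target it.
CREATE TABLE diagnosis (
    person_id   integer NOT NULL REFERENCES person(id),
    cancer_type text    NOT NULL CHECK (cancer_type IN ('breast', 'prostate')),
    description text,
    UNIQUE (person_id)            -- at most one diagnosis per person
);
```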




Re: FK v.s unique indexes

2018-07-05 Thread Rafal Pietrak



W dniu 03.07.2018 o 11:23, David Rowley pisze:
> On 3 July 2018 at 19:30, Rafal Pietrak  wrote:
[---]
>>
>> Why is this forbidden?
> 
> I don't think there were any actual roadblocks, it was more of just
> not enough time in the cycle to make it work due to a few technical
> details that required extra effort to make work.
> 
> Alvaro managed to simplify the problem and allow foreign keys to be
> defined on partitioned tables and get that into PG11.
> 
> So it was a case of 50% is better than 0%, which I very agree with.
> 

I see.

It's great and useful, yes. And I'm looking forward to seeing the other 50% :)

Thenx!!

-R