Proposition for columns expanding: table_name.**

2021-05-22 Thread Eugen Konkov
Hello

Some time ago I asked about "Proposition for autoname columns":
https://www.postgresql.org/message-id/13139.20201102170529%40yandex.ru


Now I have another idea. How about table_name.**?

which would be expanded to: table_name.id, table_name.name, table_name.qty, etc.


In my original query I cannot just write:
SELECT
  acc_i.*,
  acc_u.*
FROM "order_bt" o
LEFT JOIN acc_ready( 'Invoice',  app_period(), o ) acc_i  ON acc_i.ready
LEFT JOIN acc_ready( 'Usage',   app_period(), o ) acc_u ON acc_u.ready

because then I cannot refer to the columns from the different tables: their names are the same =(

So I need to write:
SELECT
   acc_i.ready             as acc_i_ready,
   acc_i.acc_period        as acc_i_period,
   acc_i.consumed_period   as acc_i_consumed_period,
   acc_u.ready             as acc_u_ready,
   acc_u.acc_period        as acc_u_period,
   acc_u.consumed_period   as acc_u_consumed_period
FROM "order_bt" o
LEFT JOIN acc_ready( 'Invoice',  app_period(), o ) acc_i  ON acc_i.ready
LEFT JOIN acc_ready( 'Usage',   app_period(), o ) acc_u ON acc_u.ready
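
As a side note, today I can generate such an alias list from the catalogs and paste it
into the query. A minimal sketch of that workaround (my own, not part of the proposal;
'acc' is a hypothetical relation or composite type name visible in pg_attribute):

SELECT string_agg( format( 'acc_i.%I AS acc_i_%s', attname, attname ), E',\n  ' ORDER BY attnum )
FROM pg_attribute
WHERE attrelid = 'acc'::regclass
  AND attnum > 0 AND NOT attisdropped;

But having table_name.** expand to this automatically would remove the need for such tricks.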


It would be cool if I could just write:

SELECT
  acc_i.**,
  acc_u.**
FROM "order_bt" o
LEFT JOIN acc_ready( 'Invoice',  app_period(), o ) acc_i  ON acc_i.ready
LEFT JOIN acc_ready( 'Usage',   app_period(), o ) acc_u ON acc_u.ready


What do you think about this proposition?
-- 
Best regards,
Eugen Konkov





Re: [PATCH] distinct aggregates within a window function WIP

2021-05-13 Thread Eugen Konkov
I resolved my problem: https://stackoverflow.com/a/67167595/4632019

Could PG use the `filter` trick when DISTINCT is used, i.e. `sum( distinct suma )`?
This would save writing the second SELECT.


https://www.postgresql.org/message-id/CAN1PwonqojSAP_N91zO5Hm7Ta4Mdib-2YuUaEd0NP6Fn6XutzQ%40mail.gmail.com
> About your additional note, I think that it is not possible to easily get
> the same result with appropriate use of window framing options,

Can you try the following approach?



My approach is https://dbfiddle.uk/?rdbms=postgres_13&fiddle=01c699f3f47ca9fca8215f8cbf556218:
1. Assign a row number within each order: row_number() over (partition by agreement_id, order_id) as nrow
2. Take only the first suma of each order: filter (where nrow = 1)



with data as (
  select * from (values
  ( 1, 1, 1, 1.0049 ), (2, 1,1,1.0049), ( 3, 1,1,1.0049 ) ,
  ( 4, 1, 2, 1.0049 ), (5, 1,2,1.0057),
  ( 6, 2, 1, 1.53 ), ( 7,2,1,2.18), ( 8,2,2,3.48 )
 ) t (id, agreement_id, order_id, suma)
),
intermediate as (select
 *,
 row_number() over (partition by agreement_id, order_id ) as nrow,
 (sum( suma ) over ( partition by agreement_id, order_id ))::numeric( 10, 2 ) as order_suma
from data)

select
  *,
  sum( order_suma ) filter (where nrow = 1) over (partition by agreement_id)
from intermediate
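
For comparison, this is what I would like to be able to write directly (inside the same
WITH query as above); today PG rejects it with "DISTINCT is not implemented for window
functions":

select
  *,
  sum( DISTINCT order_suma ) over (partition by agreement_id)
from intermediate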


Wednesday, April 22, 2020, 10:05:19 AM, you wrote:

> Hello Andreas,

> Tuesday, April 21, 2020, 6:17:00 PM, you wrote:

>> On 4/21/20 5:06 PM, Eugen Konkov wrote:
>>> Hi. I read the thread.

>>> Probably this fiddle will be helpful for testing:

>>> https://dbfiddle.uk/?rdbms=postgres_12&fiddle=abe845142a5099d921d3729043fb8491

>>> I recently encountered a problem:
>>> Why Window-specific functions do not allow DISTINCT to be used within the 
>>> function argument list?

>>> sum( DISTINCT order_cost ) OVER ( PARTITION BY invoice_id ORDER BY 
>>> invoice_id, group_id RANGE unbound preceeding and unbound following )

>>> behavior is quite deterministic:

>>> ORDER BY will create peers in partition
>>> DISTINCT will get only one peer

>>> I  resolve  my problem via two subqueries, but it seems this logic may
>>> be applied to window functions (did not check this for other functions 
>>> thought)

>> Sorry, I do not follow. What problem did you encounter?

> Lack of DISTINCT for window function SUM






--
Best regards,
Eugen Konkov  

Re: Feature Request: Report additionally error value

2020-12-08 Thread Eugen Konkov
o).package_id
  AND pd.resource_type_id IS NOT DISTINCT FROM (ocd.c).resource_type_id
  AND pd.service_type_id  IS NOT DISTINCT FROM (ocd.c).service_type_id
  LEFT JOIN resource_type rt  ON rt.id  = (ocd.c).resource_type_id
  LEFT JOIN service_type  st  on st.id  = (ocd.c).service_type_id
)

SELECT *,
  (group_suma/6) ::numeric( 10, 2 ) as group_nds,
  (SELECT sum(x) from (SELECT sum( DISTINCT group_suma ) AS x FROM usage sub_u
    WHERE sub_u.agreement_id = usage.agreement_id GROUP BY agreement_id, order_id) t) as total_suma,
  (SELECT sum(x) from (SELECT sum( DISTINCT (group_suma/6)::numeric( 10, 2 ) ) AS x FROM usage sub_u
    WHERE sub_u.agreement_id = usage.agreement_id GROUP BY agreement_id, order_id) t) as total_nds
FROM usage
where ? <> 3  OR  consumed_count > 1
ORDER BY
  /* put order first then allocated resource without Order */
  agreement,
  order_id,
  bill_range,
  group_id,
  sort_order nulls last,
  detail_type  nulls last,
  price desc   nulls last,
  detail_amount desc,
  service_type nulls last,
  detail_id
)
) "me"" with ParamValues: 1='2020-08-01', 2='2', 3='2', 4=undef,
5='2', 6='3493', 7='10', 8='2', 9='2'] at
/home/kes/work/projects/tucha/monkeyman/lib/MaitreD/Controller/Cart.pm line 828


--
Best regards,
Eugen Konkov



-- 
Best regards,
Eugen Konkov

DBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st execute
failed: ERROR:  range lower bound must be less than or equal to range upper
bound
CONTEXT:  SQL function "accounting_ready" statement 1 [for Statement "SELECT 
COUNT( * ) FROM (( WITH
target_date AS ( SELECT ?::timestamptz ),
target_order as (
  SELECT
usage_range as bill_range,
o.*
  FROM ( SELECT * FROM "order_bt" WHERE sys_period @> sys_time() ) o
  LEFT JOIN period prd on prd.id = o.period_id
  LEFT JOIN accounting_ready(
CASE WHEN prd.INTERVAL = '1 mon' THEN date_trunc( 'month', o.docdate ) ELSE 
o.docdate END,
prd.INTERVAL, (SELECT * FROM target_date)
  ) acc ON true
  WHERE FALSE
OR ? = 0
OR ? = 1 AND o.id = ?
  AND acc.usage AND EXISTS (
SELECT * FROM order_bt prev_order
WHERE sys_period @> sys_time() AND
  prev_order.id = o.id  AND prev_order.app_period && acc.usage_range
  )
  AND o.app_period && acc.usage_range
OR ? = 2 AND o.agreement_id = ? and o.period_id = ?
  AND acc.usage AND EXISTS (
SELECT * FROM order_bt prev_order
WHERE sys_period @> sys_time() AND
  prev_order.id = o.id  AND prev_order.app_period && acc.usage_range
  )
  AND o.app_period && acc.usage_range
OR ? = 3
),
USAGE AS ( SELECT
  (ocd.o).id  as order_id,
  (dense_rank() over (PARTITION BY o.agreement_id, o.id ORDER BY 
(ocd.ic).consumed_period )) as group_id,
  (ocd.c).id  as detail_id,
  (ocd.c).service_type_id as service_type,
  (ocd.c).resource_type_id    as detail_type,
  (ocd.c).amount  as detail_amount,
  (ocd.c).allocated_resource_id   as resource_id,
  null    as resource_uuid,
  null    as resource_desc,
  rt.unit as resource_unit,
  -- count changes. Logic is next: How many times the configuration for this order is met at this period
  count(*) OVER (PARTITION BY (ocd.o).id, (ocd.c).id )  as consumed_count,
  (ocd.ic).consumed   as consumed_days,
  null    as consumed_amount,
  null    as consumed_last,

  a.id    as agreement_id,
  coalesce( substring( a.docn from '^A?(.*?)(?:\s*-[^-]*)?$' ), a.docn ) as agreement,
  a.docdate   as docdate,

  pkg.id    as package_id,
  pkg.link_1c_id    as package_1c_id,
  coalesce( pkg.display, pkg.name ) as package,

  coalesce( st.display, st.name, rt.display, rt.name )  as sr_name,
  COALESCE( (ocd.p).label, rt.LABEL )   as unit_label,

  coalesce( (ocd.c).sort_order, pd.sort_order ) as sort_order,

  ocd.item_price    AS price,
  -- We want to display QTY for resources too
  coalesce( ocd.item_qty, (ocd.c).amount/rt.unit )  AS qty,
  0 AS month_length,
  0 AS days_count,
  o.bill_range,
  lower( (ocd.ic).consumed_period )  as consumed_from,
  upper( (ocd.ic).consumed_period ) -interval '1sec' as consumed_till,


  ocd.item_suma,

  0 as discount,
  (sum( ocd.item_suma ) OVER( PARTITION BY (ocd.o).id, (ocd.ic).consumed_period ))::numeric( 10, 2 ) AS group_suma,
  (sum( ocd.item_cost ) OVER( PARTITION BY (ocd.o).id, (ocd.ic).consumed_period ))::numeric( 10, 2 ) AS order_cost
FROM target_order o
LEFT JOIN order_cost_details( o.bill_range ) ocd
  ON (ocd.o).id = o.id  AND  (ocd.ic

Re: Feature Request: Report additionally error value

2020-12-08 Thread Eugen Konkov
d.c).service_type_id
  LEFT JOIN resource_type rt  ON rt.id  = (ocd.c).resource_type_id
  LEFT JOIN service_type  st  on st.id  = (ocd.c).service_type_id
)

SELECT *,
  (group_suma/6) ::numeric( 10, 2 ) as group_nds,
  (SELECT sum(x) from (SELECT sum( DISTINCT group_suma )   
AS x FROM usage sub_u WHERE sub_u.agreement_id = usage.agreement_id GROUP BY 
agreement_id, order_id) t) as total_suma,
  (SELECT sum(x) from (SELECT sum( DISTINCT (group_suma/6)::numeric( 10, 2 ) ) 
AS x FROM usage sub_u WHERE sub_u.agreement_id = usage.agreement_id GROUP BY 
agreement_id, order_id) t) as total_nds
FROM usage
where ? <> 3  OR  consumed_count > 1
ORDER BY
  /* put order first then allocated resource without Order */
  agreement,
  order_id,
  bill_range,
  group_id,
  sort_order nulls last,
  detail_type  nulls last,
  price desc   nulls last,
  detail_amount desc,
  service_type nulls last,
  detail_id
)
) "me"" with ParamValues: 1='2020-08-01', 2='2', 3='2', 4=undef, 5='2', 
6='3493', 7='10', 8='2', 9='2'] at 
/home/kes/work/projects/tucha/monkeyman/lib/MaitreD/Controller/Cart.pm line 828


-- 
Best regards,
Eugen Konkov

Feature Request: Report additionally error value

2020-11-28 Thread Eugen Konkov
Hi all.

I often run into an error like this:

DBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st execute 
failed: ERROR:  timestamp out of range
CONTEXT:  SQL function "accounting_ready" statement 1 [for Statement "SELECT 
COUNT( * ) FROM (WITH
target_date AS ( SELECT ?::timestamptz ),
target_order as (
  SELECT
invoice_range as bill_range,
o.*
  FROM ( SELECT * FROM "order_bt" WHERE sys_period @> sys_time() ) o
  LEFT JOIN period prd on prd.id = o.period_id
  LEFT JOIN accounting_ready(
...
(another 200 lines of the query follow)

It would be nice if the error value were also reported here.
That would shed more light on what is going wrong.
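
For example, here is a minimal reproduction of this class of error, written only for
illustration (it is not the original query); the message names the failure but not the
offending value:

SELECT '2020-01-01'::timestamptz + interval '500000 years';
-- ERROR:  timestamp out of range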

It would also be useful if PG pointed at the place in the query where this bad
value was calculated or occurred.

Is this possible?

Thank you.


-- 
Best regards,
Eugen Konkov

Re: Proposition for autoname columns

2020-11-12 Thread Eugen Konkov

> On 2020-Nov-12, Tom Lane wrote:

>> On the whole, I'm on the side of the people who don't want to change this.
>> The implementation cost seems likely to greatly outweigh the value, plus
>> it feels more like a wart than a feature.

> I think if Eugen wants to spend some time with it and see how it could
> be implemented, then sent a patch for consideration, then we could make
> a better informed decision.  My own opinion is that it's not worth the
> trouble, but I'd rather us not stand in his way if he wants to try
> (With disclaimer that we might end up not liking the patch, of course).

Sorry, I am not a C/C++ programmer and cannot imagine how to start on a patch.
I do not know the internals of PG. The only useful thing from me is the idea
itself, to make the world a bit better.

I suppose initially there was only ?column?; later, names were implemented for
count, sum, etc.
But it would be cool if PG went a step further and named sum( a ) as sum_a
instead of just sum.
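
For illustration, the current defaults (standard behavior today):

SELECT count(a), sum(a) FROM (VALUES (1), (2)) t(a);
--  count | sum
-- -------+-----
--      2 |   3

With this proposition the columns would become count_a and sum_a.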

The purpose of this proposition is not perfect name generation; the purpose is
to get more distinct default names:

?column?, ?column?, ?column?, ?column?, ?column?, ?column?, ?column?

?count?, ?count?, ?count?, ?sum?, ?sum?, ?sum?, ?sum?

?count_a?, ?count_b?, ?count_c?, ?sum_a?, ?sum_b?, ?sum_c?, ?sum_d?

Notice that the last is more robust than the first ;-)

I suppose we could just ignore complex cases and leave them as they currently
are. We could try some very small step in the direction of improving the
default names and see feedback from many users on how useful it is. Then we
can decide whether it is worth implementing a whole system for default name
generation.

Unfortunately I cannot judge at which level this should happen: the parser,
the analyzer, or elsewhere. I just do not understand those internals =(

Thank you.

-- 
Best regards,
Eugen Konkov





Re: Proposition for autoname columns

2020-11-12 Thread Eugen Konkov
Hello Andrew,

Thursday, November 12, 2020, 3:19:39 PM, you wrote:


> On 11/11/20 7:55 PM, Bruce Momjian wrote:
>> On Thu, Nov 12, 2020 at 12:18:49AM +, Dagfinn Ilmari Mannsåker wrote:
>>> Bruce Momjian  writes:
>>>> I think we could do it, but it would only work if the column was output
>>>> as a single json value, and not a multi-key/value field.  I am afraid if
>>>> we tried to do it, the result would be too inconsistent to be useful.
>>> Could this be done via the support function, so that the top-level
>>> operator/function in each select list item can return a suggested column
>>> name if the relevant arguments are constants?
>> Yes, the user explicitly calling a function would be much easier to
>> predict.
>>


> I suspect this is doomed to failure. There is no guarantee that the path
> expression is going to be static or constant across rows. Say you have
> this table:


> x: foo, j: {"foo": 1, "bar": 2}

> x: bar  j: {"foo": 3, "bar": 4}


> and you say:


>   select j->>x from mytable;
> What should the column be named?

I suppose it should be named as if "as x" had been written.


> I think we'd be trying to manage a set of corner cases, and all because
> someone didn't want to put "as foo" in their query. And if we generate a
> column name in some cases and not in others there will be complaints of
> inconsistency.


> cheers


> andrew


> --
> Andrew Dunstan
> EDB: https://www.enterprisedb.com




-- 
Best regards,
Eugen Konkov





Re: Feature request: Improve allowed values for generate series

2020-11-12 Thread Eugen Konkov


Hello David,

I have a table with services; each service has a period, after which the service is auto-renewed.

Services can also be one-time. In that case their interval is '00:00:00'.

The renewal is calculated via generate_series; when the interval '00:00:00' is passed to that function
the query dies =(

Generate dates for one time service:
test=> SELECT * FROM generate_series( '2020-11-09', '2020-11-09', INTERVAL '00:00:00' );
    generate_series

 2020-11-09 00:00:00+02
(1 row)

Generate dates for auto-renew service:
test=> SELECT * FROM generate_series( '2020-11-09', '2020-11-10', INTERVAL '1 day' );
    generate_series    

 2020-11-09 00:00:00+02
 2020-11-10 00:00:00+02
(2 rows)

So it would be useful in my case, and the behavior would not be surprising.



Wednesday, November 11, 2020, 9:17:28 PM, you wrote:





On Wed, Nov 11, 2020 at 12:12 PM Eugen Konkov <kes-...@yandex.ru> wrote:





> So I request the feature of allowing a zero-size step for cases when the start point is equal to the finish

> What do you think?



Hm, probably with step 0 we should always generate a series of one value and
exit, regardless of the finish value,
because with step 0 we always stay at the current position, so there should
always be exactly one value.




How is this better than writing "VALUES (start date)"?

David J.






--
Best regards,
Eugen Konkov





Re: Proposition for autoname columns

2020-11-11 Thread Eugen Konkov
Hello Bruce,

Wednesday, November 11, 2020, 5:56:08 PM, you wrote:

> On Mon, Nov  2, 2020 at 05:05:29PM +0200, Eugen Konkov wrote:
>> Hello Pgsql-hackers,
>> 
>> When selecting data from json column it named as '?column?'
>> tucha=# select info->>'suma', docn from document order by id desc limit 5;
>>  ?column? | docn 
>> --+--
>>  665.97   | 695
>>  513.51   | 632
>>  665.97   | 4804
>>  492.12   | 4315
>>  332.98   | 1302
>> (5 rows)
>> 
>> It would be useful if the name of column will be autoassigned based on
>> name of json key. Like at next query:
>> 
>> tucha=# select info->>'suma' as suma, docn from document order by id desc 
>> limit 5;
>>   suma  | docn 
>> +--
>>  665.97 | 695
>>  513.51 | 632
>>  665.97 | 4804
>>  492.12 | 4315
>>  332.98 | 1302
>> (5 rows)
>> 
>> 
>> Would it be useful this auto assigned name for column from json?

> I think we could do it, but it would only work if the column was output
> as a single json value, and not a multi-key/value field.  I am afraid if
> we tried to do it, the result would be too inconsistent to be useful.


cool, thank you.

-- 
Best regards,
Eugen Konkov





Re: Feature request: Improve allowed values for generate series

2020-11-11 Thread Eugen Konkov
Hello Eugen,

Wednesday, November 11, 2020, 8:50:59 PM, you wrote:

> Hello Pgsql-hackers,

> Seems I fall into corner case:

test=>> SELECT * FROM generate_series( '2020-11-09', '2020-11-09', INTERVAL 
'00:00:00' );
> ERROR:  step size cannot equal zero

> But:
test=>> SELECT * FROM generate_series( '2020-11-09', '2020-11-10', INTERVAL '1 
day' );
> generate_series 
> 
>  2020-11-09 00:00:00+02
>  2020-11-10 00:00:00+02
> (2 rows)

> Here  we  start  at  2020-11-09, add interval of one day and finish at 
> 2020-11-10
> Done! series is generated.

> In  first  case  I  expect that I start at 2020-11-09, add interval of zero 
> and finish at 2020-11-09
> Everything is consistent.

test=>> SELECT * FROM generate_series( '2020-11-09', '2020-11-09', INTERVAL 
'00:00:00' );
> generate_series
> 
>  2020-11-09 00:00:00+02
> (1 row)


> So   I   feature  request  to  allow  zero size step for cases when start 
> point is equest to finish

> What do you think?



Hm, probably with step 0 we should always generate a series of one value and
exit, regardless of the finish value,
because with step 0 we always stay at the current position, so there should
always be exactly one value.

-- 
Best regards,
Eugen Konkov





Feature request: Improve allowed values for generate series

2020-11-11 Thread Eugen Konkov
Hello Pgsql-hackers,

It seems I have fallen into a corner case:

test=> SELECT * FROM generate_series( '2020-11-09', '2020-11-09', INTERVAL 
'00:00:00' );
ERROR:  step size cannot equal zero

But:
test=> SELECT * FROM generate_series( '2020-11-09', '2020-11-10', INTERVAL '1 
day' );
generate_series 

 2020-11-09 00:00:00+02
 2020-11-10 00:00:00+02
(2 rows)

Here we start at 2020-11-09, add an interval of one day, and finish at
2020-11-10.
Done! The series is generated.

In the first case I expect to start at 2020-11-09, add an interval of zero,
and finish at 2020-11-09.
Everything is consistent.

test=> SELECT * FROM generate_series( '2020-11-09', '2020-11-09', INTERVAL 
'00:00:00' );
generate_series

 2020-11-09 00:00:00+02
(1 row)


So I request the feature of allowing a zero-size step for cases when the
start point is equal to the finish.

What do you think?
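
In the meantime, a workaround I can use; this is just my own sketch and the
wrapper name generate_series_0 is hypothetical:

CREATE OR REPLACE FUNCTION generate_series_0(
  start_ts timestamptz, finish_ts timestamptz, step interval
) RETURNS SETOF timestamptz
LANGUAGE plpgsql AS $$
BEGIN
  IF step = interval '0' THEN
    RETURN NEXT start_ts;   -- with a zero step, emit the start value once
  ELSE
    RETURN QUERY SELECT generate_series(start_ts, finish_ts, step);
  END IF;
END $$;

-- SELECT * FROM generate_series_0( '2020-11-09', '2020-11-09', INTERVAL '00:00:00' );
-- returns a single row: 2020-11-09 00:00:00+02 (in a +02 session, as in the examples above)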


-- 
Best regards,
Eugen Konkov





Proposition for autoname columns

2020-11-02 Thread Eugen Konkov
Hello Pgsql-hackers,

When selecting data from a json column, it is named '?column?':
tucha=# select info->>'suma', docn from document order by id desc limit 5;
 ?column? | docn 
--+--
 665.97   | 695
 513.51   | 632
 665.97   | 4804
 492.12   | 4315
 332.98   | 1302
(5 rows)

It would be useful if the name of the column were auto-assigned based on the
name of the json key, as in the next query:

tucha=# select info->>'suma' as suma, docn from document order by id desc limit 
5;
  suma  | docn 
+--
 665.97 | 695
 513.51 | 632
 665.97 | 4804
 492.12 | 4315
 332.98 | 1302
(5 rows)


Would this auto-assigned name for a column taken from json be useful?

-- 
Best regards,
Eugen Konkov





Re: [PATCH] distinct aggregates within a window function WIP

2020-04-22 Thread Eugen Konkov
Hello Andreas,

Tuesday, April 21, 2020, 6:17:00 PM, you wrote:

> On 4/21/20 5:06 PM, Eugen Konkov wrote:
>> Hi. I read the thread.
>> 
>> Probably this fiddle will be helpful for testing:
>> 
>> https://dbfiddle.uk/?rdbms=postgres_12&fiddle=abe845142a5099d921d3729043fb8491
>> 
>> I recently encountered a problem:
>> Why Window-specific functions do not allow DISTINCT to be used within the 
>> function argument list?
>> 
>> sum( DISTINCT order_cost ) OVER ( PARTITION BY invoice_id ORDER BY 
>> invoice_id, group_id RANGE unbound preceeding and unbound following )
>> 
>> behavior is quite deterministic:
>> 
>> ORDER BY will create peers in partition
>> DISTINCT will get only one peer
>> 
>> I  resolve  my problem via two subqueries, but it seems this logic may
>> be applied to window functions (did not check this for other functions 
>> thought)

> Sorry, I do not follow. What problem did you encounter?

Lack of DISTINCT for window function SUM



-- 
Best regards,
Eugen Konkov





[PATCH] distinct aggregates within a window function WIP

2020-04-21 Thread Eugen Konkov
Hi. I read the thread. 

Probably this fiddle will be helpful for testing:

https://dbfiddle.uk/?rdbms=postgres_12&fiddle=abe845142a5099d921d3729043fb8491

I recently encountered a problem:
Why do window-specific functions not allow DISTINCT to be used within the
function argument list?

sum( DISTINCT order_cost ) OVER ( PARTITION BY invoice_id ORDER BY invoice_id,
group_id RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING )

The behavior is quite deterministic:

ORDER BY will create peers in the partition;
DISTINCT will take only one of the peers.

I resolved my problem via two subqueries, but it seems this logic could be
applied to window functions (I did not check this for other functions, though).
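
For reference, a minimal sketch of the two-subquery workaround I mean (the
table and column names here are hypothetical):

SELECT i.*, d.total
FROM invoice_items i
JOIN (
  SELECT invoice_id, sum(order_cost) AS total
  FROM (SELECT DISTINCT invoice_id, order_cost FROM invoice_items) x
  GROUP BY invoice_id
) d USING (invoice_id);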

-- 
Best regards,
Eugen Konkov





Implementation DISTINCT for window aggregate function: SUM

2020-04-18 Thread Eugen Konkov
Hello PostgreSQL-development,

Oracle has implementation:

select id, amount, sum(DISTINCT amount) over () as total
  from xx;


https://dbfiddle.uk/?rdbms=oracle_18&fiddle=8eeb60183ec9576ddb4b2c9f2874d09f


Why is this not possible in PG?
https://dbfiddle.uk/?rdbms=postgres_12&fiddle=97c05203af4c927ff9f206e164752767


Why do window-specific functions not allow DISTINCT to be used within the
function argument list?
Which problems exist?


-- 
Best regards,
Eugen Konkov





Re: Does 'instead of delete' trigger support modification of OLD

2020-01-17 Thread Eugen Konkov
Hello Bruce,

> Triggers are designed to check and modify input data, and since DELETE
> has no input data, it makes no sense.

Sorry, I am still confused. You say that DELETE has no input data,
but the doc says that it does:

https://www.postgresql.org/docs/current/trigger-definition.html
For a row-level trigger, the input data also includes ... the OLD row for ...
DELETE triggers


Also, restricting DELETE from changing the data returned by DELETE
RETURNING seems incomplete.

For example, suppose the triggers implement some compression:
 -- insert the value Z into a field
 -- compress it and actually store Zx5 in the field
 -- delete this inserted row
 -- the user should then get back that the value Z was deleted, and not Zx5.
 Correct?

 But currently the user will see Zx5, because the following code:

 OLD.value := uncompress( OLD.value );

 does not affect RETURNING =(
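
A compact sketch of that compression scenario (the view, base table and
compress()/uncompress() helpers are all hypothetical):

CREATE OR REPLACE FUNCTION z_view_delete() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
  DELETE FROM z_store WHERE id = OLD.id;
  OLD.value := uncompress( OLD.value );  -- intended to change DELETE ... RETURNING
  RETURN OLD;                            -- but RETURNING still shows the stored Zx5
END $$;

CREATE TRIGGER z_view_delete
  INSTEAD OF DELETE ON z_view
  FOR EACH ROW EXECUTE FUNCTION z_view_delete();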

-- 
Best regards,
Eugen Konkov





Re: Does 'instead of delete' trigger support modification of OLD

2019-12-04 Thread Eugen Konkov
Hello Eugen,

> https://dbfiddle.uk/?rdbms=postgres_12&fiddle=95ed9fab6870d7c4b6266ea4d93def13

Sorry, I forgot to update the link to the latest example:
https://dbfiddle.uk/?rdbms=postgres_12&fiddle=8e114ccc9f15a30ca3115cdc6c70d247


-- 
Best regards,
Eugen Konkov





Re: Does 'instead of delete' trigger support modification of OLD

2019-12-04 Thread Eugen Konkov
Hi again.

> Thinking some more on this, I now don't think a TODO makes sense, so I
> have removed it.

Please look into this example: 
https://dbfiddle.uk/?rdbms=postgres_12&fiddle=95ed9fab6870d7c4b6266ea4d93def13
This is real-life code from our production.

You can see that it is important to get correct info about the deleted
data:

-- EXPECTED app_period: ["2018-08-20", "2018-08-25")
-- ACTUAL   app_period: ["2018-08-14", )

> Triggers are designed to check and modify input data, and since DELETE
> has no input data, it makes no sense.

Please put this feature request back into the TODO list: allow
triggers to modify output data.

INSERT  -- receives data              OK   (behavior is expected)
UPDATE  -- receives and returns data  OK   (behavior is expected)
DELETE  -- returns data               FAIL (behavior is not expected)

It is inconsistent to allow modifying output data for UPDATE while
restricting it for DELETE.


Thank you

-- 
Best regards,
Eugen Konkov





Re: Does 'instead of delete' trigger support modification of OLD

2019-11-09 Thread Eugen Konkov
Hello Eugen,

Saturday, November 9, 2019, 2:05:02 PM, you wrote:

> Hello Bruce,

> Friday, November 8, 2019, 12:28:18 AM, you wrote:

>> On Thu, Nov  7, 2019 at 04:26:55PM -0500, Bruce Momjian wrote:
>>> On Thu, Nov  7, 2019 at 11:24:29AM +0200, Eugen Konkov wrote:
>>> > >> As far as allowing DELETE to modify the trigger row for RETURNING, I am
>>> > >> not sure how much work it would take to allow that, but it seems like 
>>> > >> it
>>> > >> is a valid requite, and if so, I can add it to the TODO list.
>>> > 
>>> > > Yes,  Add please into TODO the feature to "allowing DELETE to modify 
>>> > > the trigger row
>>> > > for  RETURNING".  Becuase, as I have described at first letter, without
>>> > > this the RETURNING rows **does not correspond actually deleted data**
>>> > 
>>> > > Thank you.
>>> 
>>> I have added a TODO item:
>>> 
>>>   Allow DELETE triggers to modify rows, for use by RETURNING 

>> Thinking some more on this, I now don't think a TODO makes sense, so I
>> have removed it.

>> Triggers are designed to check and modify input data, and since DELETE
>> has no input data, it makes no sense.  In the attached SQL script, you
>> can see that only the BEFORE INSERT trigger fires, so there is no way
>> even with INSERT to change what is passed after the write to RETURNING.
>> What you can do is to modify the returning expression, which is what I
>> have done for the last query --- hopefully that will help you.

> You  lost  my  idea.  First  of  all  I am talking about views and an
> INSTEAD OF triggers.

> INSERT/UPDATE operation present which data is added into DB
> DELETE operation present which data is deleted from DB
> (in  my  case I am not deleted exact that data which matched by where.
> See example below)

> Thus INSTEAD OF INSERT/UPDATE triggers are designed to check and modify input 
> data
> eg.  we can insert/update something different then incoming data (here
> we are modifying NEW)

> Thus INSTEAD OF DELETE triggers are designed to check and delete **output** 
> data
> eg. we can delete something different then underlaid data (here we are
> modifying OLD)

> for example, we have next data: 1 2 3 4 5 6 7 8
> it  is not presented by eight rows, but instead it is presented as one
> row with range data type: [1..8]

> When we insert data we will not get new row, we change current:
> insert into table values ( 9 ) will result
> [1..9]
> instead of
> [1..8]
> 9

> So  lets  look  into INSTEAD OF DELETE trigger when we deleting
> data:
> delete from table where x in ( 5, 6, 7 );
> after deleting this we should get:
> [1..4]
> [8..9]

> thus
> with t1 as ( delete from table where x in ( 5, 6, 7 ) returning * )
> select * from t1
> should return:
> [5..7]
> instead of
> [1..9]
> because we does not delete ALL [1..9], we just delete ONLY [5..7]

> Thus I need to change matched row OLD.x from [1..9] to [5..7]



> Please  reread  my  first  letter. There I describe more real life example
> when I am manipulating bi-temporal data.

> where some value exist at given period:
> id | app_period | value
> 7 [2019-01-01, 2019-04-05)207

> And I am deleting third month: [ 2019-03-01, 2019-04-01 )
> with  t1  as  (  delete  from  table where app_period && [ 2019-03-01,
> 2019-04-01 ) returning * )
> select * from t1;
> 7   [ 2019-03-01, 2019-04-01 )   207

> select * from table;
> 7   [ 2019-01-01, 2019-03-01 )   207
> 7   [ 2019-04-01, 2019-04-05 )   207

Here, when the data is deleted, the following row is matched:
   7 | [2019-01-01, 2019-04-05) | 207
and assigned to OLD.
Because I am deleting data ONLY from the [ 2019-03-01, 2019-04-01 ) period,
I am required to change OLD:

OLD.app_period = [ 2019-03-01, 2019-04-01 )

So I should get:
  7 | [ 2019-03-01, 2019-04-01 ) | 207
instead of
  7 | [2019-01-01, 2019-04-05) | 207





-- 
Best regards,
Eugen Konkov





Re: Does 'instead of delete' trigger support modification of OLD

2019-11-09 Thread Eugen Konkov
Hello Bruce,

Friday, November 8, 2019, 12:28:18 AM, you wrote:

> On Thu, Nov  7, 2019 at 04:26:55PM -0500, Bruce Momjian wrote:
>> On Thu, Nov  7, 2019 at 11:24:29AM +0200, Eugen Konkov wrote:
>> > >> As far as allowing DELETE to modify the trigger row for RETURNING, I am
>> > >> not sure how much work it would take to allow that, but it seems like it
>> > >> is a valid requite, and if so, I can add it to the TODO list.
>> > 
>> > > Yes,  Add please into TODO the feature to "allowing DELETE to modify the 
>> > > trigger row
>> > > for  RETURNING".  Becuase, as I have described at first letter, without
>> > > this the RETURNING rows **does not correspond actually deleted data**
>> > 
>> > > Thank you.
>> 
>> I have added a TODO item:
>> 
>>   Allow DELETE triggers to modify rows, for use by RETURNING 

> Thinking some more on this, I now don't think a TODO makes sense, so I
> have removed it.

> Triggers are designed to check and modify input data, and since DELETE
> has no input data, it makes no sense.  In the attached SQL script, you
> can see that only the BEFORE INSERT trigger fires, so there is no way
> even with INSERT to change what is passed after the write to RETURNING.
> What you can do is to modify the returning expression, which is what I
> have done for the last query --- hopefully that will help you.

You lost my idea. First of all, I am talking about views and
INSTEAD OF triggers.

An INSERT/UPDATE operation represents which data is added to the DB;
a DELETE operation represents which data is deleted from the DB
(in my case I am not deleting exactly the data matched by the WHERE;
see the example below).

Thus INSTEAD OF INSERT/UPDATE triggers are designed to check and modify input
data, e.g. we can insert/update something different than the incoming data
(here we are modifying NEW).

Thus INSTEAD OF DELETE triggers are designed to check and delete **output** data,
e.g. we can delete something different than the underlying data (here we are
modifying OLD).

For example, we have the following data: 1 2 3 4 5 6 7 8.
It is not represented by eight rows; instead it is represented as one
row with a range data type: [1..8].

When we insert data we will not get a new row; we change the current one:
insert into table values ( 9 ) will result in
[1..9]
instead of
[1..8]
9

So let's look at an INSTEAD OF DELETE trigger when we are deleting
data:
delete from table where x in ( 5, 6, 7 );
After deleting this we should get:
[1..4]
[8..9]

thus
with t1 as ( delete from table where x in ( 5, 6, 7 ) returning * )
select * from t1
should return:
[5..7]
instead of
[1..9]
because we do not delete ALL of [1..9], we delete ONLY [5..7].

Thus I need to change the matched row's OLD.x from [1..9] to [5..7].



Please reread my first letter. There I describe a more real-life example,
where I am manipulating bi-temporal data.

Some value exists for a given period:
id | app_period               | value
 7 | [2019-01-01, 2019-04-05) | 207

And I am deleting the third month: [ 2019-03-01, 2019-04-01 )
with t1 as ( delete from table where app_period && [ 2019-03-01,
2019-04-01 ) returning * )
select * from t1;
 7 | [ 2019-03-01, 2019-04-01 ) | 207

select * from table;
 7 | [ 2019-01-01, 2019-03-01 ) | 207
 7 | [ 2019-04-01, 2019-04-05 ) | 207




-- 
Best regards,
Eugen Konkov





Re: Does 'instead of delete' trigger support modification of OLD

2019-11-07 Thread Eugen Konkov
Hello Eugen,

Thursday, November 7, 2019, 11:20:32 AM, you wrote:

>> I looked in the CREATE TRIGGER manual page and found this:

>> https://www.postgresql.org/docs/12/sql-createtrigger.html
>> If the trigger fires before or instead of the event, the trigger
>> can skip the operation for the current row, or change the row
>> being inserted (for INSERT and UPDATE operations only).

>> I don't see the "(for INSERT and UPDATE operations only)" language in
>> the main trigger documentation,
>> https://www.postgresql.org/docs/current/trigger-definition.html.  I have
>> written the attached patch to fix that.  Does that help?

> No.   If   we document that PG does not allow to modify OLD at instead
> of  trigger,  the  we can not implement that. Probably we can put note
> that  "currently  modification of the trigger row for RETURNING is not
> implemented"

Sorry, a typo. Please read:
"currently modification of the trigger row for DELETE RETURNING is not
implemented"


>> As far as allowing DELETE to modify the trigger row for RETURNING, I am
>> not sure how much work it would take to allow that, but it seems like it
>> is a valid requite, and if so, I can add it to the TODO list.

> Yes,  Add please into TODO the feature to "allowing DELETE to modify the 
> trigger row
> for  RETURNING".  Becuase, as I have described at first letter, without
> this the RETURNING rows **does not correspond actually deleted data**

> Thank you.




-- 
Best regards,
Eugen Konkov





Re: Does 'instead of delete' trigger support modification of OLD

2019-11-07 Thread Eugen Konkov
> I looked in the CREATE TRIGGER manual page and found this:

> https://www.postgresql.org/docs/12/sql-createtrigger.html
> If the trigger fires before or instead of the event, the trigger
> can skip the operation for the current row, or change the row
> being inserted (for INSERT and UPDATE operations only).

> I don't see the "(for INSERT and UPDATE operations only)" language in
> the main trigger documentation,
> https://www.postgresql.org/docs/current/trigger-definition.html.  I have
> written the attached patch to fix that.  Does that help?

No. If we document that PG does not allow modifying OLD in an INSTEAD OF
trigger, then we cannot implement that. Probably we can put in a note
that "currently modification of the trigger row for RETURNING is not
implemented".

> As far as allowing DELETE to modify the trigger row for RETURNING, I am
> not sure how much work it would take to allow that, but it seems like it
> is a valid requite, and if so, I can add it to the TODO list.

Yes, please add to the TODO the feature "allow DELETE to modify the
trigger row for RETURNING". Because, as I described in my first letter, without
this the RETURNING rows **do not correspond to the actually deleted data**.

Thank you.

-- 
Best regards,
Eugen Konkov





Does 'instead of delete' trigger support modification of OLD

2019-10-29 Thread Eugen Konkov
Hi.

This is not clear from doc, so I have asked on IRC too.

from the DOC: https://www.postgresql.org/docs/current/trigger-definition.html
In the case of  INSTEAD OF triggers, the possibly-modified row returned by each 
trigger becomes the input to the next trigger

I modify the OLD row, thus I expect to get the modified version when running the next query:

WITH t1 AS( delete from abc returning *)
select * from t1;

fiddle: 
https://dbfiddle.uk/?rdbms=postgres_12&fiddle=637730305f66bf531794edb09a462c95

> https://www.postgresql.org/docs/current/trigger-definition.html
A row-level INSTEAD OF trigger should either return NULL to indicate that it 
did not modify any data from the view's underlying base tables,
or it should return the view row that was passed in (the NEW row for INSERT and 
UPDATE operations, or the OLD row for DELETE operations).
A nonnull return value is used to signal that the trigger performed the 
necessary data modifications in the view.
This will cause the count of the number of rows affected by the command to be 
incremented. For INSERT and UPDATE operations, the trigger may
modify the NEW row before returning it. This will change the data returned by 
INSERT RETURNING or UPDATE RETURNING,
and is useful when the view will not show exactly the same data that was 
provided.

But I still do not understand. The doc does not explicitly prohibit modification
of OLD and has no examples for the DELETE RETURNING case.

So I want to ask to clarify the doc a bit.
If this is prohibited, why is it prohibited? Was there any discussion on this?
If it is not prohibited, is it simply not implemented for DELETE RETURNING
queries? If so, is it left for later?

I have the following use case.
I am implementing bi-temporal tables. The table has the columns: id, app_period,
value.
For example I have this data: 7, '[2019-01-01, 2020-01-01)', 130.
You can imagine this as that row having its value for each day of the year.
Now I want to delete this value for the month of May. I set a special variable to
the period '[2019-05-01,2019-06-01)' and then delete:

select app_period( '[2019-05-01,2019-06-01)' );
WITH t1 AS( delete from abc returning *)
select * from t1;

The algorithm of deletion is as follows (a trigger sketch follows the list):
1. Deactivate the target row
   7, '[2019-01-01, 2020-01-01)', 130
2. If the target row has a wider app_period, then we insert the surviving data back:
  NOT '[2019-05-01,2019-06-01)' @> '[2019-01-01, 2020-01-01)'
INSERT INTO abc ( id, app_period, value ) values
( 7, '[2019-01-01,2019-05-01)', 130 ),
( 7, '[2019-06-01,2020-01-01)', 130 );
3. OLD.app_period := OLD.app_period * app_period();
'[2019-01-01, 2020-01-01)' * '[2019-05-01,2019-06-01)' -->
'[2019-05-01,2019-06-01)'
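
Here is a sketch of the INSTEAD OF DELETE trigger implementing the steps above
(assuming a view abc over a hypothetical base table abc_base, and an app_period()
getter for the session period; "deactivation" is shown as a plain delete for brevity):

CREATE OR REPLACE FUNCTION abc_instead_of_delete() RETURNS trigger
LANGUAGE plpgsql AS $$
DECLARE
  cut tstzrange;
BEGIN
  cut := OLD.app_period * app_period();   -- the slice actually being deleted

  -- 1. Deactivate the matched row
  DELETE FROM abc_base WHERE id = OLD.id AND app_period = OLD.app_period;

  -- 2. Re-insert the parts of the original period that survive the deletion
  INSERT INTO abc_base ( id, app_period, value )
  SELECT OLD.id, r, OLD.value
  FROM unnest( ARRAY[
         tstzrange( lower(OLD.app_period), lower(cut) ),
         tstzrange( upper(cut), upper(OLD.app_period) )
       ] ) AS r
  WHERE NOT isempty(r);

  -- 3. Report only the deleted slice through DELETE ... RETURNING
  OLD.app_period := cut;
  RETURN OLD;
END $$;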

Because only the value for the specified period is deleted, I expect the
following result from the query above:
 ( 7, '[2019-05-01,2019-06-01)', 130 )

But despite OLD having been modified, the actual result is:
 ( 7, '[2019-01-01,2020-01-01)', 130 )
You can see that this is the original data.

So, does INSTEAD OF DELETE support modification of the row?

-- 
Best regards,
Eugen Konkov





Re: Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'

2019-10-28 Thread Eugen Konkov
> x IS NOT DISTINCT FROM y

> I'm vaguely imagining

> x = {magic} y

> where unlike Eugen's suggestion, "=" is the real name of the underlying
> comparison operator.  For dump/restore this could be spelled verbosely
> as

> x OPERATOR(someplace.=) {magic} y

> The hard part is to figure out some {magic} annotation that is both
> short and unambiguous.  We have to cover the IS DISTINCT variant, too.

I am from the Perl world. There are == and != operators.
Here is a short snippet of code:

my $x = undef;
my $y = 'some value';
my $z = undef;
$x == $y; # FALSE
$x == $z; # TRUE
$x != $y ; # TRUE
$x != $z;  # FALSE


> x OPERATOR(someplace.=) {magic} y
If we were to follow this form, then IS DISTINCT FROM would be written as:
x =! y
This looks unusual, because JavaScript (among others) follows the != form, so I
hope it would be easier to detect/implement the != form, which I read as:
negate the result of the comparison.



Can we supply additional parameters to OPERATOR via double
parentheses (double parentheses are another crazy idea)?
x =(( 'NULL' )) y

or

x OPERATOR(someplace.=, magic ) y
which would be internally converted ( I suppose ) to OPERATOR(
someplace.=, x, y, magic )

-- 
Best regards,
Eugen Konkov





Proposition to use '==' as synonym for 'IS NOT DISTINCT FROM'

2019-10-26 Thread Eugen Konkov
Hi.

I have noticed that it would be cool to use '==' in place of 'IS NOT
DISTINCT FROM'.
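
For context, the behavior being abbreviated (standard behavior today):

SELECT NULL = NULL                    AS with_equals,  -- NULL
       NULL IS NOT DISTINCT FROM NULL AS with_indf,    -- true
       1 IS DISTINCT FROM NULL        AS with_idf;     -- true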

What do you think about this crazy idea?

-- 
Best regards,
Eugen Konkov





Re: Request for improvement: Allow to push (t.o).id via GROUP BY ocd.o

2019-07-12 Thread Eugen Konkov
And, probably, the next query belongs to the same issue:

SELECT
--next_ots.group_cost AS next_cost,
(SELECT next_ots FROM order_total_suma( next_range ) next_ots 
WHERE next_ots.order_id = ots.order_id  AND next_ots.consumed_period @> 
(ots.o).billed_to
) AS next_suma,  -- << this takes 111ms only
ots.* FROM (
SELECT  
tstzrange(
  NULLIF( (ots.o).billed_to, 'infinity' ),
  NULLIF( (ots.o).billed_to +p.interval, 'infinity' )
) as next_range,
 ots.*
FROM order_total_suma() ots
LEFT JOIN period p ON p.id = (ots.o).period_id
) ots
--LEFT JOIN order_total_suma( next_range ) next_ots ON next_ots.order_id = 6154 
 --<< this is fine
--  AND next_ots.consumed_period @> (ots.o).billed_to 
--LEFT JOIN order_total_suma( next_range ) next_ots ON next_ots.order_id = 
ots.order_id  --<< this takes 11500ms
--  AND next_ots.consumed_period @> (ots.o).billed_to 
WHERE ots.order_id IN ( 6154, 10805 )


The id is not pushed down through the LEFT JOIN.


I have attached the plans.



-- 
Best regards,
Eugen Konkov

plans
Description: Binary data


Re: Request for improvement: Allow to push (t.o).id via GROUP BY ocd.o

2019-07-12 Thread Eugen Konkov
Hello

To my mind it may be done, because `id` is the primary key of the `o` table.

Friday, July 12, 2019, 1:04:27 PM, you wrote:

> Hello PostgreSQL-development,

> something's stopping the planner from being able to deduce that
> (t.o).id is safe to push through a GROUP BY ocd.o

> SELECT * FROM (
> SELECT
> sum( t.group_suma ) OVER( PARTITION BY t.id ) 
> AS total_suma,
> --  sum( t.group_suma ) OVER( PARTITION
> BY (t.o).id ) AS total_suma,  -- For any WHERE this takes 2700ms
> *
> FROM (
> SELECT
>  sum( ocd.item_cost ) AS group_cost,
>  sum( ocd.item_suma ) AS group_suma,
>  max( (ocd.ic).consumed ) AS consumed,
>  (ocd.ic).consumed_period,
>  ocd.o,
>  (ocd.o).id
> FROM order_cost_details( tstzrange(
> '2019-04-01', '2019-05-01' ) ) ocd
> GROUP BY ocd.o, (ocd.o).id, 
> (ocd.ic).consumed_period
> ) t
> ) t
> WHERE t.id = 6154 AND t.consumed_period @>
> '2019-04-01'::timestamptz   -- This takes 2ms
> --  WHERE (t.o).id = 6154 AND t.consumed_period @>
> '2019-04-01'::timestamptz   -- This takes 2700ms


> More info is here: https://stackoverflow.com/q/57003113/4632019




-- 
Best regards,
Eugen Konkov





Request for improvement: Allow to push (t.o).id via GROUP BY ocd.o

2019-07-12 Thread Eugen Konkov
Hello PostgreSQL-development,

Something is stopping the planner from being able to deduce that (t.o).id is
safe to push through a GROUP BY ocd.o:

SELECT * FROM (
    SELECT
        sum( t.group_suma ) OVER( PARTITION BY t.id )     AS total_suma,
        -- sum( t.group_suma ) OVER( PARTITION BY (t.o).id ) AS total_suma,  -- For any WHERE this takes 2700ms
        *
    FROM (
        SELECT
            sum( ocd.item_cost )      AS group_cost,
            sum( ocd.item_suma )      AS group_suma,
            max( (ocd.ic).consumed )  AS consumed,
            (ocd.ic).consumed_period,
            ocd.o,
            (ocd.o).id
        FROM order_cost_details( tstzrange( '2019-04-01', '2019-05-01' ) ) ocd
        GROUP BY ocd.o, (ocd.o).id, (ocd.ic).consumed_period
    ) t
) t
WHERE t.id = 6154 AND t.consumed_period @> '2019-04-01'::timestamptz      -- This takes 2ms
-- WHERE (t.o).id = 6154 AND t.consumed_period @> '2019-04-01'::timestamptz  -- This takes 2700ms


More info is here: https://stackoverflow.com/q/57003113/4632019

-- 
Best regards,
Eugen Konkov





Re: BUG #15646: Inconsistent behavior for current_setting/set_config

2019-02-26 Thread Eugen Konkov
Hello

The documentation does not describe how the 'false' value for the 'is_local'
parameter interacts with transactions.



Do I understand correctly?

https://www.postgresql.org/docs/11/functions-admin.html#FUNCTIONS-ADMIN-SET
>set_config sets the parameter setting_name to new_value. If is_local is true, 
>the new value will only apply to the current transaction. If you want the new 
>value to apply for the current session, use false instead. 

If I use 'false', then the transaction will have no effect on it, because I set
the value for the session?

tucha=> select current_setting( 'my.app_period', true );
 current_setting 
-
 
(1 row)

tucha=> begin;
BEGIN
tucha=> select set_config( 'my.app_period', tstzrange( '-infinity', 'infinity' 
)::text, false );
  set_config  
--
 [-infinity,infinity)
(1 row)

tucha=> rollback;
ROLLBACK

NOTICE: the transaction is rolled back and the session value is rolled back too,
despite the fact that I did not pass 'true' for is_local:

tucha=> select current_setting( 'my.app_period', true );
 current_setting 
-
 
(1 row)

tucha=> begin;
BEGIN
tucha=> select set_config( 'my.app_period', tstzrange( '-infinity', 'infinity' 
)::text, false );
  set_config  
--
 [-infinity,infinity)
(1 row)

tucha=> commit;
COMMIT

When I commit, the value is applied to the session:
tucha=> select current_setting( 'my.app_period', true );
   current_setting
--
 [-infinity,infinity)
(1 row)



-- 
Best regards,
Eugen Konkov




Re: BUG #15646: Inconsistent behavior for current_setting/set_config

2019-02-26 Thread Eugen Konkov
Hello,

Not sure whether I should open a new issue or continue this one.

select set_config( 'my.some_conf', 'value', true );

does not issue a warning if there is no transaction in progress.

I ran into this problem when calling a stored function which makes use
of such configuration settings, and I missed that the call had no effect because
there was no transaction in progress.
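
For comparison, the statement form does warn in that situation (standard
behavior today), while the function form stays silent:

-- outside of any transaction block:
SET LOCAL my.some_conf = 'value';
-- WARNING:  SET LOCAL can only be used in transaction blocks

SELECT set_config( 'my.some_conf', 'value', true );
-- no warning, and the setting silently has no lasting effect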

-- 
Best regards,
Eugen Konkov