Re: [HACKERS] Secondary index access optimizations

2017-11-06 Thread Konstantin Knizhnik

On 11/06/2017 04:27 AM, Thomas Munro wrote:

On Fri, Sep 8, 2017 at 3:58 AM, Konstantin Knizhnik wrote:

Updated version of the patch is attached to this mail.
Also I added support of date type to operator_predicate_proof to be able to
imply (logdate <= '2017-03-31') from (logdate < '2017-04-01') .

Hi Konstantin,

Is there any reason why you don't want to split this into two separate
proposals?  One for remove_restrictions_implied_by_constraints() and
one for the operator_predicate_proof() changes.

Your v3 patch breaks the new partition_join test (the recently
committed partition-wise join stuff), as far as I can tell in a good
way.  Can you please double check those changes and post an updated
patch?


Hi Thomas.

The primary idea of this patch was to provide more efficient plans for queries 
on partitioned tables: remove_restrictions_implied_by_constraints() removes 
redundant predicate checks.
But it does not work for standard Postgres 10 partitioning, because there the 
partition constraints are expressed as intervals with an open upper boundary, 
and the original version of operator_predicate_proof() is not able to handle 
this case.

I have explained this problem in my previous e-mails in this thread.
This is why I have changed operator_predicate_proof() to correctly handle this 
case.

If you think this patch should be split in two, OK: I can do it.
I just want to note that without the operator_predicate_proof() change it may 
give no benefit for standard partitioning, which I expect to be the most 
popular use case where this optimization can have an effect.
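
To make the standard-partitioning case concrete, here is a sketch reconstructed 
from the examples elsewhere in this thread (the comments describe the behavior I 
observe; they are illustrative, not output copied from the patch):

create table bt (k integer, v integer) partition by range (k);
create table dt1 partition of bt for values from (1) to (10001);
create table dt2 partition of bt for values from (10001) to (20001);
create index dti1 on dt1(v);
create index dti2 on dt2(v);

-- dt2's implicit constraint is (k >= 10001 AND k < 20001).  Without the
-- operator_predicate_proof() change, the planner cannot derive that this
-- implies (k <= 20000), so the scan of dt2 keeps a redundant
-- Filter: (k <= 20000):
explain select * from bt where k between 1 and 20000 and v = 100;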


Concerning the broken partition_join test: it is an "expected" failure, since 
my patch removes redundant checks from the plans.
So the only required action is to update the expected output file.
Attached please find the updated patch.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 4339bbf..0931af1 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -626,12 +626,12 @@ EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE c1 IS NULL;-- Nu
Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" WHERE (("C 1" IS NULL))
 (3 rows)
 
-EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE c1 IS NOT NULL;-- NullTest
- QUERY PLAN  
--
+EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE c1 IS NOT NULL and c3 is not null;-- NullTest
+QUERY PLAN
+--
  Foreign Scan on public.ft1 t1
Output: c1, c2, c3, c4, c5, c6, c7, c8
-   Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" WHERE (("C 1" IS NOT NULL))
+   Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" WHERE ((c3 IS NOT NULL))
 (3 rows)
 
 EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE round(abs(c1), 0) = 1; -- FuncExpr
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index ddfec79..878bfc7 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -292,7 +292,7 @@ RESET enable_nestloop;
 EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE t1.c1 = 1; -- Var, OpExpr(b), Const
 EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE t1.c1 = 100 AND t1.c2 = 0; -- BoolExpr
 EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE c1 IS NULL;-- NullTest
-EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE c1 IS NOT NULL;-- NullTest
+EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE c1 IS NOT NULL and c3 is not null;-- NullTest
 EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE round(abs(c1), 0) = 1; -- FuncExpr
 EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE c1 = -c1;  -- OpExpr(l)
 EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE 1 = c1!;   -- OpExpr(r)
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index a5c1b68..082d1cc 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -346,6 +346,7 @@ set_rel_size(PlannerInfo *root, RelOptInfo *rel,
 		switch (rel->rtekind)
 		{
 			case RTE_RELATION:
+remove_restrictions_implied_by_constraints(root, rel, rte);
 if (rte->relkind == RELKIND_FOREIGN_TABLE)
 {
 	/* Foreign table */
@@ -1312,6 +1313,7 @@ set_append_rel_size(PlannerInfo *root, RelOptInfo *rel,
 			set_dummy_rel_pathlist(childrel);
 			

Re: [HACKERS] Secondary index access optimizations

2017-11-05 Thread Thomas Munro
On Fri, Sep 8, 2017 at 3:58 AM, Konstantin Knizhnik wrote:
> Updated version of the patch is attached to this mail.
> Also I added support of date type to operator_predicate_proof to be able to
> imply (logdate <= '2017-03-31') from (logdate < '2017-04-01') .

Hi Konstantin,

Is there any reason why you don't want to split this into two separate
proposals?  One for remove_restrictions_implied_by_constraints() and
one for the operator_predicate_proof() changes.

Your v3 patch breaks the new partition_join test (the recently
committed partition-wise join stuff), as far as I can tell in a good
way.  Can you please double check those changes and post an updated
patch?

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Secondary index access optimizations

2017-09-07 Thread Konstantin Knizhnik



On 07.09.2017 13:00, Thomas Munro wrote:

On Sun, Sep 3, 2017 at 4:34 AM, Konstantin Knizhnik wrote:

Thank you for review.
I attached new version of the patch with
remove_restrictions_implied_by_constraints() function.
Concerning failed tests - this is actually result of this optimization:
extra filter conditions are removed from query plans.
Sorry, I have not included updated version of expected test output files to
the patch.
Now I did it.

A regression test under contrib/postgres_fdw now fails:

- Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T
1" WHERE (("C 1" IS NOT NULL))
+ Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1"

(("C 1" IS NOT NULL)) is indeed redundant in that case, because column
"C 1" was declared to be NOT NULL.  But:

1.  Do we want to go this far?  Note that this is not involving
inheritance and constraint exclusion.  I don't immediately see any
reason why not, but I'm not sure.

2.  If yes, then this postgres_fdw test should be changed, because I
think it was trying to demonstrate that IS NOT NULL expressions are
sent to remote databases -- it would need to be changed so that it
tries that with a column that is actually nullable.

I do not see any reason why we should disable this optimization in the FDW 
case.

And disabling it would require some extra effort...

So I have updated the postgres_fdw test, replacing the query
 SELECT * FROM ft1 t1 WHERE c1 IS NOT NULL;
with
 SELECT * FROM ft1 t1 WHERE c1 IS NOT NULL and c3 is not null;

Now it checks two things:
1. That the NOT NULL check is passed to the foreign server for a nullable 
column (c3).
2. That the NOT NULL check is excluded from the query execution plan when it 
can be omitted because the column is declared NOT NULL.


An updated version of the patch is attached to this mail.
Also, I added support of the date type to operator_predicate_proof() to be 
able to imply (logdate <= '2017-03-31') from (logdate < '2017-04-01').


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index c19b331..a9cce14 100644
--- a/contrib/postgres_fdw/expected/postgres_fdw.out
+++ b/contrib/postgres_fdw/expected/postgres_fdw.out
@@ -626,12 +626,12 @@ EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE c1 IS NULL;-- Nu
Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" WHERE (("C 1" IS NULL))
 (3 rows)
 
-EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE c1 IS NOT NULL;-- NullTest
- QUERY PLAN  
--
+EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE c1 IS NOT NULL and c3 is not null;-- NullTest
+QUERY PLAN
+--
  Foreign Scan on public.ft1 t1
Output: c1, c2, c3, c4, c5, c6, c7, c8
-   Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" WHERE (("C 1" IS NOT NULL))
+   Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1" WHERE ((c3 IS NOT NULL))
 (3 rows)
 
 EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE round(abs(c1), 0) = 1; -- FuncExpr
diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql
index 5f65d9d..2d816db 100644
--- a/contrib/postgres_fdw/sql/postgres_fdw.sql
+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
@@ -292,7 +292,7 @@ RESET enable_nestloop;
 EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE t1.c1 = 1; -- Var, OpExpr(b), Const
 EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE t1.c1 = 100 AND t1.c2 = 0; -- BoolExpr
 EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE c1 IS NULL;-- NullTest
-EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE c1 IS NOT NULL;-- NullTest
+EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE c1 IS NOT NULL and c3 is not null;-- NullTest
 EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE round(abs(c1), 0) = 1; -- FuncExpr
 EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE c1 = -c1;  -- OpExpr(l)
 EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE 1 = c1!;   -- OpExpr(r)
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 2d7e1d8..5de67ce 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -344,6 +344,7 @@ set_rel_size(PlannerInfo *root, RelOptInfo *rel,
 		switch (rel->rtekind)
 		{
 			case RTE_RELATION:
+remove_restrictions_implied_by_constraints(root, rel, rte);
 if (rte->relkind == RELKIND_FOREIGN_TABLE)
 {
 	/* 

Re: [HACKERS] Secondary index access optimizations

2017-09-07 Thread Thomas Munro
On Sun, Sep 3, 2017 at 4:34 AM, Konstantin Knizhnik wrote:
> Thank you for review.
> I attached new version of the patch with
> remove_restrictions_implied_by_constraints() function.
> Concerning failed tests - this is actually result of this optimization:
> extra filter conditions are removed from query plans.
> Sorry, I have not included updated version of expected test output files to
> the patch.
> Now I did it.

A regression test under contrib/postgres_fdw now fails:

- Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T
1" WHERE (("C 1" IS NOT NULL))
+ Remote SQL: SELECT "C 1", c2, c3, c4, c5, c6, c7, c8 FROM "S 1"."T 1"

(("C 1" IS NOT NULL)) is indeed redundant in that case, because column
"C 1" was declared to be NOT NULL.  But:

1.  Do we want to go this far?  Note that this is not involving
inheritance and constraint exclusion.  I don't immediately see any
reason why not, but I'm not sure.

2.  If yes, then this postgres_fdw test should be changed, because I
think it was trying to demonstrate that IS NOT NULL expressions are
sent to remote databases -- it would need to be changed so that it
tries that with a column that is actually nullable.

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Secondary index access optimizations

2017-09-05 Thread Konstantin Knizhnik



On 05.09.2017 04:02, Amit Langote wrote:

Like Thomas, I'm not so sure about the whole predtest.c patch.  The core
logic in operator_predicate_proof() should be able to conclude that, say,
k < 21 is implied by k <= 20, which you are trying to address with some
special case code.  If there is still a problem you think needs to be fixed
here, a better place to look would be somewhere around get_btree_test_op().


Frankly speaking, I also do not like this part of my patch.
I will be pleased if you or somebody else can propose a better solution.
I do not understand how get_btree_test_op() can help here.

Yes, k < 21 is implied by k <= 20. That follows from the generic properties 
of the < and <= operators.
But I need to prove something different: given the table partition 
constraint (k < 21), I want to remove the predicate (k <= 20) from the query.
In other words, operator_predicate_proof() should be able to conclude 
that (k <= 20) is implied by (k < 21).
But that is true only for integer types, not for floating-point types, and 
the Postgres operator definitions provide no way to determine that a 
user-defined type is integer-like, i.e. has discrete values for which such a 
conclusion holds.
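
To illustrate why this holds only for discrete types, here is a quick 
counterexample with a fractional value (an illustration of mine, not part of 
the patch):

-- For integers, (k < 21) implies (k <= 20): no integer lies strictly
-- between 20 and 21.  For a fractional type a counterexample exists:
select 20.5 < 21 as satisfies_constraint,   -- t
       20.5 <= 20 as satisfies_predicate;   -- f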


Why do I think this is important? Certainly, it is possible to rewrite the 
query as (k < 21), and then no changes in operator_predicate_proof() are 
needed.
But assume the most natural use case: I have some positive integer key and I 
want to range-partition the table by this key, for example with an interval 
of 10000.
Currently the standard PostgreSQL partitioning mechanism requires intervals 
to be specified with an open upper boundary.
So if I want the first partition to contain the interval [1,10000], the 
second [10001,20000], and so on, I have to create the partitions this way:


create table bt (k integer, v integer) partition by range (k);
create table dt1 partition of bt for values from (1) to (10001);
create table dt2 partition of bt for values from (10001) to (20001);
...

If I want to write a query inspecting the data of a particular partition, 
then most likely I will use the BETWEEN operator:


SELECT * FROM bt WHERE k BETWEEN 1 AND 10000;

But right now operator_predicate_proof() is not able to conclude that the 
predicate (k BETWEEN 1 AND 10000), transformed to (k >= 1 AND k <= 10000), 
is equivalent to (k >= 1 AND k < 10001),

which is used as the partition constraint.
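
As a workaround, the predicate can be phrased in exactly the half-open shape 
of the partition constraint. A sketch of the contrast, using the bt table 
defined above and assuming the remove_restrictions_implied_by_constraints() 
part of the patch is applied:

-- discarded even by the stock prover, because the qual matches dt1's
-- constraint (k >= 1 AND k < 10001) operator for operator:
select * from bt where k >= 1 and k < 10001 and v = 100;
-- the natural formulation, which needs the proposed integer-aware proof:
select * from bt where k between 1 and 10000 and v = 100;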

Another very popular use case (mentioned, for example, in the PostgreSQL 
partitioning documentation: 
https://www.postgresql.org/docs/10/static/ddl-partitioning.html)

is using a date as the partition key:

CREATE TABLE measurement (
    city_id     int not null,
    logdate     date not null,
    peaktemp    int,
    unitsales   int
) PARTITION BY RANGE (logdate);


CREATE TABLE measurement_y2006m03 PARTITION OF measurement
    FOR VALUES FROM ('2006-03-01') TO ('2006-04-01');


Assume that now I want to get the measurements for March.

There are three ways to write this query:

select * from measurement where extract(month from logdate) = 3;
select * from measurement where logdate between '2006-03-01' AND '2006-03-31';
select * from measurement where logdate >= '2006-03-01' AND logdate < '2006-04-01';


Right now an optimal query plan is constructed only for the last of these 
queries.
Unfortunately, my patch does not yet cover the date type.
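
For what it is worth, the date case is analogous to the integer one, because 
date is a discrete type with one-day granularity. A small sanity check of the 
equivalence the prover would need (my illustration):

-- (d <= '2006-03-31') and (d < '2006-04-01') agree for every date,
-- since no date lies strictly between '2006-03-31' and '2006-04-01':
select d <= date '2006-03-31' as closed_form,
       d < date '2006-04-01' as open_form
from (values (date '2006-03-30'),
             (date '2006-03-31'),
             (date '2006-04-01')) as v(d);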



--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [HACKERS] Secondary index access optimizations

2017-09-04 Thread Amit Langote
On 2017/09/04 20:46, Konstantin Knizhnik wrote:
> On 04.09.2017 12:59, Amit Langote wrote:
>> On 2017/09/04 18:19, Konstantin Knizhnik wrote:
>>> Concerning your suggestion to merge check_index_predicates() and
>>> remove_restrictions_implied_by_constraints() functions: may be it can be
>>> done, but frankly speaking I do not see much sense in it - there are too
>>> much differences between this functions and too few code reusing.
>> Maybe, you meant to address Thomas here. :)  Reading his comment again, I
>> too am a bit concerned about destructively modifying the input rel's
>> baserestrictinfo.  There should at least be a comment that that's being
>> done.
>
> But I have considered Thomas comment and extracted code updating
> relation's baserestrictinfo from
> relation_excluded_by_constraints() to
> remove_restrictions_implied_by_constraints() function. It was included in
> new version of the patch.

Sorry, I hadn't noticed the new patch.

I was confused because I didn't suggest "to merge check_index_predicates()
and remove_restrictions_implied_by_constraints() functions".  Perhaps, the
wording in my previous message wasn't clear.

Looking at the new patch, the changes regarding
remove_restrictions_implied_by_constraints() look fine.

Like Thomas, I'm not so sure about the whole predtest.c patch.  The core
logic in operator_predicate_proof() should be able to conclude that, say,
k < 21 is implied by k <= 20, which you are trying to address with some
special case code.  If there is still a problem you think needs to be fixed
here, a better place to look would be somewhere around get_btree_test_op().

Thanks,
Amit





Re: [HACKERS] Secondary index access optimizations

2017-09-04 Thread Konstantin Knizhnik



On 04.09.2017 12:59, Amit Langote wrote:

Hi Konstantin,

On 2017/09/04 18:19, Konstantin Knizhnik wrote:

On 04.09.2017 05:38, Amit Langote wrote:

On 2017/09/02 12:44, Thomas Munro wrote:

On Wed, Aug 16, 2017 at 9:23 PM, Konstantin Knizhnik wrote:

postgres=# explain select * from bt where k between 1 and 20000 and v = 100;
QUERY PLAN
----------------------------------------------------------------------
   Append  (cost=0.29..15.63 rows=2 width=8)
 ->  Index Scan using dti1 on dt1  (cost=0.29..8.30 rows=1 width=8)
   Index Cond: (v = 100)
 ->  Index Scan using dti2 on dt2  (cost=0.29..7.33 rows=1 width=8)
   Index Cond: (v = 100)
   Filter: (k <= 20000)
(6 rows)

+1

This seems like a good feature to me: filtering stuff that is
obviously true is a waste of CPU cycles and may even require people to
add redundant stuff to indexes.  I was pondering something related to
this over in the partition-wise join thread (join quals that are
implied by partition constraints and should be discarded).

It'd be interesting to get Amit Langote's feedback, so I CC'd him.
I'd be surprised if he and others haven't got a plan or a patch for
this down the back of the sofa.

I agree that that's a good optimization in the cases it's correct.  Given
that check_index_predicates() already applies the same optimization when
considering using a partial index, it might make sense to try to do the
same even earlier for the table itself using its CHECK / NOT NULL
constraints as predicates (I said *earlier* because
relation_excluded_by_constraints happens for a relation before we look at
its indexes).  Also, at the end of relation_excluded_by_constraints() may
not be such a bad place to do this.

By the way, I read in check_index_predicates() that we should not apply
this optimization if the relation in question is a target of UPDATE /
DELETE / SELECT FOR UPDATE.

Please correct me if I wrong, but it seems to me that in case of table
constraints it is not necessary to specially handle update case.
As far as I understand we need to leave predicate in the plan in case of
partial indexes because due to "read committed" isolation policy
we may need to recheck that tuple still satisfies update condition (tuple
can be changed by some other committed transaction while we are waiting
for it and not satisfying this condition any more).
But no transaction can change tuple in such way that it violates table
constraints,  right? So we do not need to recheck it.

Actually, I don't really know why check_index_predicates() skips this
optimization in the target relation case, just wanted to point out that
that's so.

Thinking a bit from what you wrote, maybe we need not worry about
EvalPlanQual in the context of your proposed optimization based on the
table's constraints.


Concerning your suggestion to merge check_index_predicates() and
remove_restrictions_implied_by_constraints() functions: may be it can be
done, but frankly speaking I do not see much sense in it - there are too
much differences between this functions and too few code reusing.

Maybe, you meant to address Thomas here. :)  Reading his comment again, I
too am a bit concerned about destructively modifying the input rel's
baserestrictinfo.  There should at least be a comment that that's being done.

But I did consider Thomas's comment: I extracted the code that updates the 
relation's baserestrictinfo from relation_excluded_by_constraints() into the 
remove_restrictions_implied_by_constraints() function. It is included in the 
new version of the patch.


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [HACKERS] Secondary index access optimizations

2017-09-04 Thread Amit Langote
Hi Konstantin,

On 2017/09/04 18:19, Konstantin Knizhnik wrote:
> On 04.09.2017 05:38, Amit Langote wrote:
>> On 2017/09/02 12:44, Thomas Munro wrote:
>>> On Wed, Aug 16, 2017 at 9:23 PM, Konstantin Knizhnik wrote:
 postgres=# explain select * from bt where k between 1 and 20000 and v = 100;
    QUERY PLAN
 --------------------------------------------------------------------------
   Append  (cost=0.29..15.63 rows=2 width=8)
     ->  Index Scan using dti1 on dt1  (cost=0.29..8.30 rows=1 width=8)
   Index Cond: (v = 100)
     ->  Index Scan using dti2 on dt2  (cost=0.29..7.33 rows=1 width=8)
   Index Cond: (v = 100)
   Filter: (k <= 20000)
 (6 rows)
>>> +1
>>>
>>> This seems like a good feature to me: filtering stuff that is
>>> obviously true is a waste of CPU cycles and may even require people to
>>> add redundant stuff to indexes.  I was pondering something related to
>>> this over in the partition-wise join thread (join quals that are
>>> implied by partition constraints and should be discarded).
>>>
>>> It'd be interesting to get Amit Langote's feedback, so I CC'd him.
>>> I'd be surprised if he and others haven't got a plan or a patch for
>>> this down the back of the sofa.
>> I agree that that's a good optimization in the cases it's correct.  Given
>> that check_index_predicates() already applies the same optimization when
>> considering using a partial index, it might make sense to try to do the
>> same even earlier for the table itself using its CHECK / NOT NULL
>> constraints as predicates (I said *earlier* because
>> relation_excluded_by_constraints happens for a relation before we look at
>> its indexes).  Also, at the end of relation_excluded_by_constraints() may
>> not be such a bad place to do this.
>>
>> By the way, I read in check_index_predicates() that we should not apply
>> this optimization if the relation in question is a target of UPDATE /
>> DELETE / SELECT FOR UPDATE.
>
> Please correct me if I wrong, but it seems to me that in case of table
> constraints it is not necessary to specially handle update case.
> As far as I understand we need to leave predicate in the plan in case of
> partial indexes because due to "read committed" isolation policy
> we may need to recheck that tuple still satisfies update condition (tuple
> can be changed by some other committed transaction while we are waiting
> for it and not satisfying this condition any more).
> But no transaction can change tuple in such way that it violates table
> constraints,  right? So we do not need to recheck it.

Actually, I don't really know why check_index_predicates() skips this
optimization in the target relation case, just wanted to point out that
that's so.

Thinking a bit from what you wrote, maybe we need not worry about
EvalPlanQual in the context of your proposed optimization based on the
table's constraints.

> Concerning your suggestion to merge check_index_predicates() and
> remove_restrictions_implied_by_constraints() functions: may be it can be
> done, but frankly speaking I do not see much sense in it - there are too
> much differences between this functions and too few code reusing.

Maybe, you meant to address Thomas here. :)  Reading his comment again, I
too am a bit concerned about destructively modifying the input rel's
baserestrictinfo.  There should at least be a comment that that's being done.

Thanks,
Amit





Re: [HACKERS] Secondary index access optimizations

2017-09-04 Thread Konstantin Knizhnik



On 04.09.2017 05:38, Amit Langote wrote:

On 2017/09/02 12:44, Thomas Munro wrote:

On Wed, Aug 16, 2017 at 9:23 PM, Konstantin Knizhnik wrote:

postgres=# explain select * from bt where k between 1 and 20000 and v = 100;
   QUERY PLAN
----------------------------------------------------------------------
  Append  (cost=0.29..15.63 rows=2 width=8)
->  Index Scan using dti1 on dt1  (cost=0.29..8.30 rows=1 width=8)
  Index Cond: (v = 100)
->  Index Scan using dti2 on dt2  (cost=0.29..7.33 rows=1 width=8)
  Index Cond: (v = 100)
  Filter: (k <= 20000)
(6 rows)

+1

This seems like a good feature to me: filtering stuff that is
obviously true is a waste of CPU cycles and may even require people to
add redundant stuff to indexes.  I was pondering something related to
this over in the partition-wise join thread (join quals that are
implied by partition constraints and should be discarded).

It'd be interesting to get Amit Langote's feedback, so I CC'd him.
I'd be surprised if he and others haven't got a plan or a patch for
this down the back of the sofa.

I agree that that's a good optimization in the cases it's correct.  Given
that check_index_predicates() already applies the same optimization when
considering using a partial index, it might make sense to try to do the
same even earlier for the table itself using its CHECK / NOT NULL
constraints as predicates (I said *earlier* because
relation_excluded_by_constraints happens for a relation before we look at
its indexes).  Also, at the end of relation_excluded_by_constraints() may
not be such a bad place to do this.

By the way, I read in check_index_predicates() that we should not apply
this optimization if the relation in question is a target of UPDATE /
DELETE / SELECT FOR UPDATE.
Please correct me if I am wrong, but it seems to me that in the case of table 
constraints it is not necessary to handle the update case specially.
As far as I understand, we need to leave the predicate in the plan in the case 
of partial indexes because, under the "read committed" isolation policy, we 
may need to recheck that the tuple still satisfies the update condition (the 
tuple can be changed by some other committed transaction while we are waiting 
for it, so that it no longer satisfies the condition).
But no transaction can change a tuple in such a way that it violates the 
table constraints, right? So we do not need to recheck it.
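
As a hedged illustration of that argument, using the dt1 partition from the 
examples elsewhere in this thread (the exact error text may differ): any 
UPDATE that would make a row violate its table or partition constraint is 
rejected outright, so a qual implied by the constraint can never become false 
during an EvalPlanQual recheck:

-- dt1 carries the implicit constraint (k >= 1 AND k < 10001)
update dt1 set k = 30000 where v = 100;
-- ERROR:  new row for relation "dt1" violates partition constraint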


Concerning your suggestion to merge the check_index_predicates() and 
remove_restrictions_implied_by_constraints() functions: maybe it can be 
done, but frankly speaking I do not see much sense in it - there are too 
many differences between these functions and too little code to reuse.


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [HACKERS] Secondary index access optimizations

2017-09-03 Thread Amit Langote
On 2017/09/02 12:44, Thomas Munro wrote:
> On Wed, Aug 16, 2017 at 9:23 PM, Konstantin Knizhnik wrote:
>> postgres=# explain select * from bt where k between 1 and 20000 and v = 100;
>>   QUERY PLAN
>> ----------------------------------------------------------------------
>>  Append  (cost=0.29..15.63 rows=2 width=8)
>>->  Index Scan using dti1 on dt1  (cost=0.29..8.30 rows=1 width=8)
>>  Index Cond: (v = 100)
>>->  Index Scan using dti2 on dt2  (cost=0.29..7.33 rows=1 width=8)
>>  Index Cond: (v = 100)
>>  Filter: (k <= 20000)
>> (6 rows)
> 
> +1
> 
> This seems like a good feature to me: filtering stuff that is
> obviously true is a waste of CPU cycles and may even require people to
> add redundant stuff to indexes.  I was pondering something related to
> this over in the partition-wise join thread (join quals that are
> implied by partition constraints and should be discarded).
> 
> It'd be interesting to get Amit Langote's feedback, so I CC'd him.
> I'd be surprised if he and others haven't got a plan or a patch for
> this down the back of the sofa.

I agree that that's a good optimization in the cases it's correct.  Given
that check_index_predicates() already applies the same optimization when
considering using a partial index, it might make sense to try to do the
same even earlier for the table itself using its CHECK / NOT NULL
constraints as predicates (I said *earlier* because
relation_excluded_by_constraints happens for a relation before we look at
its indexes).  Also, at the end of relation_excluded_by_constraints() may
not be such a bad place to do this.

By the way, I read in check_index_predicates() that we should not apply
this optimization if the relation in question is a target of UPDATE /
DELETE / SELECT FOR UPDATE.

Thanks,
Amit





Re: [HACKERS] Secondary index access optimizations

2017-09-02 Thread Konstantin Knizhnik

On 09/02/2017 06:44 AM, Thomas Munro wrote:

On Wed, Aug 16, 2017 at 9:23 PM, Konstantin Knizhnik wrote:

postgres=# explain select * from bt where k between 1 and 20000 and v = 100;
   QUERY PLAN
----------------------------------------------------------------------
  Append  (cost=0.29..15.63 rows=2 width=8)
->  Index Scan using dti1 on dt1  (cost=0.29..8.30 rows=1 width=8)
  Index Cond: (v = 100)
->  Index Scan using dti2 on dt2  (cost=0.29..7.33 rows=1 width=8)
  Index Cond: (v = 100)
  Filter: (k <= 20000)
(6 rows)

+1

This seems like a good feature to me: filtering stuff that is
obviously true is a waste of CPU cycles and may even require people to
add redundant stuff to indexes.  I was pondering something related to
this over in the partition-wise join thread (join quals that are
implied by partition constraints and should be discarded).

It'd be interesting to get Amit Langote's feedback, so I CC'd him.
I'd be surprised if he and others haven't got a plan or a patch for
this down the back of the sofa.

I might be missing some higher level architectural problems with the
patch, but for what it's worth here's some feedback after a first read
through:

--- a/src/backend/optimizer/util/plancat.c
+++ b/src/backend/optimizer/util/plancat.c
@@ -1441,6 +1441,20 @@ relation_excluded_by_constraints(PlannerInfo *root,
  if (predicate_refuted_by(safe_constraints, rel->baserestrictinfo, false))
  return true;

+/*
+ * Remove from restrictions list items implied by table constraints
+ */
+safe_restrictions = NULL;
+foreach(lc, rel->baserestrictinfo)
+{
+RestrictInfo *rinfo = (RestrictInfo *) lfirst(lc);

I think the new way to write this is "RestrictInfo *rinfo =
lfirst_node(RestrictInfo, lc)".

+if (!predicate_implied_by(list_make1(rinfo->clause),
safe_constraints, false)) {
+safe_restrictions = lappend(safe_restrictions, rinfo);
+}
+}
+rel->baserestrictinfo = safe_restrictions;

It feels wrong to modify rel->baserestrictinfo in
relation_excluded_by_constraints().  I think there should probably be
a function with a name that more clearly indicates that it mutates its
input, and you should call that separately after
relation_excluded_by_constraints().  Something like
remove_restrictions_implied_by_constraints()?


It is because operator_predicate_proof is not able to understand that k <
20001 and k <= 20000 are equivalent for integer type.

[...]

  /*
   * operator_predicate_proof
  if (clause_const->constisnull)
  return false;

+if (!refute_it
+&& ((pred_op == Int4LessOrEqualOperator && clause_op ==
Int4LessOperator)
+|| (pred_op == Int8LessOrEqualOperator && clause_op ==
Int8LessOperator)
+|| (pred_op == Int2LessOrEqualOperator && clause_op ==
Int2LessOperator))
+&& pred_const->constbyval && clause_const->constbyval
+&& pred_const->constvalue + 1 == clause_const->constvalue)
+{
+return true;
+}
+

I'm less sure about this part.  It seems like a slippery slope.

A couple of regression test failures:

  inherit  ... FAILED
  rowsecurity  ... FAILED

  2 of 179 tests failed.


I didn't try to understand the rowsecurity one, but at first glance
all the differences reported in the inherit test are in fact cases
where your patch is doing the right thing and removing redundant
filters from scans.  Nice!


Thank you for the review.
I attached a new version of the patch with the 
remove_restrictions_implied_by_constraints() function.
Concerning the failed tests: this is actually the result of this optimization, 
since extra filter conditions are removed from the query plans.
Sorry, I had not included the updated expected test output files in the patch.
Now I have done so.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 2d7e1d8..5de67ce 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -344,6 +344,7 @@ set_rel_size(PlannerInfo *root, RelOptInfo *rel,
 		switch (rel->rtekind)
 		{
 			case RTE_RELATION:
+remove_restrictions_implied_by_constraints(root, rel, rte);
 if (rte->relkind == RELKIND_FOREIGN_TABLE)
 {
 	/* Foreign table */
@@ -1047,6 +1048,7 @@ set_append_rel_size(PlannerInfo *root, RelOptInfo *rel,
 			set_dummy_rel_pathlist(childrel);
 			continue;
 		}
+		remove_restrictions_implied_by_constraints(root, childrel, childRTE);
 
 		/*
 		 * CE failed, so finish copying/modifying targetlist and join quals.
diff --git a/src/backend/optimizer/util/plancat.c b/src/backend/optimizer/util/plancat.c
index a1ebd4a..fc663d6 100644
--- a/src/backend/optimizer/util/plancat.c
+++ 

Re: [HACKERS] Secondary index access optimizations

2017-09-01 Thread Thomas Munro
On Wed, Aug 16, 2017 at 9:23 PM, Konstantin Knizhnik wrote:
> postgres=# explain select * from bt where k between 1 and 20000 and v = 100;
>   QUERY PLAN
> ----------------------------------------------------------------------
>  Append  (cost=0.29..15.63 rows=2 width=8)
>->  Index Scan using dti1 on dt1  (cost=0.29..8.30 rows=1 width=8)
>  Index Cond: (v = 100)
>->  Index Scan using dti2 on dt2  (cost=0.29..7.33 rows=1 width=8)
>  Index Cond: (v = 100)
>  Filter: (k <= 20000)
> (6 rows)

+1

This seems like a good feature to me: filtering stuff that is
obviously true is a waste of CPU cycles and may even require people to
add redundant stuff to indexes.  I was pondering something related to
this over in the partition-wise join thread (join quals that are
implied by partition constraints and should be discarded).

It'd be interesting to get Amit Langote's feedback, so I CC'd him.
I'd be surprised if he and others haven't got a plan or a patch for
this down the back of the sofa.

I might be missing some higher level architectural problems with the
patch, but for what it's worth here's some feedback after a first read
through:

--- a/src/backend/optimizer/util/plancat.c
+++ b/src/backend/optimizer/util/plancat.c
@@ -1441,6 +1441,20 @@ relation_excluded_by_constraints(PlannerInfo *root,
 if (predicate_refuted_by(safe_constraints, rel->baserestrictinfo, false))
 return true;

+/*
+ * Remove from restrictions list items implied by table constraints
+ */
+safe_restrictions = NULL;
+foreach(lc, rel->baserestrictinfo)
+{
+RestrictInfo *rinfo = (RestrictInfo *) lfirst(lc);

I think the new way to write this is "RestrictInfo *rinfo =
lfirst_node(RestrictInfo, lc)".

+if (!predicate_implied_by(list_make1(rinfo->clause),
safe_constraints, false)) {
+safe_restrictions = lappend(safe_restrictions, rinfo);
+}
+}
+rel->baserestrictinfo = safe_restrictions;

It feels wrong to modify rel->baserestrictinfo in
relation_excluded_by_constraints().  I think there should probably be
a function with a name that more clearly indicates that it mutates its
input, and you should call that separately after
relation_excluded_by_constraints().  Something like
remove_restrictions_implied_by_constraints()?

> It is because operator_predicate_proof is not able to understand that k <
> 20001 and k <= 20000 are equivalent for integer type.
>
> [...]
>
>  /*
>   * operator_predicate_proof
>  if (clause_const->constisnull)
>  return false;
>
> +if (!refute_it
> +&& ((pred_op == Int4LessOrEqualOperator && clause_op ==
> Int4LessOperator)
> +|| (pred_op == Int8LessOrEqualOperator && clause_op ==
> Int8LessOperator)
> +|| (pred_op == Int2LessOrEqualOperator && clause_op ==
> Int2LessOperator))
> +&& pred_const->constbyval && clause_const->constbyval
> +&& pred_const->constvalue + 1 == clause_const->constvalue)
> +{
> +return true;
> +}
> +

I'm less sure about this part.  It seems like a slippery slope.

A couple of regression test failures:

 inherit  ... FAILED
 rowsecurity  ... FAILED

 2 of 179 tests failed.


I didn't try to understand the rowsecurity one, but at first glance
all the differences reported in the inherit test are in fact cases
where your patch is doing the right thing and removing redundant
filters from scans.  Nice!

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Secondary index access optimizations

2017-08-16 Thread Konstantin Knizhnik

On 14.08.2017 19:33, Konstantin Knizhnik wrote:



On 14.08.2017 12:37, Konstantin Knizhnik wrote:

Hi hackers,

I am trying to compare different ways of optimizing work with huge 
append-only tables in PostgreSQL, where the primary key is something like a 
timestamp and queries usually access the most recent data through some 
secondary keys. The size of a secondary index is one of the most critical 
factors limiting insert/search performance. Since data is inserted in 
timestamp-ascending order, access to the primary key is well localized and 
the accessed pages are present in memory. But if we create a secondary index 
over the whole table, access to it will require random disk reads and will 
significantly decrease performance.


There are two well-known solutions to this problem:
1. Table partitioning
2. Partial indexes

These are the approaches I want to compare. First of all I want to check 
whether the optimizer is able to generate efficient query execution plans 
covering different time intervals.

Unfortunately, in both cases the generated plan is not optimal.

1. Table partitioning:

create table base (k integer primary key, v integer);
create table part1 (check (k between 1 and 10000)) inherits (base);
create table part2 (check (k between 10001 and 20000)) inherits (base);
create index pi1 on part1(v);
create index pi2 on part2(v);
insert into part1 values (generate_series(1,10000), random());
insert into part2 values (generate_series(10001,20000), random());
explain select * from base where k between 1 and 20000 and v = 100;
  QUERY PLAN
---------------------------------------------------------------------
 Append  (cost=0.00..15.65 rows=3 width=8)
   ->  Seq Scan on base  (cost=0.00..0.00 rows=1 width=8)
 Filter: ((k >= 1) AND (k <= 20000) AND (v = 100))
   ->  Index Scan using pi1 on part1  (cost=0.29..8.31 rows=1 width=8)
 Index Cond: (v = 100)
 Filter: ((k >= 1) AND (k <= 20000))
   ->  Index Scan using pi2 on part2  (cost=0.29..7.34 rows=1 width=8)
 Index Cond: (v = 100)
 Filter: ((k >= 1) AND (k <= 20000))

Questions:
- Is there some way to avoid the sequential scan of the parent table? Yes, 
it is empty, so the sequential scan will not take much time, but it still 
requires some additional work and so increases query execution time.
- Why does an index scan of a partition include the filter condition when 
the partition's check constraint guarantees that all of its records match 
the search predicate?



2. Partial indexes:

create table t (k integer primary key, v integer);
insert into t values (generate_series(1,20000), random());
create index i1 on t(v) where k between 1 and 10000;
create index i2 on t(v) where k between 10001 and 20000;
postgres=# explain select * from t where k between 1 and 10000 and v = 100;

 QUERY PLAN
-------------------------------------------------------------
 Index Scan using i1 on t  (cost=0.29..7.28 rows=1 width=8)
   Index Cond: (v = 100)
(2 rows)


Here we get a perfect plan. Let's try to extend the search interval:


postgres=# explain select * from t where k between 1 and 20000 and v = 100;

 QUERY PLAN
-------------------------------------------------------------------
 Index Scan using t_pkey on t  (cost=0.29..760.43 rows=1 width=8)
   Index Cond: ((k >= 1) AND (k <= 20000))
   Filter: (v = 100)
(3 rows)

Unfortunately, in this case Postgres is not able to apply the partial 
indexes.

And this is what I expected to get:

postgres=# explain select * from t where k between 1 and 10000 and v = 100
union all select * from t where k between 10001 and 20000 and v = 100;

  QUERY PLAN
--
 Append  (cost=0.29..14.58 rows=2 width=8)
   ->  Index Scan using i1 on t  (cost=0.29..7.28 rows=1 width=8)
 Index Cond: (v = 100)
   ->  Index Scan using i2 on t t_1  (cost=0.29..7.28 rows=1 width=8)
 Index Cond: (v = 100)


I wonder if there are any fundamental problems with supporting these two 
things in the optimizer:
1. Remove a search condition on the primary key if it is fully satisfied 
by the derived table's check constraint.
2. Append index scans of several partial indexes if the specified 
interval is covered by their conditions.

I wonder if someone who is familiar with this part of the optimizer can 
easily fix it.
Otherwise I am going to spend some time on solving these problems (if the 
community thinks that such optimizations would be useful).




Replying to myself: the following small patch removes redundant checks 
from index scans for derived tables:



diff --git a/src/backend/optimizer/util/plancat.c b/src/backend/optimizer/util/plancat.c
index 939045d..1f7c9cf 100644
--- a/src/backend/optimizer/util/plancat.c
+++ b/src/backend/optimizer/util/plancat.c
@@ -1441,6 +1441,20 @@ relation_excluded_by_constraints(PlannerInfo *root,
 	if (predicate_refuted_by(safe_constraints, rel->baserestrictinfo, 

Re: [HACKERS] Secondary index access optimizations

2017-08-14 Thread Konstantin Knizhnik



On 14.08.2017 12:37, Konstantin Knizhnik wrote:

Hi hackers,

I am trying to compare different ways of optimizing work with huge 
append-only tables in PostgreSQL, where the primary key is something like a 
timestamp and queries usually access the most recent data through some 
secondary keys. The size of a secondary index is one of the most critical 
factors limiting insert/search performance. Since data is inserted in 
timestamp-ascending order, access to the primary key is well localized and 
the accessed pages are present in memory. But if we create a secondary index 
over the whole table, access to it will require random disk reads and will 
significantly decrease performance.


There are two well-known solutions to this problem:
1. Table partitioning
2. Partial indexes

These are the approaches I want to compare. First of all I want to check 
whether the optimizer is able to generate efficient query execution plans 
covering different time intervals.

Unfortunately, in both cases the generated plan is not optimal.

1. Table partitioning:

create table base (k integer primary key, v integer);
create table part1 (check (k between 1 and 10000)) inherits (base);
create table part2 (check (k between 10001 and 20000)) inherits (base);
create index pi1 on part1(v);
create index pi2 on part2(v);
insert into part1 values (generate_series(1,10000), random());
insert into part2 values (generate_series(10001,20000), random());
explain select * from base where k between 1 and 20000 and v = 100;
  QUERY PLAN
---------------------------------------------------------------------
 Append  (cost=0.00..15.65 rows=3 width=8)
   ->  Seq Scan on base  (cost=0.00..0.00 rows=1 width=8)
 Filter: ((k >= 1) AND (k <= 20000) AND (v = 100))
   ->  Index Scan using pi1 on part1  (cost=0.29..8.31 rows=1 width=8)
 Index Cond: (v = 100)
 Filter: ((k >= 1) AND (k <= 20000))
   ->  Index Scan using pi2 on part2  (cost=0.29..7.34 rows=1 width=8)
 Index Cond: (v = 100)
 Filter: ((k >= 1) AND (k <= 20000))

Questions:
- Is there some way to avoid the sequential scan of the parent table? Yes, 
it is empty, so the sequential scan will not take much time, but it still 
requires some additional work and so increases query execution time.
- Why does an index scan of a partition include the filter condition when 
the partition's check constraint guarantees that all of its records match 
the search predicate?



2. Partial indexes:

create table t (k integer primary key, v integer);
insert into t values (generate_series(1,20000), random());
create index i1 on t(v) where k between 1 and 10000;
create index i2 on t(v) where k between 10001 and 20000;
postgres=# explain select * from t where k between 1 and 10000 and v = 100;

 QUERY PLAN
-------------------------------------------------------------
 Index Scan using i1 on t  (cost=0.29..7.28 rows=1 width=8)
   Index Cond: (v = 100)
(2 rows)


Here we get a perfect plan. Let's try to extend the search interval:


postgres=# explain select * from t where k between 1 and 20000 and v = 100;

 QUERY PLAN
-------------------------------------------------------------------
 Index Scan using t_pkey on t  (cost=0.29..760.43 rows=1 width=8)
   Index Cond: ((k >= 1) AND (k <= 20000))
   Filter: (v = 100)
(3 rows)

Unfortunately, in this case Postgres is not able to apply the partial indexes.
And this is what I expected to get:

postgres=# explain select * from t where k between 1 and 10000 and v = 100
union all select * from t where k between 10001 and 20000 and v = 100;

  QUERY PLAN
--
 Append  (cost=0.29..14.58 rows=2 width=8)
   ->  Index Scan using i1 on t  (cost=0.29..7.28 rows=1 width=8)
 Index Cond: (v = 100)
   ->  Index Scan using i2 on t t_1  (cost=0.29..7.28 rows=1 width=8)
 Index Cond: (v = 100)


I wonder if there are any fundamental problems with supporting these two 
things in the optimizer:
1. Remove a search condition on the primary key if it is fully satisfied by 
the derived table's check constraint.
2. Append index scans of several partial indexes if the specified interval 
is covered by their conditions.

I wonder if someone who is familiar with this part of the optimizer can 
easily fix it.
Otherwise I am going to spend some time on solving these problems (if the 
community thinks that such optimizations would be useful).




Replying to myself: the following small patch removes redundant checks 
from index scans for derived tables:



diff --git a/src/backend/optimizer/util/plancat.c b/src/backend/optimizer/util/plancat.c
index 939045d..1f7c9cf 100644
--- a/src/backend/optimizer/util/plancat.c
+++ b/src/backend/optimizer/util/plancat.c
@@ -1441,6 +1441,20 @@ relation_excluded_by_constraints(PlannerInfo *root,
 	if (predicate_refuted_by(safe_constraints, rel->baserestrictinfo, false))

return true;

+   /*
+