Re: Use extended statistics to estimate (Var op Var) clauses

2022-10-12 Thread Michael Paquier
On Fri, Jul 22, 2022 at 02:17:47PM +0100, Dean Rasheed wrote:
> It's a worry that none of the existing regression tests picked up on
> that. Perhaps a similar test could be added using the existing test
> data. Otherwise, I think it'd be worth adding a new test with similar
> data to the above.

This feedback has not been answered for two months, so I have marked this
entry as RwF (Returned with Feedback) for now.
--
Michael




Re: Use extended statistics to estimate (Var op Var) clauses

2022-07-22 Thread Dean Rasheed
On Thu, 21 Jul 2022 at 12:42, Tomas Vondra
 wrote:
>
> > This needs to account for nullfrac, since x = x is only true if x is not 
> > null.
>
> Right, I forgot to account for nullfrac.
>

Ditto variable <= variable

> > I don't like how matching_restriction_variables() is adding a
> > non-trivial amount of overhead to each of these selectivity functions,
> > by calling examine_variable() for each side, duplicating what
> > get_restriction_variable() does later on.
>
> But matching_restriction_variables() does not use examine_variable()
> anymore. It did, originally, but 0002 removed all of that. Now it does
> just pull_varnos() and that's it. I admit leaving those bits unsquashed
> was a bit confusing, but the first part was really 0001+0002+0003.
>

Ah, right, I only looked at 0001, because I thought that was the main
part that I hadn't previously looked at.

So my previous concern with matching_restriction_variables() was how
many extra cycles it was adding to test for what should be a pretty
uncommon case. Looking at the new version, I think it's a lot better,
but perhaps it would be more efficient to call equal() as soon as
you've extracted "left" and "right", so it can bail out earlier and
faster when they're not equal. I think a call to equal() should be
very fast compared to pull_varnos() and contain_volatile_functions().
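
As a rough illustration (this is not the actual patch, and it skips over
the varRelid considerations the real function would need to handle), the
reordered check could look something like this:

static bool
matching_restriction_variables(PlannerInfo *root, List *args, int varRelid)
{
	Node	   *left,
			   *right;

	/* Fail if not a binary opclause (probably shouldn't happen) */
	if (list_length(args) != 2)
		return false;

	left = (Node *) linitial(args);
	right = (Node *) lsecond(args);

	/* Cheap structural comparison first: bail out early when not equal */
	if (!equal(left, right))
		return false;

	/* Volatile functions may return different results on each side */
	if (contain_volatile_functions(left))
		return false;

	/* Require the expression to reference a single base relation */
	return bms_membership(pull_varnos(root, left)) == BMS_SINGLETON;
}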

> Attached is a rebased and somewhat cleaned up version of the patch
> series, addressing the review comments so far and squashing the bits I
> previously kept separate to showcase the changes.
>
> I've also added a bunch of regression tests - queries with (Var op Var)
> clauses of varying complexity, to demonstrate the effect of each patch.
> I added them as 0001, so it's clear how the individual patches affect
> the results.
>

Cool. That's much clearer, and the results look quite good.

The main thing that jumps out at me now is the whole "issimple"
processing stuff. Those changes turned out to be a lot more invasive
than I thought. I don't think this part is correct:

/*
 * All AND/OR clauses are considered complex, even if all arguments are
 * simple clauses. For NOT clauses we need to check the argument and then
 * we can update the flag.
 *
 * XXX Maybe for AND/OR we should check if all arguments reference the
 * same attnum, and consider them complex only when there are multiple
 * attnum values (i.e. different Vars)?
 */

I think the XXX part of that comment is right, and that's what the
original code did.

I had to go remind myself what "simple" was intended for, so apologies
if this is really pedestrian:

The basic idea of a "simple" clause was meant to mean any clause
that's likely to be better estimated by standard statistics, rather
than extended statistics (and "issimple" only applies when such
clauses are combined using "OR"). So, for example, given a query with

  WHERE a = 1 OR (b > 0 AND b < 10)

both "a = 1" and "b > 0 AND b < 10" should be considered "simple",
since the standard statistics code is likely to estimate them quite
well. For the second clause, it might use histograms and the
RangeQueryClause-machinery, which ought to work well, whereas applying
a multivariate MCV list to "b > 0 AND b < 10" in isolation would
probably make its estimate worse.

So then, in this example, what the "OR" handling in
statext_mcv_clauselist_selectivity() would end up doing is:

  P(a = 1 OR (b > 0 AND b < 10))
= P(a = 1)
+ P(b > 0 AND b < 10)
- P(a = 1 AND b > 0 AND b < 10)   # Overlap

and only use extended statistics for the overlap term, since the other
2 clauses are "simple", and best estimated without extended stats. The
patch changes that, which ends up making things worse in some cases.

Is it the case that the only reason for changing the "issimple"
handling was because the standard statistics code didn't work well for
things like "a < a", and so we wanted to treat that as not-simple? If
so, given the selfuncs improvements, perhaps that's no longer
necessary, and the original definition of "simple" is OK. IOW, is 0004
necessary, given 0002?

I notice that 0004 didn't change any test results, so if there are
cases where it improves things, they aren't tested.

Here's a simple example using the WHERE clause above. Without the
patch, extended stats improved the estimate, but with the patch they
don't anymore:

DROP TABLE IF EXISTS foo;
CREATE TABLE foo (a int, b int);
INSERT INTO foo SELECT x/10+1, x FROM generate_series(1,10000) g(x);
ANALYSE foo;
EXPLAIN ANALYSE SELECT * FROM foo WHERE a = 1 OR (b > 0 AND b < 10);

Estimated: 18
Actual:    9

CREATE STATISTICS foo_s (mcv) ON a,b FROM foo;
ANALYSE foo;
EXPLAIN ANALYSE SELECT * FROM foo WHERE a = 1 OR (b > 0 AND b < 10);

Estimated: 9 without the patch, 18 with the patch
Actual:    9

It's a worry that none of the existing regression tests picked up on
that. Perhaps a similar test could be added using the existing test
data. Otherwise, I think it'd be worth adding a new test with similar
data to the above.

Re: Use extended statistics to estimate (Var op Var) clauses

2022-07-09 Thread Dean Rasheed
On Mon, 13 Dec 2021 at 02:21, Tomas Vondra
 wrote:
>
> Hi,
>
> I finally got around to this patch again, focusing mostly on the first
> part that simply returns either 1.0 or 0.0 for Var op Var conditions
> (i.e. the part not really using extended statistics).
>

Just starting to look at this again, starting with 0001 ...

This needs to account for nullfrac, since x = x is only true if x is not null.

I don't like how matching_restriction_variables() is adding a
non-trivial amount of overhead to each of these selectivity functions,
by calling examine_variable() for each side, duplicating what
get_restriction_variable() does later on.

I think it would probably be better to make the changes in
var_eq_non_const(), where all of this information is available, and
get rid of matching_restriction_variables() (it doesn't look like the
new scalarineqsel_wrapper() code really needs
matching_restriction_variables() - it can just use what
get_restriction_variable() already returns).
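
As a rough sketch of that idea (hypothetical code, not the actual patch):
inside var_eq_non_const(), once vardata has been filled in for the variable
side, with "other" being the opposite side and "negate" passed down from
eqsel_internal(), something like this would fold the nullfrac in:

	/*
	 * Same expression on both sides: (x = x) matches exactly the non-null
	 * rows, and (x != x) matches nothing.
	 */
	if (equal(vardata->var, other))
	{
		double		nullfrac = 0.0;

		if (HeapTupleIsValid(vardata->statsTuple))
		{
			Form_pg_statistic stats;

			stats = (Form_pg_statistic) GETSTRUCT(vardata->statsTuple);
			nullfrac = stats->stanullfrac;
		}

		return negate ? 0.0 : 1.0 - nullfrac;
	}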

Regards,
Dean




Re: Use extended statistics to estimate (Var op Var) clauses

2021-12-21 Thread Mark Dilger



> On Dec 21, 2021, at 4:28 PM, Mark Dilger  wrote:
> 
> Maybe there is some reason this is ok.

... and there is.  Sorry for the noise.  The planner appears to be smart enough 
to know that column "salary" is not being changed, and therefore NEW.salary and 
OLD.salary are equal.  If I test a different update statement that contains a 
new value for "salary", the added assertion is not triggered.

(I didn't quite realize what the clause's varnosyn field was telling me until 
after I hit "send".)


—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: Use extended statistics to estimate (Var op Var) clauses

2021-12-21 Thread Mark Dilger



> On Dec 12, 2021, at 6:21 PM, Tomas Vondra  
> wrote:
> 
> <0001-Improve-estimates-for-Var-op-Var-with-the-same-Var.patch>

+* It it's (variable = variable) with the same variable on both sides, it's

s/It it's/If it's/

0001 lacks regression coverage.

> <0002-simplification.patch>

Changing comments introduced by patch 0001 in patch 0002 just creates git churn:

-* estimate the selectivity as 1.0 (or 0.0 if it's negated).
+* estimate the selectivity as 1.0 (or 0.0 when it's negated).

and:

  * matching_restriction_variable
- * Examine the args of a restriction clause to see if it's of the
- * form (variable op variable) with the same variable on both sides.
+ * Check if the two arguments of a restriction clause refer to the same
+ * variable, i.e. if the condition is of the form (variable op variable).
+ * We can deduce selectivity for such (in)equality clauses.

0002 also lacks regression coverage.

> <0003-relax-the-restrictions.patch>

0003 also lacks regression coverage.

> <0004-main-patch.patch>

Ok.

> <0005-Don-t-treat-Var-op-Var-as-simple-clauses.patch>

0005 again lacks regression coverage.



There might be a problem in how selectivity thinks about comparison between 
identical columns from the NEW and OLD pseudotables.  To show this, add an 
Assert to see where matching_restriction_variables() might return true:

--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -4952,6 +4952,8 @@ matching_restriction_variables(PlannerInfo *root, List *args, int varRelid)
return false;
 
/* The two variables need to match */
+   Assert(!equal(left, right));
+
return equal(left, right);

This results in the regression tests failing on "update rtest_emp set ename = 
'wiecx' where ename = 'wiecc';".  It may seem counterintuitive that 
matching_restriction_variables() would return true for a where-clause with only 
one occurrence of variable "ename", until you read the rule defined in 
rules.sql:

  create rule rtest_emp_upd as on update to rtest_emp where new.salary != 
old.salary do
  insert into rtest_emplog values (new.ename, current_user,
  'honored', new.salary, old.salary);

I think what's really happening here is that "new.salary != old.salary" is 
being processed by matching_restriction_variables() and returning that the 
new.salary refers to the same thing that old.salary refers to.

Here is the full stack trace, for reference:

(lldb) bt
* thread #1, stop reason = signal SIGSTOP
frame #0: 0x7fff6d8fd33a libsystem_kernel.dylib`__pthread_kill + 10
frame #1: 0x7fff6d9b9e60 libsystem_pthread.dylib`pthread_kill + 430
frame #2: 0x7fff6d884808 libsystem_c.dylib`abort + 120
frame #3: 0x0001048a6c31 
postgres`ExceptionalCondition(conditionName="!equal(left, right)", 
errorType="FailedAssertion", fileName="selfuncs.c", lineNumber=4955) at 
assert.c:69:2
frame #4: 0x00010481e733 
postgres`matching_restriction_variables(root=0x7fe65e02b2d0, 
args=0x7fe65e02bf38, varRelid=0) at selfuncs.c:4955:2
frame #5: 0x00010480f63c 
postgres`eqsel_internal(fcinfo=0x7ffeebb9aeb8, negate=true) at 
selfuncs.c:265:6
frame #6: 0x00010481040a postgres`neqsel(fcinfo=0x7ffeebb9aeb8) at 
selfuncs.c:565:2
frame #7: 0x0001048b420c 
postgres`FunctionCall4Coll(flinfo=0x7ffeebb9af38, collation=0, 
arg1=140627396440784, arg2=901, arg3=140627396443960, arg4=0) at fmgr.c:1212:11
frame #8: 0x0001048b50e6 postgres`OidFunctionCall4Coll(functionId=102, 
collation=0, arg1=140627396440784, arg2=901, arg3=140627396443960, arg4=0) at 
fmgr.c:1448:9
  * frame #9: 0x000104557e4d 
postgres`restriction_selectivity(root=0x7fe65e02b2d0, operatorid=901, 
args=0x7fe65e02bf38, inputcollid=0, varRelid=0) at plancat.c:1828:26
frame #10: 0x0001044d4c76 
postgres`clause_selectivity_ext(root=0x7fe65e02b2d0, 
clause=0x7fe65e02bfe8, varRelid=0, jointype=JOIN_INNER, 
sjinfo=0x, use_extended_stats=true) at clausesel.c:902:10
frame #11: 0x0001044d4186 
postgres`clauselist_selectivity_ext(root=0x7fe65e02b2d0, 
clauses=0x7fe65e02cdd0, varRelid=0, jointype=JOIN_INNER, 
sjinfo=0x, use_extended_stats=true) at clausesel.c:185:8
frame #12: 0x0001044d3f97 
postgres`clauselist_selectivity(root=0x7fe65e02b2d0, 
clauses=0x7fe65e02cdd0, varRelid=0, jointype=JOIN_INNER, 
sjinfo=0x) at clausesel.c:108:9
frame #13: 0x0001044de192 
postgres`set_baserel_size_estimates(root=0x7fe65e02b2d0, 
rel=0x7fe65e01e640) at costsize.c:4931:3
frame #14: 0x0001044d16be 
postgres`set_plain_rel_size(root=0x7fe65e02b2d0, rel=0x7fe65e01e640, 
rte=0x7fe65e02ac40) at allpaths.c:584:2
frame #15: 0x0001044d0a79 
postgres`set_rel_size(root=0x7fe65e02b2d0, rel=0x7fe65e01e640, rti=1, 
rte=0x7fe65e02ac40) at allpaths.c:413:6
frame #16: 

Re: Use extended statistics to estimate (Var op Var) clauses

2021-12-12 Thread Zhihong Yu
On Sun, Dec 12, 2021 at 6:04 PM Tomas Vondra 
wrote:

> On 8/31/21 00:14, Zhihong Yu wrote:
> > Hi,
> > For patch 0002,
> >
> > +   s1 = statext_clauselist_selectivity(root, clauses, varRelid,
> > +   jointype, sjinfo, rel,
> > +   &estimatedclauses, false);
> > +
> > +   estimated = (bms_num_members(estimatedclauses) == 1);
> >
> > I took a look at clauselist_apply_dependencies() (called by
> > statext_clauselist_selectivity) where estimatedclauses is modified.
> > Since the caller would not use the returned Selectivity if number of
> > elements in estimatedclauses is greater than 1, I wonder
> > if a parameter can be added to clauselist_apply_dependencies() which
> > indicates early return if the second element is added
> to estimatedclauses.
> >
>
> Hmmm, I'm not sure I understand your point. Are you suggesting there's a
> bug in not updating the bitmap, or would this be an optimization? Can
> you give an example of a query affected by this?
>
Hi,
My previous comment was from 3 months ago - let me see if I can come up
with an example.

Cheers

>
> regards
>
> --
> Tomas Vondra
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: Use extended statistics to estimate (Var op Var) clauses

2021-12-12 Thread Tomas Vondra

Hi,

I finally got around to this patch again, focusing mostly on the first 
part that simply returns either 1.0 or 0.0 for Var op Var conditions 
(i.e. the part not really using extended statistics).


I have been unhappy about using examine_variable, which does various 
expensive things like searching for statistics (which only got worse 
because now we're also looking for expression stats). But we don't 
really need the stats - we just need to check the Vars match (same 
relation, same attribute). So 0002 fixes this.


Which got me thinking that maybe we don't need to restrict this to Var 
nodes only. We can just as easily compare arbitrary expressions, 
provided it's for the same relation and there are no volatile functions. 
So 0003 does this. Conditions with the same complex expression on each 
side of an operator are probably fairly rare, but it's cheap so why not.


0004 and 0005 parts are unchanged.


The next steps is adding some tests to the first parts, and extending 
the tests in the main patch (to also use more complex expressions, if 
0003 gets included).



regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

From 91371c3e4252580a30530d5b625f72d5525c5c73 Mon Sep 17 00:00:00 2001
From: Tomas Vondra 
Date: Thu, 26 Aug 2021 23:01:04 +0200
Subject: [PATCH 1/5] Improve estimates for Var op Var with the same Var

When estimating (Var op Var) conditions, we can treat the case with the
same Var on both sides as a special case, and we can provide better
selectivity estimate than for the generic case.

For example for (a = a) we know it's 1.0, because all rows are expected
to match. Similarly for (a != a), which has selectivity 0.0. And the
same logic can be applied to inequality comparisons, like (a < a) etc.

In principle, those clauses are a bit strange and queries are unlikely
to use them. But query generators sometimes do silly things, and these
checks are quite cheap so it's likely a win.
---
 src/backend/utils/adt/selfuncs.c | 77 +++-
 1 file changed, 76 insertions(+), 1 deletion(-)

diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 10895fb287..59038895ec 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -210,7 +210,8 @@ static bool get_actual_variable_endpoint(Relation heapRel,
 		 MemoryContext outercontext,
 		 Datum *endpointDatum);
 static RelOptInfo *find_join_input_rel(PlannerInfo *root, Relids relids);
-
+static bool matching_restriction_variables(PlannerInfo *root, List *args,
+		   int varRelid);
 
 /*
  *		eqsel			- Selectivity of "=" for any data types.
@@ -256,6 +257,14 @@ eqsel_internal(PG_FUNCTION_ARGS, bool negate)
 		}
 	}
 
+	/*
+	 * It it's (variable = variable) with the same variable on both sides, it's
+	 * a special case and we know it's not expected to filter anything, so we
+	 * estimate the selectivity as 1.0 (or 0.0 if it's negated).
+	 */
+	if (matching_restriction_variables(root, args, varRelid))
+		return (negate) ? 0.0 : 1.0;
+
 	/*
 	 * If expression is not variable = something or something = variable, then
 	 * punt and return a default estimate.
@@ -1408,6 +1417,22 @@ scalarineqsel_wrapper(PG_FUNCTION_ARGS, bool isgt, bool iseq)
 	Oid			consttype;
 	double		selec;
 
+	/*
+	 * Handle (variable < variable) and (variable <= variable) with the same
+	 * variable on both sides as a special case. The strict inequality should
+	 * not match any rows, hence selectivity is 0.0. The other case is about
+	 * the same as equality, so selectivity is 1.0.
+	 */
+	if (matching_restriction_variables(root, args, varRelid))
+	{
+		/* The case with equality matches all rows, so estimate it as 1.0. */
+		if (iseq)
+			PG_RETURN_FLOAT8(1.0);
+
+		/* Strict inequality matches nothing, so selectivity is 0.0. */
+		PG_RETURN_FLOAT8(0.0);
+	}
+
 	/*
 	 * If expression is not variable op something or something op variable,
 	 * then punt and return a default estimate.
@@ -4871,6 +4896,56 @@ get_restriction_variable(PlannerInfo *root, List *args, int varRelid,
 	return false;
 }
 
+
+/*
+ * matching_restriction_variable
+ *		Examine the args of a restriction clause to see if it's of the
+ *		form (variable op variable) with the same variable on both sides.
+ *
+ * Inputs:
+ *	root: the planner info
+ *	args: clause argument list
+ *	varRelid: see specs for restriction selectivity functions
+ *
+ * Returns true if the same variable is on both sides, otherwise false.
+ */
+static bool
+matching_restriction_variables(PlannerInfo *root, List *args, int varRelid)
+{
+	Node	   *left,
+			   *right;
+	VariableStatData ldata;
+	VariableStatData rdata;
+	bool		res = false;
+
+	/* Fail if not a binary opclause (probably shouldn't happen) */
+	if (list_length(args) != 2)
+		return false;
+
+	left = (Node *) linitial(args);
+	right = (Node *) lsecond(args);
+
+	/*
+	 * Examine both sides.  Note that when varRelid is nonzero, Vars of other
+	 * relations will be treated as pseudoconstants.

Re: Use extended statistics to estimate (Var op Var) clauses

2021-12-12 Thread Tomas Vondra

On 8/31/21 00:14, Zhihong Yu wrote:

Hi,
For patch 0002,

+                   s1 = statext_clauselist_selectivity(root, clauses, varRelid,
+                                                       jointype, sjinfo, rel,
+                                                       &estimatedclauses, false);

+
+                   estimated = (bms_num_members(estimatedclauses) == 1);

I took a look at clauselist_apply_dependencies() (called by 
statext_clauselist_selectivity) where estimatedclauses is modified.
Since the caller would not use the returned Selectivity if number of 
elements in estimatedclauses is greater than 1, I wonder
if a parameter can be added to clauselist_apply_dependencies() which 
indicates early return if the second element is added to estimatedclauses.




Hmmm, I'm not sure I understand your point. Are you suggesting there's a 
bug in not updating the bitmap, or would this be an optimization? Can 
you give an example of a query affected by this?



regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-30 Thread Zhihong Yu
On Mon, Aug 30, 2021 at 9:00 AM Tomas Vondra 
wrote:

> On 8/28/21 6:30 PM, Mark Dilger wrote:
> >
> >
> >> On Aug 28, 2021, at 6:52 AM, Tomas Vondra
> >>  wrote:
> >>
> >> Part 0003 fixes handling of those clauses so that we don't treat
> >> them as simple, but it does that by tweaking
> >> statext_is_compatible_clause(), as suggested by Dean.
> >
> > Function examine_opclause_args() doesn't set issimple to anything in
> > the IsA(rightop, Const) case, but assigns *issimplep = issimple at
> > the bottom.  The compiler is not complaining about using a possibly
> > uninitialized variable, but if I change the "return true" on the very
> > next line to "return issimple", the compiler complains quite loudly.
> >
>
> Yeah, true. Thanks for noticing this was a bug - I forgot to set the
> issimple variable in the first branch.
>
> >
> > Some functions define bool *issimple, others bool *issimplep and bool
> > issimple.  You might want to standardize the naming.
> >
>
> I think the naming is standard with respect to the surrounding code. If
> the other parameters use "p" to mark "pointer" then issimplep is used,
> but in other places it's just "issimple". IMHO this is appropriate.
>
> > It's difficult to know what "simple" means in extended_stats.c.
> > There is no file-global comment explaining the concept, and functions
> > like compare_scalars_simple don't have correlates named
> > compare_scalars_complex or such, so the reader cannot infer by
> > comparison what the difference might be between a "simple" case and
> > some non-"simple" case.  The functions' issimple (or issimplep)
> > argument are undocumented.
> >
> > There is a comment:
> >
> > /* * statext_mcv_clauselist_selectivity *  Estimate clauses using
> > the best multi-column statistics.  * * - simple selectivity:
> > Computed without extended statistics, i.e. as if the *
> > columns/clauses were independent. *  */
> >
> > but it takes a while to find if you search for "issimple".
> >
>
> Yeah, true. This was added a while ago when Dean reworked the estimation
> (based on MCV), and it seemed clear back then. But now a comment
> explaining this concept (and how it affects the estimation) would be
> helpful. I'll try digging in the archives for the details.
>
> >
> > In both scalarineqsel_wrapper() and eqsel_internal(), the call to
> > matching_restriction_variables() should usually return false, since
> > comparing a variable to itself is an unusual case.  The next call is
> > to get_restriction_variable(), which repeats the work of examining
> > the left and right variables.  So in almost all cases, after throwing
> > away the results of:
> >
> > examine_variable(root, left, varRelid, );
> > examine_variable(root, right, varRelid, );
> >
> > performed in matching_restriction_variables(), we'll do exactly the
> > same work again (with one variable named differently) in
> > get_restriction_variable():
> >
> > examine_variable(root, left, varRelid, vardata);
> > examine_variable(root, right, varRelid, );
> >
> > That'd be fine if examine_variable() were a cheap function, but it
> > appears not to be.  Do you think you could save the results rather
> > than recomputing them?  It's a little messy, since these are the only
> > two functions out of about ten which follow this pattern, so you'd
> > have to pass NULLs into get_restriction_variable() from the other
> > eight callers, but it still looks like that would be a win.
> >
>
> I had similar concerns, although I don't think those functions are very
> expensive compared to the rest of the estimation code. I haven't done
> any measurements yet, though.
>
> But I don't think saving the results is the way to go - in a way, we
> already store the stats (which seems like the most expensive bit) in
> syscache. It seems better to just simplify examine_variable() so that it
> does not lookup the statistics, which we don't need here at all.
>
>
> The attached version of the patches fixes the other bugs reported here
> so far - most importantly it reworks how we set issimple while examining
> the clauses, so that it's never skips the initialization. Hopefully the
> added comments also explain it a bit more clearly.
>
>
> regards
>
> --
> Tomas Vondra
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
Hi,
For patch 0002,

+   s1 = statext_clauselist_selectivity(root, clauses, varRelid,
+   jointype, sjinfo, rel,
+   &estimatedclauses, false);
+
+   estimated = (bms_num_members(estimatedclauses) == 1);

I took a look at clauselist_apply_dependencies() (called by
statext_clauselist_selectivity) where estimatedclauses is modified.
Since the caller would not use the returned Selectivity if number of
elements in estimatedclauses is greater than 1, I wonder
if a parameter can be added to clauselist_apply_dependencies() which
indicates early return if the second element is added to estimatedclauses.

Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-30 Thread Tomas Vondra

On 8/28/21 6:30 PM, Mark Dilger wrote:




On Aug 28, 2021, at 6:52 AM, Tomas Vondra
 wrote:

Part 0003 fixes handling of those clauses so that we don't treat
them as simple, but it does that by tweaking
statext_is_compatible_clause(), as suggested by Dean.


Function examine_opclause_args() doesn't set issimple to anything in
the IsA(rightop, Const) case, but assigns *issimplep = issimple at
the bottom.  The compiler is not complaining about using a possibly
uninitialized variable, but if I change the "return true" on the very
next line to "return issimple", the compiler complains quite loudly.



Yeah, true. Thanks for noticing this was a bug - I forgot to set the 
issimple variable in the first branch.




Some functions define bool *issimple, others bool *issimplep and bool
issimple.  You might want to standardize the naming.



I think the naming is standard with respect to the surrounding code. If 
the other parameters use "p" to mark "pointer" then issimplep is used, 
but in other places it's just "issimple". IMHO this is appropriate.



It's difficult to know what "simple" means in extended_stats.c.
There is no file-global comment explaining the concept, and functions
like compare_scalars_simple don't have correlates named
compare_scalars_complex or such, so the reader cannot infer by
comparison what the difference might be between a "simple" case and
some non-"simple" case.  The functions' issimple (or issimplep)
argument are undocumented.

There is a comment:

/* * statext_mcv_clauselist_selectivity *  Estimate clauses using
the best multi-column statistics.  * * - simple selectivity:
Computed without extended statistics, i.e. as if the *
columns/clauses were independent. *  */

but it takes a while to find if you search for "issimple".



Yeah, true. This was added a while ago when Dean reworked the estimation 
(based on MCV), and it seemed clear back then. But now a comment 
explaining this concept (and how it affects the estimation) would be 
helpful. I'll try digging in the archives for the details.




In both scalarineqsel_wrapper() and eqsel_internal(), the call to
matching_restriction_variables() should usually return false, since
comparing a variable to itself is an unusual case.  The next call is
to get_restriction_variable(), which repeats the work of examining
the left and right variables.  So in almost all cases, after throwing
away the results of:

examine_variable(root, left, varRelid, ); 
examine_variable(root, right, varRelid, );


performed in matching_restriction_variables(), we'll do exactly the
same work again (with one variable named differently) in
get_restriction_variable():

examine_variable(root, left, varRelid, vardata); 
examine_variable(root, right, varRelid, );


That'd be fine if examine_variable() were a cheap function, but it
appears not to be.  Do you think you could save the results rather
than recomputing them?  It's a little messy, since these are the only
two functions out of about ten which follow this pattern, so you'd
have to pass NULLs into get_restriction_variable() from the other
eight callers, but it still looks like that would be a win.



I had similar concerns, although I don't think those functions are very 
expensive compared to the rest of the estimation code. I haven't done 
any measurements yet, though.


But I don't think saving the results is the way to go - in a way, we 
already store the stats (which seems like the most expensive bit) in 
syscache. It seems better to just simplify examine_variable() so that it 
does not lookup the statistics, which we don't need here at all.



The attached version of the patches fixes the other bugs reported here 
so far - most importantly it reworks how we set issimple while examining 
the clauses, so that it never skips the initialization. Hopefully the 
added comments also explain it a bit more clearly.



regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From 71837f82db88baa9e0250d3868078140cf11f5a7 Mon Sep 17 00:00:00 2001
From: Tomas Vondra 
Date: Thu, 26 Aug 2021 23:01:04 +0200
Subject: [PATCH 1/3] Improve estimates for Var op Var with the same Var

When estimating (Var op Var) conditions, we can treat the case with the
same Var on both sides as a special case, and we can provide better
selectivity estimate than for the generic case.

For example for (a = a) we know it's 1.0, because all rows are expected
to match. Similarly for (a != a), which has selectivity 0.0. And the
same logic can be applied to inequality comparisons, like (a < a) etc.

In principle, those clauses are a bit strange and queries are unlikely
to use them. But query generators sometimes do silly things, and these
checks are quite cheap so it's likely a win.
---
 src/backend/utils/adt/selfuncs.c | 77 +++-
 1 file changed, 76 insertions(+), 1 deletion(-)

diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c

Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-28 Thread Mark Dilger



> On Aug 28, 2021, at 10:18 AM, Zhihong Yu  wrote:
> 
> I wonder if the queries with (a=a) or (a complexity to address.
> Has anyone seen such clause in production queries ?

You might expect clauses like WHERE FALSE to be unusual, but that phrase gets 
added quite a lot by query generators.  Somebody could add "WHERE a != a" in a 
misguided attempt to achieve the same thing.

I wouldn't be terribly surprised if query generators might generate queries 
with a variable number of tables joined together with comparisons between the 
joined tables, and in the degenerate case of only one table end up with a table 
column compared against itself.

You could argue that those people need to fix their queries/generators to not 
do this sort of thing, but the end user affected by such queries may have 
little ability to fix it.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-28 Thread Zhihong Yu
On Sat, Aug 28, 2021 at 9:30 AM Mark Dilger 
wrote:

>
>
> > On Aug 28, 2021, at 6:52 AM, Tomas Vondra 
> wrote:
> >
> > Part 0003 fixes handling of those clauses so that we don't treat them as
> simple, but it does that by tweaking statext_is_compatible_clause(), as
> suggested by Dean.
>
> Function examine_opclause_args() doesn't set issimple to anything in the
> IsA(rightop, Const) case, but assigns *issimplep = issimple at the bottom.
> The compiler is not complaining about using a possibly uninitialized
> variable, but if I change the "return true" on the very next line to
> "return issimple", the compiler complains quite loudly.
>
>
> Some functions define bool *issimple, others bool *issimplep and bool
> issimple.  You might want to standardize the naming.
>
> It's difficult to know what "simple" means in extended_stats.c.  There is
> no file-global comment explaining the concept, and functions like
> compare_scalars_simple don't have correlates named compare_scalars_complex
> or such, so the reader cannot infer by comparison what the difference might
> be between a "simple" case and some non-"simple" case.  The functions'
> issimple (or issimplep) argument are undocumented.
>
> There is a comment:
>
> /*
>  * statext_mcv_clauselist_selectivity
>  *  Estimate clauses using the best multi-column statistics.
>   
>  *
>  * - simple selectivity:  Computed without extended statistics, i.e. as if
> the
>  * columns/clauses were independent.
>  *
> 
>  */
>
> but it takes a while to find if you search for "issimple".
>
>
> In both scalarineqsel_wrapper() and eqsel_internal(), the call to
> matching_restriction_variables() should usually return false, since
> comparing a variable to itself is an unusual case.  The next call is to
> get_restriction_variable(), which repeats the work of examining the left
> and right variables.  So in almost all cases, after throwing away the
> results of:
>
> examine_variable(root, left, varRelid, );
> examine_variable(root, right, varRelid, );
>
> performed in matching_restriction_variables(), we'll do exactly the same
> work again (with one variable named differently) in
> get_restriction_variable():
>
> examine_variable(root, left, varRelid, vardata);
> examine_variable(root, right, varRelid, );
>
> That'd be fine if examine_variable() were a cheap function, but it appears
> not to be.  Do you think you could save the results rather than recomputing
> them?  It's a little messy, since these are the only two functions out of
> about ten which follow this pattern, so you'd have to pass NULLs into
> get_restriction_variable() from the other eight callers, but it still looks
> like that would be a win.
>
> —
> Mark Dilger
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>
> Hi,
I wonder if the queries with (a=a) or (a<a) are worth the complexity to
address. Has anyone seen such clauses in production queries?

Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-28 Thread Mark Dilger



> On Aug 28, 2021, at 6:52 AM, Tomas Vondra  
> wrote:
> 
> Part 0003 fixes handling of those clauses so that we don't treat them as 
> simple, but it does that by tweaking statext_is_compatible_clause(), as 
> suggested by Dean.

Function examine_opclause_args() doesn't set issimple to anything in the 
IsA(rightop, Const) case, but assigns *issimplep = issimple at the bottom.  The 
compiler is not complaining about using a possibly uninitialized variable, but 
if I change the "return true" on the very next line to "return issimple", the 
compiler complains quite loudly.


Some functions define bool *issimple, others bool *issimplep and bool issimple. 
 You might want to standardize the naming.

It's difficult to know what "simple" means in extended_stats.c.  There is no 
file-global comment explaining the concept, and functions like 
compare_scalars_simple don't have correlates named compare_scalars_complex or 
such, so the reader cannot infer by comparison what the difference might be 
between a "simple" case and some non-"simple" case.  The functions' issimple 
(or issimplep) argument are undocumented.

There is a comment:

/*
 * statext_mcv_clauselist_selectivity
 *  Estimate clauses using the best multi-column statistics.
  
 *
 * - simple selectivity:  Computed without extended statistics, i.e. as if the
 * columns/clauses were independent.
 *

 */

but it takes a while to find if you search for "issimple".


In both scalarineqsel_wrapper() and eqsel_internal(), the call to 
matching_restriction_variables() should usually return false, since comparing a 
variable to itself is an unusual case.  The next call is to 
get_restriction_variable(), which repeats the work of examining the left and 
right variables.  So in almost all cases, after throwing away the results of:

examine_variable(root, left, varRelid, );
examine_variable(root, right, varRelid, );

performed in matching_restriction_variables(), we'll do exactly the same work 
again (with one variable named differently) in get_restriction_variable():

examine_variable(root, left, varRelid, vardata);
examine_variable(root, right, varRelid, );

That'd be fine if examine_variable() were a cheap function, but it appears not 
to be.  Do you think you could save the results rather than recomputing them?  
It's a little messy, since these are the only two functions out of about ten 
which follow this pattern, so you'd have to pass NULLs into 
get_restriction_variable() from the other eight callers, but it still looks 
like that would be a win. 

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-28 Thread Zhihong Yu
On Sat, Aug 28, 2021 at 6:53 AM Tomas Vondra 
wrote:

> Hi,
>
> The attached patch series is modified to improve estimates for these
> special clauses (Var op Var with the same var on both sides) without
> extended statistics. This is done in 0001, and it seems fairly simple
> and cheap.
>
> The 0002 part is still the same patch as on 2021/07/20. Part 0003 fixes
> handling of those clauses so that we don't treat them as simple, but it
> does that by tweaking statext_is_compatible_clause(), as suggested by
> Dean. It does work, although it's a bit more invasive than simply
> checking the shape of clause in statext_mcv_clauselist_selectivity.
>
> I do have results for the randomly generated queries, and this does
> improve the situation a lot - pretty much all the queries with (a=a) or
> (a<a) are now estimated much better.
> That being said, I'm still not sure if this is an issue in real-world
> applications, or whether we're solving something because of synthetic
> queries generated by the randomized generator. But the checks seem
> fairly cheap, so maybe it doesn't matter too much.
>
>
> regards
>
> --
> Tomas Vondra
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
Hi,
For 0001-Improve-estimates-for-Var-op-Var-with-the-s-20210828.patch :

+ * form (variable op variable) with the save variable on both sides.

typo: save -> same

Cheers


Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-28 Thread Tomas Vondra

Hi,

The attached patch series is modified to improve estimates for these 
special clauses (Var op Var with the same var on both sides) without 
extended statistics. This is done in 0001, and it seems fairly simple 
and cheap.


The 0002 part is still the same patch as on 2021/07/20. Part 0003 fixes 
handling of those clauses so that we don't treat them as simple, but it 
does that by tweaking statext_is_compatible_clause(), as suggested by 
Dean. It does work, although it's a bit more invasive than simply 
checking the shape of clause in statext_mcv_clauselist_selectivity.


I do have results for the randomly generated queries, and this does 
improve the situation a lot - pretty much all the queries with (a=a) or 
(a<a) are now estimated much better.

That being said, I'm still not sure if this is an issue in real-world 
applications, or whether we're solving something because of synthetic 
queries generated by the randomized generator. But the checks seem 
fairly cheap, so maybe it doesn't matter too much.



regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From 708587e495483aff4b84487103a545b6e860d0c0 Mon Sep 17 00:00:00 2001
From: Tomas Vondra 
Date: Thu, 26 Aug 2021 23:01:04 +0200
Subject: [PATCH 1/3] Improve estimates for Var op Var with the same Var

When estimating (Var op Var) conditions, we can treat the case with the
same Var on both sides as a special case, and we can provide better
selectivity estimate than for the generic case.

For example for (a = a) we know it's 1.0, because all rows are expected
to match. Similarly for (a != a), which has selectivity 0.0. And the
same logic can be applied to inequality comparisons, like (a < a) etc.

In principle, those clauses are a bit strange and queries are unlikely
to use them. But query generators sometimes do silly things, and these
checks are quite cheap so it's likely a win.
---
 src/backend/utils/adt/selfuncs.c | 75 +++-
 1 file changed, 74 insertions(+), 1 deletion(-)

diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 0c8c05f6c2..0baeaa040d 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -210,7 +210,8 @@ static bool get_actual_variable_endpoint(Relation heapRel,
 		 MemoryContext outercontext,
 		 Datum *endpointDatum);
 static RelOptInfo *find_join_input_rel(PlannerInfo *root, Relids relids);
-
+static bool matching_restriction_variables(PlannerInfo *root, List *args,
+		   int varRelid);
 
 /*
  *		eqsel			- Selectivity of "=" for any data types.
@@ -256,6 +257,14 @@ eqsel_internal(PG_FUNCTION_ARGS, bool negate)
 		}
 	}
 
+	/*
+	 * It it's (variable = variable) with the same variable on both sides, it's
+	 * a special case and we know it does not filter anything, which means the
+	 * selectivity is 1.0.
+	 */
+	if (matching_restriction_variables(root, args, varRelid))
+		return (negate) ? 0.0 : 1.0;
+
 	/*
 	 * If expression is not variable = something or something = variable, then
 	 * punt and return a default estimate.
@@ -1408,6 +1417,20 @@ scalarineqsel_wrapper(PG_FUNCTION_ARGS, bool isgt, bool iseq)
 	Oid			consttype;
 	double		selec;
 
+	/*
+	 * Handle (variable < variable) and (variable <= variable) with the same
+	 * variable on both sides as a special case. The strict inequality should
+	 * not match anything, hence selectivity is 0.0. The other case is about
+	 * the same as equality, so selectivity is 1.0.
+	 */
+	if (matching_restriction_variables(root, args, varRelid))
+	{
+		if (iseq)
+			PG_RETURN_FLOAT8(1.0);
+		else
+			PG_RETURN_FLOAT8(0.0);
+	}
+
 	/*
 	 * If expression is not variable op something or something op variable,
 	 * then punt and return a default estimate.
@@ -4871,6 +4894,56 @@ get_restriction_variable(PlannerInfo *root, List *args, int varRelid,
 	return false;
 }
 
+
+/*
+ * matching_restriction_variable
+ *		Examine the args of a restriction clause to see if it's of the
+ *		form (variable op variable) with the save variable on both sides.
+ *
+ * Inputs:
+ *	root: the planner info
+ *	args: clause argument list
+ *	varRelid: see specs for restriction selectivity functions
+ *
+ * Returns true if the same variable is on both sides, otherwise false.
+ */
+static bool
+matching_restriction_variables(PlannerInfo *root, List *args, int varRelid)
+{
+	Node	   *left,
+			   *right;
+	VariableStatData ldata;
+	VariableStatData rdata;
+	bool		res = false;
+
+	/* Fail if not a binary opclause (probably shouldn't happen) */
+	if (list_length(args) != 2)
+		return false;
+
+	left = (Node *) linitial(args);
+	right = (Node *) lsecond(args);
+
+	/*
+	 * Examine both sides.  Note that when varRelid is nonzero, Vars of other
+	 * relations will be treated as pseudoconstants.
+	 */
+	examine_variable(root, left, varRelid, );
+	examine_variable(root, right, varRelid, );
+
+	/*
+	 * If both sides are variable, and are equal, we win.
+	 */
+	if ((ldata.rel != NULL && rdata.rel 

Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-20 Thread Robert Haas
On Fri, Aug 20, 2021 at 3:32 PM Tomas Vondra
 wrote:
> Yeah, I agree this seems like the right approach (except I guess you
> meant "a != a" and not "a != 0").

Err, yes.

> Assuming we want to do something about
> these clauses at all - I'm still wondering if those clauses are common
> in practice or just synthetic.

Well, they are certainly less common than some things, but query
generators do a lot of wonky things.

Also, as a practical matter, it might be cheaper to do something about
them than to not do something about them. I don't really understand
the mechanism of operation of the patch, but I guess if somebody
writes "WHERE a = b", one thing you could do would be check whether
any of the MCVs for a are also MCVs for b, and if so you could
estimate something on that basis. If you happened to have extended
statistics for (a, b) then I guess you could do even better using, uh,
math, or something. But all of that sounds like hard work, and
checking whether "a" happens to be the same as "b" sounds super-cheap
by comparison.

If, as normally will be the case, the two sides are not the same, you
haven't really lost anything, because the expenditure of cycles to
test varno and varattno for equality must be utterly trivial in
comparison with fetching stats data and looping over MCV lists and
things. But if on occasion you find out that they are the same, then
you win! You can give a more accurate estimate with less computational
work.

-- 
Robert Haas
EDB: http://www.enterprisedb.com




Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-20 Thread Tomas Vondra




On 8/20/21 8:56 PM, Robert Haas wrote:

On Fri, Aug 20, 2021 at 2:21 PM Tomas Vondra
 wrote:

After looking at this for a while, it's clear the main issue is handling
of clauses referencing the same Var twice, like for example (a = a) or
(a < a). But it's not clear to me if this is something worth fixing, or
if extended statistics is the right place to do it.

If those clauses are worth the effort, why not handle them better
even without extended statistics? We can easily evaluate these clauses
on per-column MCV, because they only reference a single Var.


+1.

It seems to me that what we ought to do is make "a < a", "a > a", and
"a != 0" all have an estimate of zero, and make "a <= a", "a >= a",
and "a = a" estimate 1-nullfrac. The extended statistics mechanism can
just ignore the first three types of clauses; the zero estimate has to
be 100% correct. It can't necessarily ignore the second three cases,
though. If the query says "WHERE a = a AND b = 1", "b = 1" may be more
or less likely given that a is known to be not null, and extended
statistics can tell us that.



Yeah, I agree this seems like the right approach (except I guess you 
meant "a != a" and not "a != 0"). Assuming we want to do something about 
these clauses at all - I'm still wondering if those clauses are common 
in practice or just synthetic.



regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-20 Thread Tomas Vondra

On 8/18/21 12:43 PM, Tomas Vondra wrote:

Hi Mark,

This thread inspired me to do something fairly similar - a generator 
that generates queries of varying complexity, executes them on a table 
with and without extended statistics. I've been thinking about that 
before, but this finally pushed me to do that, and some of the results 
are fairly interesting ...


I've pushed everything (generator and results) to this github repo:

   https://github.com/tvondra/stats-test

with a summary of all results here:

   https://github.com/tvondra/stats-test/blob/master/results.md



FWIW I've pushed slightly reworked scripts and results - there are 
results from two machines - xeon and i5. Xeon is mostly the same as 
before, with some minor fixes, while i5 does not allow clauses 
referencing the same column twice (per discussion in this thread).


I think there was a bug in the original plot script, combining incorrect 
data series in some cases, causing (at least) some of the strange 
patterns mentioned.


I've also made the charts easier to read by splitting the cases into 
separate plots and using transparency. I've also added png version back, 
because plotting the .svg is quite slow.



regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-20 Thread Robert Haas
On Fri, Aug 20, 2021 at 2:21 PM Tomas Vondra
 wrote:
> After looking at this for a while, it's clear the main issue is handling
> of clauses referencing the same Var twice, like for example (a = a) or
> (a < a). But it's not clear to me if this is something worth fixing, or
> if extended statistics is the right place to do it.
>
> If those clauses are worth the effort, why not handle them better
> even without extended statistics? We can easily evaluate these clauses
> on per-column MCV, because they only reference a single Var.

+1.

It seems to me that what we ought to do is make "a < a", "a > a", and
"a != 0" all have an estimate of zero, and make "a <= a", "a >= a",
and "a = a" estimate 1-nullfrac. The extended statistics mechanism can
just ignore the first three types of clauses; the zero estimate has to
be 100% correct. It can't necessarily ignore the second three cases,
though. If the query says "WHERE a = a AND b = 1", "b = 1" may be more
or less likely given that a is known to be not null, and extended
statistics can tell us that.
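
To make that concrete, here is a hypothetical sketch of the inequality
path only (not the actual patch; it assumes the patch's
matching_restriction_variables() check has already succeeded, and that the
Var's vardata has been obtained the way scalarineqsel_wrapper() already
does via get_restriction_variable()):

	if (matching_restriction_variables(root, args, varRelid))
	{
		double		nullfrac = 0.0;

		if (HeapTupleIsValid(vardata.statsTuple))
			nullfrac = ((Form_pg_statistic) GETSTRUCT(vardata.statsTuple))->stanullfrac;

		/* a <= a and a >= a match all non-null rows; a < a and a > a match none */
		PG_RETURN_FLOAT8(iseq ? 1.0 - nullfrac : 0.0);
	}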

-- 
Robert Haas
EDB: http://www.enterprisedb.com




Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-20 Thread Mark Dilger



> On Aug 20, 2021, at 11:20 AM, Tomas Vondra  
> wrote:
> 
> I think we can either reject the patch, which would mean we don't consider 
> (Var op Var) clauses to be common/important enough. Or we need to improve the 
> existing selectivity functions (even those without extended statistics) to 
> handle those clauses in a smarter way. Otherwise there'd be 
> strange/surprising inconsistencies.

For datatypes with very few distinct values (bool, some enums, etc.) keeping an 
mcv list of (a,b) pairs seems helpful.  The patch may be worth keeping for such 
cases.  In other cases, I don't much see the point.

It seems that sampling the fraction of rows where (A op B) is true for any 
given op would be more helpful.
 
—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-20 Thread Tomas Vondra

On 8/18/21 3:16 PM, Mark Dilger wrote:




On Aug 18, 2021, at 3:43 AM, Tomas Vondra
 wrote:

I've pushed everything (generator and results) to this github repo


Thanks for the link.  I took a very brief look.  Perhaps we can
combine efforts.  I need to make progress on several other patches
first, but hope to get back to this.



Sure - it'd be great to combine efforts. That's why I posted my scripts 
& results. I understand there's plenty other work for both of us, so 
take your time - no rush.


After looking at this for a while, it's clear the main issue is handling 
of clauses referencing the same Var twice, like for example (a = a) or 
(a < a). But it's not clear to me if this is something worth fixing, or 
if extended statistics is the right place to do it.


If those clauses are worth the effort, why not handle them better 
even without extended statistics? We can easily evaluate these clauses 
on per-column MCV, because they only reference a single Var.


It'd be rather strange if for example

select * from t where (a < a)

is mis-estimated simply because it can't use extended statistics 
(there's just a single Var, so we won't consider extended stats), while


select * from t where (a < a) and b = 1

suddenly gets much better thanks to extended stats on (a,b), even when 
(a,b) are perfectly independent.


So I think we better make eqsel/ineqsel smarter about estimating those 
clauses, assuming we consider them important enough.



I think we can either reject the patch, which would mean we don't 
consider (Var op Var) clauses to be common/important enough. Or we need 
to improve the existing selectivity functions (even those without 
extended statistics) to handle those clauses in a smarter way. Otherwise 
there'd be strange/surprising inconsistencies.



regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-18 Thread Mark Dilger



> On Aug 18, 2021, at 3:43 AM, Tomas Vondra  
> wrote:
> 
> I've pushed everything (generator and results) to this github repo

Thanks for the link.  I took a very brief look.  Perhaps we can combine 
efforts.  I need to make progress on several other patches first, but hope to 
get back to this.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-18 Thread Tomas Vondra

Hi Mark,

This thread inspired me to do something fairly similar - a generator 
that generates queries of varying complexity, executes them on a table 
with and without extended statistics. I've been thinking about that 
before, but this finally pushed me to do that, and some of the results 
are fairly interesting ...


I've pushed everything (generator and results) to this github repo:

  https://github.com/tvondra/stats-test

with a summary of all results here:

  https://github.com/tvondra/stats-test/blob/master/results.md


Some basic facts about the generator.py (see query_generator):

* It's using a fixed seed to make it deterministic.

* A small fraction of generated queries is sampled and executed (5%).

* Thanks to a fixed seed we generate/sample the same set of queries for 
different runs, which allows us to compare runs easily.


* The queries use 2 - 5 clauses, either (Var op Const) or (Var op Var).

* The operators are the usual equality/inequality ones.

* The clauses are combined using AND/OR (also randomly picked).

* There's a random set of parens added, to vary the operator precedence 
(otherwise it'd be driven entirely by AND/OR).


* There are two datasets - a random and correlated one, with different 
number of distinct values in each column (10, 100, 1000, 10000).


* The statistics target is set to 10, 100, 1000, 10000.


It's a bit hacky, with various bits hard-coded at the moment. But it 
could be extended to do other stuff fairly easily, I think.


Anyway, the repository contains results for three cases:

1) master
2) patched: master with the (Var op Var) patch
3) fixed: patched, with a fix for "simple" clauses (a crude patch)

And for each case we have three row counts:

* actual (from explain analyze)
* estimate without extended stats
* estimate with extended stats

And then we can calculate "estimation error" as

  estimate / actual

both with and without statistics. Results for two cases can be plotted 
as a scatter plot, with the two estimation errors as (x,y) values. The 
idea is that this shows how a patch affects estimates - a point (100,10) 
means that it was 100x over-estimated, and with the patch it's just 10x, 
and similarly for other points.


This is what the charts at

  https://github.com/tvondra/stats-test/blob/master/results.md

do, for each combination of parameters (dataset, statistics target and 
number of distinct values). There's one chart without extended stats, 
one with extended stats.


An "ideal" chart would look like like a single point (1,1) which means 
"accurate estimates without/with patch", or (?,1) which means "poor 
estimates before, accurate estimates now". Diagonal means "no change".


In principle, we expect the charts to look like this:

* random: diagonal charts, because there should be no extended stats 
built, hence no impact on estimates is expected


* correlated: getting closer to 1.0, which looks like a horizontal line 
in the chart


Consider for example this:

https://github.com/tvondra/stats-test/raw/master/correlated-1000-10.png

which clearly shows that the first patch is almost exactly the same as 
master, while with the fix the estimates improve significantly (and are 
almost perfect), at least with the statistics.


Without stats there's a bunch of queries that suddenly get from 
"perfect" to much worse (looks like a vertical line on the left chart).


But there are other "strange" cases with "interesting patterns", like 
for example


* 
https://raw.githubusercontent.com/tvondra/stats-test/master/correlated-100-100.png


* 
https://raw.githubusercontent.com/tvondra/stats-test/master/correlated-1000-100.png


* 
https://raw.githubusercontent.com/tvondra/stats-test/master/random-1-10.png


This likely shows the patches are a significant improvement for some 
queries (either getting better than master, or even making the estimates 
pretty accurate). But it's probably worth looking into the queries that 
got worse, etc.



regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From 4abb267dfdd46e5c2157ff6d201c11d068331519 Mon Sep 17 00:00:00 2001
From: Tomas Vondra 
Date: Sun, 15 Aug 2021 13:23:13 +0200
Subject: [PATCH 1/2] patch 20210720

---
 src/backend/optimizer/path/clausesel.c|  37 +++-
 src/backend/statistics/extended_stats.c   |  83 ++---
 src/backend/statistics/mcv.c  | 172 +-
 .../statistics/extended_stats_internal.h  |   4 +-
 src/test/regress/expected/stats_ext.out   |  96 ++
 src/test/regress/sql/stats_ext.sql|  26 +++
 6 files changed, 341 insertions(+), 77 deletions(-)

diff --git a/src/backend/optimizer/path/clausesel.c b/src/backend/optimizer/path/clausesel.c
index d263ecf082..6a7e9ceea5 100644
--- a/src/backend/optimizer/path/clausesel.c
+++ b/src/backend/optimizer/path/clausesel.c
@@ -714,6 +714,7 @@ clause_selectivity_ext(PlannerInfo *root,
 	Selectivity s1 = 0.5;		/* default for any 

Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-12 Thread Mark Dilger



> On Aug 11, 2021, at 3:48 PM, Mark Dilger  wrote:
> 
> I'm working on a correlated stats test as I write this.  I'll get back to you 
> when I have results.

Ok, the tests showed no statistically significant regressions.  All tests 
included the same sorts of WHERE-clause expressions as used in the tests from 
yesterday's email.

The first test created loosely correlated data and found no significant row 
estimate improvements or regressions.

The second test of more tightly correlated data showed a row estimate 
improvement overall, with no class of WHERE clause showing an estimate 
regression.  I think the apparent regressions from yesterday were just 
statistical noise.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-11 Thread Tomas Vondra

On 8/12/21 12:48 AM, Mark Dilger wrote:
>> On Aug 11, 2021, at 3:45 PM, Tomas Vondra wrote:
>>
>> As I said in my last reply, I'm not sure it's particularly useful
>> to look at overall results from data sets with independent columns.
>> That's not what extended statistics are for, and people should not
>> create them in those cases ...
>
> We sent our last emails more or less simultaneously.  I'm not
> ignoring your email; I just hadn't seen it yet when I sent mine.

Apologies, I didn't mean to imply you're ignoring my messages.

>> Maybe it'd be better to focus on cases with the largest difference
>> in estimates, and investigate those more closely.
>
> Yeah, I'm working on a correlated stats test as I write this.  I'll
> get back to you when I have results.

Cool, thanks!


regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-11 Thread Mark Dilger



> On Aug 11, 2021, at 3:45 PM, Tomas Vondra  
> wrote:
> 
> As I said in my last reply, I'm not sure it's particularly useful to look at 
> overall results from data sets with independent columns. That's not what 
> extended statistics are for, and people should not create them in those cases 
> ...

We sent our last emails more or less simultaneously.  I'm not ignoring your 
email; I just hadn't seen it yet when I sent mine.

> Maybe it'd be better to focus on cases with the largest difference in 
> estimates, and investigate those more closely.

Yeah, I'm working on a correlated stats test as I write this.  I'll get back to 
you when I have results.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-11 Thread Tomas Vondra




On 8/12/21 12:02 AM, Mark Dilger wrote:
> ...
>
> Once the data is made deterministic, the third set looks slightly
> better than the first, rather than slightly worse.  But almost 20% of
> the query types still look worse after applying the patch.  I'm going
> to dig deeper into those to see if that conclusion survives bumping up
> the size of the dataset.  It will take quite some time to run the tests
> with a huge dataset, but I don't see how else to investigate this.



As I said in my last reply, I'm not sure it's particularly useful to 
look at overall results from data sets with independent columns. That's 
not what extended statistics are for, and people should not create them 
in those cases ...


Maybe it'd be better to focus on cases with the largest difference in 
estimates, and investigate those more closely.



regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-11 Thread Mark Dilger



> On Aug 11, 2021, at 10:38 AM, Tomas Vondra  
> wrote:
> 
> So I'm a bit puzzled about the claim that random data make the problems more 
> extreme. Can you explain?

Hmm... you appear to be right.

I changed the gentest.pl script to fill the tables with randomized data, but 
the random data is being regenerated each test run (since the calls to random() 
are in the gentest.sql file).  Adding an explicit setseed() call in the test to 
make sure the data is the same before and after applying your patch eliminates 
the differences.

So there are three tests here.  The first tests deterministic orderly data.  
The second tests deterministic random data without repeats, and hence without 
a meaningful MCV list.  The third tests deterministic random data rounded into 
twenty buckets skewed towards lower-numbered buckets, and hence with both 
repeats and a meaningful MCV list.

The original test set:

TOTAL:
better: 77827
worse: 12317

The random test set, with setseed() calls to make it deterministic:

TOTAL:
better: 49708
worse: 19393

The random test set, with setseed() calls to make it deterministic, plus 
rounding into buckets:

TOTAL:
better: 81764
worse: 19594

Once the data is made deterministic, the third set looks slightly better than 
the first, rather than slightly worse.  But almost 20% of the query types still 
look worse after applying the patch.  I'm going to dig deeper into those to see 
if that conclusion survives bumping up the size of the dataset.  It will take 
quite some time to run the tests with a huge dataset, but I don't see how else 
to investigate this.


—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-11 Thread Tomas Vondra



On 8/11/21 5:17 PM, Mark Dilger wrote:
>> On Aug 11, 2021, at 7:51 AM, Mark Dilger wrote:
>>
>> I'll go test random data designed to have mcv lists of significance
>
> Done.  The data for column_i is set to floor(random()^i*20).
> column_1 therefore is evenly distributed between 0..19, with
> successive columns weighted more towards smaller values.
>
> This still gives (marginally) worse results than the original test I
> posted, but better than the completely random data from the last post.
> After the patch, 72294 estimates got better and 30654 got worse.  The
> biggest losers from this data set are:
>
> better:0, worse:31:  A >= B or A = A or not A = A
> better:0, worse:31:  A >= B or A = A
> better:0, worse:31:  A >= B or not A <> A
> better:0, worse:31:  A >= A or A = B or not B = A
> better:0, worse:31:  A >= B and not A < A or A = A
> better:0, worse:31:  A = A or not A > B or B <> A
> better:0, worse:31:  A >= B or not A <> A or not A >= A
> better:0, worse:32:  B < A and B > C and not C < B<
> better:1, worse:65:  A <> C and A <= B  <
> better:0, worse:33:  B <> A or B >= B
> better:0, worse:33:  B <> A or B <= B
> better:0, worse:33:  B <= A or B = B or not B > B
> better:0, worse:33:  B <> A or not B >= B or not B < B
> better:0, worse:33:  B = A or not B > B or B = B
> better:0, worse:44:  A = B or not A > A or A = A
> better:0, worse:44:  A <> B or A <= A
> better:0, worse:44:  A <> B or not A >= A or not A < A
> better:0, worse:44:  A <= B or A = A or not A > A
> better:0, worse:44:  A <> B or A >= A
>
> Of which, a few do not contain columns compared against themselves,
> marked with < above.
>
> I don't really know what to make of these results.  It doesn't
> bother me that any particular estimate gets worse after the patch.
> That's just the nature of estimating.  But it does bother me a bit
> that some types of estimates consistently get worse.  We should
> either show that my analysis is wrong about that, or find a way to
> address it to avoid performance regressions.  If I'm right that there
> are whole classes of estimates that are made consistently worse, then
> it stands to reason some users will have those data distributions and
> queries, and could easily notice.


I'm not quite sure that's really a problem. Extended statistics are 
meant for correlated columns, and it's mostly expected that the estimates 
may be a bit worse for random / independent data. The idea is that 
statistics will be created only for correlated columns, in which case they 
should improve the estimates. I'd be way more concerned if you observed 
consistently worse estimates on such a data set.


Of course, there may be errors - the incorrect handling of (A op A) is 
an example of such issue, probably.



regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-11 Thread Tomas Vondra



On 8/11/21 4:51 PM, Mark Dilger wrote:
>> On Aug 11, 2021, at 5:08 AM, Dean Rasheed wrote:
>>
>> This feels like rather an artificial example though. Is there any real
>> use for this sort of clause?
>
> The test generated random combinations of clauses and then checked if
> any had consistently worse performance.  These came up.  I don't
> know that they represent anything real.
>
> What was not random in the tests was the data in the tables.  I've
> gotten curious if these types of clauses (with columns compared
> against themselves) would still be bad for random rather than orderly
> data sets.  I'll go check
>
>    testing
>
> Wow.  Randomizing the data makes the problems even more extreme.  It
> seems my original test set was actually playing to this patch's
> strengths, not its weaknesses.  I've changed the columns to double
> precision and filled the columns with random() data, where column1 gets
> random()^1, column2 gets random()^2, etc.  So on average the larger
> numbered columns will be smaller, and the mcv list will be irrelevant,
> since values should not tend to repeat.




I tried using the same randomized data set, i.e. essentially

  insert into t
  select random(), pow(random(), 2), pow(random(), 3), pow(random(),4)
    from generate_series(1,100) s(i);

  create statistics s (mcv) on a, b, c from t;

But I don't see any difference compared to the estimates without 
extended statistics, which is not surprising because there should be no 
MCV list built. So I'm a bit puzzled about the claim that random data 
make the problems more extreme. Can you explain?


regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-11 Thread Tomas Vondra




On 8/11/21 4:51 PM, Mark Dilger wrote:
>> On Aug 11, 2021, at 5:08 AM, Dean Rasheed wrote:
>>
>> This feels like rather an artificial example though. Is there any real
>> use for this sort of clause?
>
> The test generated random combinations of clauses and then checked if any had
> consistently worse performance.  These came up.  I don't know that they
> represent anything real.
>
> What was not random in the tests was the data in the tables.  I've gotten
> curious if these types of clauses (with columns compared against themselves)
> would still be bad for random rather than orderly data sets.  I'll go check
>
>    testing
>
> Wow.  Randomizing the data makes the problems even more extreme.  It seems my
> original test set was actually playing to this patch's strengths, not its
> weaknesses.  I've changed the columns to double precision and filled the
> columns with random() data, where column1 gets random()^1, column2 gets
> random()^2, etc.  So on average the larger numbered columns will be smaller,
> and the mcv list will be irrelevant, since values should not tend to repeat.
>
> Over all queries, 47791 have better estimates after the patch, but 34802 had
> worse estimates after the patch (with the remaining 17407 queries having
> roughly equal quality).
>
> The worst estimates are still ones that have a column compared to itself:
>
> better:0, worse:33:  A <= B or A <= A or A <= A
> better:0, worse:33:  A <= B or A = A or not A <> A
> better:0, worse:33:  A <= B or A >= A or not A <> A
> better:0, worse:33:  A <> B or A <= A
> better:0, worse:33:  A <> B or A <= A or A <> A
> better:0, worse:33:  A <> B or A <= A or A >= A
> better:0, worse:33:  A <> B or A <= A or not A = A
> better:0, worse:33:  A <> B or A > A or not A < A
> better:0, worse:33:  A <> B or A >= A
> better:0, worse:33:  A <> B or A >= A and A <= A
> better:0, worse:33:  A = B or not A > A or not A > A
> better:0, worse:33:  A >= B or not A <> A or A = A
> better:0, worse:39:  B <= A or B <= B or B <= B
> better:0, worse:39:  B <= A or B = B or not B <> B
> better:0, worse:39:  B <= A or B >= B or not B <> B
> better:0, worse:39:  B <> A or B <= B
> better:0, worse:39:  B <> A or B <= B or B <> B
> better:0, worse:39:  B <> A or B <= B or B >= B
> better:0, worse:39:  B <> A or B <= B or not B = B
> better:0, worse:39:  B <> A or B > B or not B < B
> better:0, worse:39:  B <> A or B >= B
> better:0, worse:39:  B <> A or B >= B and B <= B
> better:0, worse:39:  B = A or not B > B or not B > B
> better:0, worse:39:  B >= A or not B <> B or B = B



The other interesting thing all those clauses have in common is that 
they're OR clauses. And we handle that a bit differently. But I think 
the "strange" clauses with the same Var on both sides are the main issue, 
and not detecting them as "simple" clauses should fix that.



> But there are plenty that got worse without that, such as the following
> examples:
>
> better:25, worse:39:  A < B and A < B or B > A
> better:10, worse:48:  A < B and A < C
> better:10, worse:54:  A < B and A < C or C > A
>
> I'll go test random data designed to have mcv lists of significance



Hard to say without having a look at the data set, but there'll always 
be cases where the extended stats perform a bit worse, due to (a) luck 
and (b) the stats covering only a small fraction of the table.


But of course, it's worth investigating the suspicious cases.


regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-11 Thread Mark Dilger



> On Aug 11, 2021, at 7:51 AM, Mark Dilger  wrote:
> 
> I'll go test random data designed to have mcv lists of significance

Done.  The data for column_i is set to floor(random()^i*20).  column_1 
therefore is evenly distributed between 0..19, with successive columns weighted 
more towards smaller values.

This still gives (marginally) worse results than the original test I posted, 
but better than the completely random data from the last post.  After the 
patch, 72294 estimates got better and 30654 got worse.  The biggest losers from 
this data set are:

better:0, worse:31:  A >= B or A = A or not A = A
better:0, worse:31:  A >= B or A = A
better:0, worse:31:  A >= B or not A <> A
better:0, worse:31:  A >= A or A = B or not B = A
better:0, worse:31:  A >= B and not A < A or A = A
better:0, worse:31:  A = A or not A > B or B <> A
better:0, worse:31:  A >= B or not A <> A or not A >= A
better:0, worse:32:  B < A and B > C and not C < B<
better:1, worse:65:  A <> C and A <= B  <
better:0, worse:33:  B <> A or B >= B
better:0, worse:33:  B <> A or B <= B
better:0, worse:33:  B <= A or B = B or not B > B
better:0, worse:33:  B <> A or not B >= B or not B < B
better:0, worse:33:  B = A or not B > B or B = B
better:0, worse:44:  A = B or not A > A or A = A
better:0, worse:44:  A <> B or A <= A
better:0, worse:44:  A <> B or not A >= A or not A < A
better:0, worse:44:  A <= B or A = A or not A > A
better:0, worse:44:  A <> B or A >= A

Of which, a few do not contain columns compared against themselves, marked with 
< above.

I don't really know what to make of these results.  It doesn't bother me that 
any particular estimate gets worse after the patch.  That's just the nature of 
estimating.  But it does bother me a bit that some types of estimates 
consistently get worse.  We should either show that my analysis is wrong about 
that, or find a way to address it to avoid performance regressions.  If I'm 
right that there are whole classes of estimates that are made consistently 
worse, then it stands to reason some users will have those data distributions 
and queries, and could easily notice.

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-11 Thread Tomas Vondra




On 8/11/21 2:08 PM, Dean Rasheed wrote:
>> On Wed, 11 Aug 2021 at 00:05, Tomas Vondra wrote:
>>
>> So with the statistics, the estimate gets a bit worse. The reason is
>> fairly simple - if you look at the two parts of the OR clause, we get this:
>>
>>   clause                actual    no stats    with stats
>>   ------------------------------------------------------
>>   (A < B and A <> A)         0      331667             1
>>   not (A < A)              100        3333
>>
>> This clearly shows that the first clause is much improved, while the
>> (A < A) is estimated the same way, because the clause has a single Var,
>> so it's considered to be "simple" and we ignore the MCV selectivity and
>> just use the simple_sel calculated by clause_selectivity_ext.
>>
>> And the 33 and 331667 just happen to be closer to the actual row
>> count. But that's mostly by luck, clearly.
>>
>> But now that I think about it, maybe the problem really is in how
>> statext_mcv_clauselist_selectivity treats this clause - the definition
>> of "simple" clauses as "has one attnum" was appropriate when only
>> clauses (Var op Const) were supported. But with (Var op Var) that's
>> probably not correct anymore.
>
> Hmm, interesting. Clearly the fact that the combined estimate without
> extended stats was better was just luck, based on its large
> overestimate of the first clause. But it's also true that a (Var op
> Var) clause should not be treated as simple, because "simple" in this
> context is meant to be for clauses that are likely to be better
> estimated with regular stats, whereas in this case, extended stats
> would almost certainly do better on the second clause.


I don't see why extended stats would do better on the second clause. I 
mean, if you have (A < A) then extended stats pretty much "collapse" 
into per-column stats. We could get almost the same estimate from a 
single-column MCV list, etc. The reason why that does not happen is that 
we just treat it as a range clause, and assign it a default 33% estimate.


But we could make that a bit smarter, and assign better estimates to 
those clauses


  (A < A) => 0.0
  (A = A) => 1.0
  (A <= A) => 1.0

And that'd give us the same estimates, I think. Not sure that's worth 
it, because (A op A) clauses are probably very rare, OTOH it's cheap.
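
To be concrete, here is a rough sketch of what I have in mind - the 
function name and the nullfrac argument are invented for illustration 
(the caller would look nullfrac up in the per-column stats), and 
dispatching on the operator's restriction estimator via get_oprrest() 
is just one convenient way to classify the operator. Note that (A = A) 
is not true for NULL values of A, hence the (1 - nullfrac):

#include "postgres.h"
#include "nodes/nodes.h"		/* equal(), Selectivity */
#include "utils/fmgroids.h"		/* F_EQSEL and friends */
#include "utils/lsyscache.h"	/* get_oprrest() */

/*
 * Sketch: shortcut estimate for (Expr op Expr) clauses with provably
 * identical expressions on both sides. Returns true and sets *sel when
 * the shortcut applies, false to fall back to regular estimation.
 */
static bool
matching_expr_op_expr_sel(Oid opno, Node *left, Node *right,
						  double nullfrac, Selectivity *sel)
{
	if (!equal(left, right))
		return false;			/* different expressions, no shortcut */

	switch (get_oprrest(opno))
	{
		case F_EQSEL:			/* A = A */
		case F_SCALARLESEL:		/* A <= A */
		case F_SCALARGESEL:		/* A >= A */
			/* true for every non-NULL value, hence the nullfrac */
			*sel = 1.0 - nullfrac;
			return true;

		case F_NEQSEL:			/* A <> A */
		case F_SCALARLTSEL:		/* A < A */
		case F_SCALARGTSEL:		/* A > A */
			*sel = 0.0;			/* can never be true */
			return true;

		default:
			return false;		/* unknown operator, no shortcut */
	}
}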




> Perhaps the easiest way to identify simple clauses would be in
> statext_is_compatible_clause(), rather than the way it's done now,
> because it has the relevant information at hand, so it could be made
> to return an extra flag.



Agreed, that seems like a better place to fix this.


> This feels like rather an artificial example though. Is there any real
> use for this sort of clause?



True. It seems a bit artificial, which is understandable as it came from 
a synthetic test generating all possible clauses. OTOH, fixing it seems 
fairly cheap ...



regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-11 Thread Mark Dilger



> On Aug 11, 2021, at 5:08 AM, Dean Rasheed  wrote:
> 
> This feels like rather an artificial example though. Is there any real
> use for this sort of clause?

The test generated random combinations of clauses and then checked if any had 
consistently worse performance.  These came up.  I don't know that they 
represent anything real.

What was not random in the tests was the data in the tables.  I've gotten 
curious if these types of clauses (with columns compared against themselves) 
would still be bad for random rather than orderly data sets.  I'll go check

   testing

Wow.  Randomizing the data makes the problems even more extreme.  It seems my 
original test set was actually playing to this patch's strengths, not its 
weaknesses.  I've changed the columns to double precision and filled the 
columns with random() data, where column1 gets random()^1, column2 gets 
random()^2, etc.  So on average the larger numbered columns will be smaller, 
and the mcv list will be irrelevant, since values should not tend to repeat.

Over all queries, 47791 have better estimates after the patch, but 34802 had 
worse estimates after the patch (with the remaining 17407 queries having 
roughly equal quality).

The worst estimates are still ones that have a column compared to itself:

better:0, worse:33:  A <= B or A <= A or A <= A
better:0, worse:33:  A <= B or A = A or not A <> A
better:0, worse:33:  A <= B or A >= A or not A <> A
better:0, worse:33:  A <> B or A <= A
better:0, worse:33:  A <> B or A <= A or A <> A
better:0, worse:33:  A <> B or A <= A or A >= A
better:0, worse:33:  A <> B or A <= A or not A = A
better:0, worse:33:  A <> B or A > A or not A < A
better:0, worse:33:  A <> B or A >= A
better:0, worse:33:  A <> B or A >= A and A <= A
better:0, worse:33:  A = B or not A > A or not A > A
better:0, worse:33:  A >= B or not A <> A or A = A
better:0, worse:39:  B <= A or B <= B or B <= B
better:0, worse:39:  B <= A or B = B or not B <> B
better:0, worse:39:  B <= A or B >= B or not B <> B
better:0, worse:39:  B <> A or B <= B
better:0, worse:39:  B <> A or B <= B or B <> B
better:0, worse:39:  B <> A or B <= B or B >= B
better:0, worse:39:  B <> A or B <= B or not B = B
better:0, worse:39:  B <> A or B > B or not B < B
better:0, worse:39:  B <> A or B >= B
better:0, worse:39:  B <> A or B >= B and B <= B
better:0, worse:39:  B = A or not B > B or not B > B
better:0, worse:39:  B >= A or not B <> B or B = B

But there are plenty that got worse without that, such as the following 
examples:

better:25, worse:39:  A < B and A < B or B > A
better:10, worse:48:  A < B and A < C
better:10, worse:54:  A < B and A < C or C > A

I'll go test random data designed to have mcv lists of significance

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-11 Thread Dean Rasheed
On Wed, 11 Aug 2021 at 00:05, Tomas Vondra
 wrote:
>
> So with the statistics, the estimate gets a bit worse. The reason is
> fairly simple - if you look at the two parts of the OR clause, we get this:
>
>   clause                actual    no stats    with stats
>   ------------------------------------------------------
>   (A < B and A <> A)         0      331667             1
>   not (A < A)              100        3333
>
> This clearly shows that the first clause is much improved, while the
> (A < A) is estimated the same way, because the clause has a single Var,
> so it's considered to be "simple" and we ignore the MCV selectivity and
> just use the simple_sel calculated by clause_selectivity_ext.
>
> And the 33 and 331667 just happen to be closer to the actual row
> count. But that's mostly by luck, clearly.
>
> But now that I think about it, maybe the problem really is in how
> statext_mcv_clauselist_selectivity treats this clause - the definition
> of "simple" clauses as "has one attnum" was appropriate when only
> clauses (Var op Const) were supported. But with (Var op Var) that's
> probably not correct anymore.
>

Hmm, interesting. Clearly the fact that the combined estimate without
extended stats was better was just luck, based on its large
overestimate of the first clause. But it's also true that a (Var op
Var) clause should not be treated as simple, because "simple" in this
context is meant to be for clauses that are likely to be better
estimated with regular stats, whereas in this case, extended stats
would almost certainly do better on the second clause.

Perhaps the easiest way to identify simple clauses would be in
statext_is_compatible_clause(), rather than the way it's done now,
because it has the relevant information at hand, so it could be made
to return an extra flag.
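
Something like this, as a rough sketch only (the standalone helper name 
is invented; in a real patch the logic would be folded into 
statext_is_compatible_clause() itself, so it can reuse the checks 
already done there and simply hand the flag back to the caller):

#include "postgres.h"
#include "nodes/primnodes.h"	/* OpExpr, Var, Const, list accessors */

/*
 * Sketch: a binary OpExpr is "simple" if per-column statistics should
 * estimate it well, i.e. (Var op Const) or (Const op Var). A (Var op
 * Var) clause is never simple, even if both Vars have the same attnum.
 */
static bool
opexpr_is_simple(OpExpr *expr)
{
	Node	   *left = (Node *) linitial(expr->args);
	Node	   *right = (Node *) lsecond(expr->args);

	return (IsA(left, Var) && IsA(right, Const)) ||
		   (IsA(left, Const) && IsA(right, Var));
}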

This feels like rather an artificial example though. Is there any real
use for this sort of clause?

Regards,
Dean




Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-10 Thread Tomas Vondra

On 8/9/21 9:19 PM, Mark Dilger wrote:
>> On Jul 20, 2021, at 11:28 AM, Tomas Vondra wrote:
>>
>> Tomas Vondra
>> EnterpriseDB: http://www.enterprisedb.com
>> The Enterprise PostgreSQL Company
>> <0001-Handling-Expr-op-Expr-clauses-in-extended-stats-20210720.patch>
>
> Hi Tomas,
>
> I tested this patch against master looking for types of clauses that
> uniformly get worse with the patch applied.  I found some.
>
> The tests are too large to attach, but the scripts that generate them
> are not.  To perform the tests:
>
> git checkout master
> perl ./gentest.pl > src/test/regress/sql/gentest.sql
> cat /dev/null > src/test/regress/expected/gentest.out
> echo "test: gentest" >> src/test/regress/parallel_schedule
> ./configure && make && make check
> cp src/test/regress/results/gentest.out src/test/regress/expected/gentest.out
> patch -p 1 < 0001-Handling-Expr-op-Expr-clauses-in-extended-stats-20210720.patch
> make check
> cat src/test/regress/regression.diffs | perl ./check.pl
>
> This shows patterns of conditions that get worse, such as:
>
> better:0, worse:80:  A < B and A <> A or not A < A
> better:0, worse:80:  A < B and not A <= A or A <= A
> better:0, worse:80:  A < B or A = A
> better:0, worse:80:  A < B or A = A or not A >= A
> better:0, worse:80:  A < B or A >= A
> better:0, worse:80:  A < B or A >= A and not A <> A
> better:0, worse:80:  A < B or not A < A
> better:0, worse:80:  A < B or not A <> A
> better:0, worse:80:  A < B or not A <> A or A <= A
> better:0, worse:80:  A < B or not A >= A or not A < A
>
> It seems things get worse when the conditions contain a column compared
> against itself.  I suspect that is being handled incorrectly.



Thanks for this testing!

I took a quick look, and I think this is mostly due to luck in how the 
(default) range estimates combine with and without extended statistics. 
Consider this simple example:


create table t (a int, b int);

insert into t select mod(i,10), mod(i,20)
from generate_series(1,100) s(i);

Without stats, the first of those clause patterns is estimated like this:

explain (timing off, analyze) select * from t
  where (A < B and A <> A) or not A < A;

                           QUERY PLAN
------------------------------------------------------------------
 Seq Scan on t  (cost=0.00..21925.00 rows=55 width=8)
                (actual rows=100 loops=1)
   Filter: (((a < b) AND (a <> a)) OR (a >= a))
 Planning Time: 0.054 ms
 Execution Time: 80.485 ms
(4 rows)

and with MCV on (a,b) it gets estimates like this:

                           QUERY PLAN
------------------------------------------------------------------
 Seq Scan on t  (cost=0.00..21925.00 rows=33 width=8)
                (actual rows=100 loops=1)
   Filter: (((a < b) AND (a <> a)) OR (a >= a))
 Planning Time: 0.152 ms
 Execution Time: 79.917 ms
(4 rows)

So with the statistics, the estimate gets a bit worse. The reason is 
fairly simple - if you look at the two parts of the OR clause, we get this:


    clause                actual    no stats    with stats
    ------------------------------------------------------
    (A < B and A <> A)         0      331667             1
    not (A < A)              100        3333

This clearly shows that the first clause is much improved, while the 
(A < A) is estimated the same way, because the clause has a single Var, 
so it's considered to be "simple" and we ignore the MCV selectivity and 
just use the simple_sel calculated by clause_selectivity_ext.


And the 33 and 331667 just happen to be closer to the actual row 
count. But that's mostly by luck, clearly.


But now that I think about it, maybe the problem really is in how 
statext_mcv_clauselist_selectivity treats this clause - the definition 
of "simple" clauses as "has one attnum" was appropriate when only 
clauses (Var op Const) were supported. But with (Var op Var) that's 
probably not correct anymore.


And indeed, commenting out the if condition on line 1933 (so ignoring 
simple_sel) does improve the estimates for this query. But perhaps I'm 
missing something; this needs more thought.



regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Use extended statistics to estimate (Var op Var) clauses

2021-08-09 Thread Mark Dilger


> On Jul 20, 2021, at 11:28 AM, Tomas Vondra  
> wrote:
> 
> Tomas Vondra
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
> <0001-Handling-Expr-op-Expr-clauses-in-extended-stats-20210720.patch>

Hi Tomas,

I tested this patch against master looking for types of clauses that uniformly 
get worse with the patch applied.  I found some.

The tests are too large to attach, but the scripts that generate them are not.  
To perform the tests:

git checkout master
perl ./gentest.pl > src/test/regress/sql/gentest.sql
cat /dev/null > src/test/regress/expected/gentest.out
echo "test: gentest" >> src/test/regress/parallel_schedule
./configure && make && make check
cp src/test/regress/results/gentest.out src/test/regress/expected/gentest.out
patch -p 1 < 0001-Handling-Expr-op-Expr-clauses-in-extended-stats-20210720.patch
make check
cat src/test/regress/regression.diffs | perl ./check.pl

This shows patterns of conditions that get worse, such as:

better:0, worse:80:  A < B and A <> A or not A < A
better:0, worse:80:  A < B and not A <= A or A <= A
better:0, worse:80:  A < B or A = A
better:0, worse:80:  A < B or A = A or not A >= A
better:0, worse:80:  A < B or A >= A
better:0, worse:80:  A < B or A >= A and not A <> A
better:0, worse:80:  A < B or not A < A
better:0, worse:80:  A < B or not A <> A
better:0, worse:80:  A < B or not A <> A or A <= A
better:0, worse:80:  A < B or not A >= A or not A < A

It seems things get worse when the conditions contain a column compared against 
itself.  I suspect that is being handled incorrectly.

#!/usr/bin/perl

use strict;
use warnings;
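
# check.pl: reads regression.diffs (stdin or file argument), pairs each
# "before" (-) row with its "after" (+) row coming out of
# check_estimated_rows(), normalizes the column names in the WHERE
# clause to A, B, C, ..., and tallies per clause pattern how many
# estimates got better or worse (small differences count as noise).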

my ($query, $where, $before, $after, @before, @after);
my $better = 0;
my $worse = 0;
my %where = ();
my %gripe;
while (<>)
{
 	if (/from check_estimated_rows\('(.*)'\)/)
	{
		$query = $1;
		if ($query =~ m/where (.*)$/)
		{
			$where = $1;
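			# Normalize column names so that equivalent clause shapes group
			# together: map the distinct column_N names, in sorted order,
			# to A, B, C, ...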
			my %columns = map { $_ => 1 } ($where =~ m/(column_\d+)/g);
			my @normal = ('A'..'Z');
			my @columns = sort keys %columns;
			for my $i (0..$#columns)
			{
my $old = $columns[$i];
my $new = $normal[$i];
$where =~ s/\b$old\b/$new/g;
			}
		}
	}
	elsif (m/^-(\s*(\d+)\s*\|\s*(\d+)\s*\|\s*(\d+))\s*$/)
	{
		($before, @before) = ($1, $2, $3, $4);
	}
	elsif (m/^\+(\s*(\d+)\s*\|\s*(\d+)\s*\|\s*(\d+))\s*$/)
	{
		($after, @after) = ($1, $2, $3, $4);

		$where{$where}->{better} ||= 0;
		$where{$where}->{worse} ||= 0;

		# Don't count the difference as meaningful unless we're more than 5 better or worse than before
		if ($after[2] > 5 + $before[2])
		{
			$worse++;
			$where{$where}->{worse}++;
		}
		elsif ($after[2] + 5 < $before[2])
		{
			$better++;
			$where{$where}->{better}++;
		}

		if (!exists $gripe{$where} && $where{$where}->{better} == 0 && $where{$where}->{worse} > 50)
		{
			print "TERRIBLE:\n";
			print "\tQUERY: $query\n";
			print "\tBEFORE: $before\n";
			print "\tAFTER:  $after\n\n";
			$gripe{$query} = 1;
		}
	}
}

foreach my $where (sort keys %where)
{
	if ($where{$where}->{better} < $where{$where}->{worse})
	{
		print("better:", $where{$where}->{better}, ", worse:", $where{$where}->{worse}, ":  $where\n");
	}
}
print("\n\nTOTAL:\n");
print("\tbetter: $better\n");
print("\tworse: $worse\n");
#!/usr/bin/perl

use strict;
use warnings;
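
# gentest.pl: emits a SQL script that creates tables with 2..10 integer
# columns filled with perfectly correlated data, builds extended
# statistics on prefixes of the column list, and generates randomized
# WHERE clauses (one to three comparisons joined with and/or/not) over
# randomly chosen columns.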

our ($tblnum, $tblname, $colnum, $colname);

# Generate the where clauses to be used on all tables
our (%wherepattern1, %wherepattern2, %wherepattern3);
my @ops = (">", "<", ">=", "<=", "=", "<>");
my @conj = ("and", "or", "and not", "or not");
for (1..100)
{
	my $op1 = $ops[int(rand(@ops))];
	my $op2 = $ops[int(rand(@ops))];
	my $op3 = $ops[int(rand(@ops))];
	my $conj1 = $conj[int(rand(@conj))];
	my $conj2 = $conj[int(rand(@conj))];

	$wherepattern1{"\%s $op1 \%s"} = 1;
	$wherepattern2{"\%s $op1 \%s $conj1 \%s $op2 \%s"} = 1;
	$wherepattern3{"\%s $op1 \%s $conj1 \%s $op2 \%s $conj2 \%s $op3 \%s"} = 1;
}

sub next_table {
	$tblnum++;
	$tblname = "table_$tblnum";
	$colnum = 0;
	$colname = "column_$colnum";
}

sub next_column {
	$colnum++;
	$colname = "column_$colnum";
}

for my $colcnt (2..10)
{
	next_table();
	print("CREATE TABLE $tblname (\n");
	for (1..$colcnt-1)
	{
		next_column();
		print("\t$colname INTEGER,\n");
	}
	next_column();
	print("\t$colname INTEGER\n);\n");
	print("INSERT INTO $tblname (SELECT ",
		  join(", ", map { "gs-$_" } (1..$colcnt)),
		  " FROM generate_series(1,100) gs);\n");
	print("VACUUM FREEZE $tblname;\n");
	for my $colmax (2..$colcnt)
	{
		print("CREATE STATISTICS ${tblname}_stats_${colmax} ON ",
			  join(", ", map { "column_$_" } (1..$colmax)),
			  " FROM $tblname;\n");
	}
	print("ANALYZE $tblname;\n");
}

# Restart the table sequence
$tblnum = 0;

for my $colcnt (2..10)
{
	next_table();

	for (1..100)
	{
		my $a = sprintf("column_%d", 1+int(rand($colcnt)));
		my $b = sprintf("column_%d", 1+int(rand($colcnt)));
		my $c = sprintf("column_%d", 1+int(rand($colcnt)));

		foreach my $where1 (keys %wherepattern1)
		{
			

Re: Use extended statistics to estimate (Var op Var) clauses

2021-07-21 Thread Dean Rasheed
On Tue, 20 Jul 2021 at 19:28, Tomas Vondra
 wrote:
>
> > The new code in statext_is_compatible_clause_internal() is a little
> > hard to follow because some of the comments aren't right
>
> I ended up doing something slightly different - examine_opclause_args
> now "returns" a list of expressions, instead of explicitly setting two
> parameters. That means we can do a simple foreach() here, which seems
> cleaner. It means we have to extract the expressions from the list in a
> couple places, but that seems acceptable. Do you agree?

Yes, that looks much neater.

> > In mcv_get_match_bitmap(), perhaps do the RESULT_IS_FINAL() checks
> > first in each loop.
>
> This is how master already does that now, and I wonder if it's done in
> this order intentionally. It's not clear to me that doing it the other
> way round would be faster.

Ah OK, it just felt more natural to do it the other way round. I
suppose, though, that for the first clause, the is-final check isn't
going to catch anything, whereas the is-null checks might. For the
remaining clauses, it will depend on the data as to which way is
faster, but it probably isn't going to make any noticeable difference
either way. So, although it initially seems a bit counter-intuitive,
it's probably better the way it is.

> I guess the last thing is maybe mentioning this in
> the docs, adding an example etc.

Yeah, good idea.

Regards,
Dean




Re: Use extended statistics to estimate (Var op Var) clauses

2021-07-20 Thread Tomas Vondra
Hi,

On 7/5/21 2:46 PM, Dean Rasheed wrote:
> On Sun, 13 Jun 2021 at 21:28, Tomas Vondra
>  wrote:
>>
>> Here is a slightly updated version of the patch
>>
> 
> Hi,
> 
> I have looked at this in some more detail, and it all looks pretty
> good, other than some mostly cosmetic stuff.
> 

Thanks for the review!

> The new code in statext_is_compatible_clause_internal() is a little
> hard to follow because some of the comments aren't right (e.g. when
> checking clause_expr2, it isn't an (Expr op Const) or (Const op Expr)
> as the comment says). Rather than trying to comment on each
> conditional branch, it might be simpler to just have a single
> catch-all comment at the top, and also remove the "return true" in the
> middle, to make it something like:
> 
> /*
>  * Check Vars appearing on either side by recursing, and make a note of
>  * any expressions.
>  */
> if (IsA(clause_expr, Var))
> {
> if (!statext_is_compatible_clause_internal(...))
> return false;
> }
> else
> *exprs = lappend(*exprs, clause_expr);
> 
> if (clause_expr2)
> {
> if (IsA(clause_expr2, Var))
> {
> if (!statext_is_compatible_clause_internal(...))
> return false;
> }
> else
> *exprs = lappend(*exprs, clause_expr2);
> }
> 
> return true;
> 

I ended up doing something slightly different - examine_opclause_args
now "returns" a list of expressions, instead of explicitly setting two
parameters. That means we can do a simple foreach() here, which seems
cleaner. It means we have to extract the expressions from the list in a
couple places, but that seems acceptable. Do you agree?

I also went through the comments and updated those that seemed wrong.

> Is the FIXME comment in examine_opclause_args() necessary? The check
> for a single relation has already been done in
> clause[list]_selectivity_ext(), and I'm not sure what
> examine_opclause_args() would do differently.
> 

Yeah, I came to the same conclusion.

> In mcv_get_match_bitmap(), perhaps do the RESULT_IS_FINAL() checks
> first in each loop.
> 

This is how master already does that now, and I wonder if it's done in
this order intentionally. It's not clear to me that doing it the other
way round would be faster.

> Also in mcv_get_match_bitmap(), the 2 "First check whether the
> constant is below the lower boundary ..." comments don't make any
> sense to me. Were those perhaps copied and pasted from somewhere else?
> They should perhaps say "Otherwise, compare the MCVItem with the
> constant" and "Otherwise compare the values from the MCVItem using the
> clause operator", or something like that.
> 

Yeah, that's another bit that comes from current master - the patch just
makes a new copy of the comment. I agree it's bogus; it seems like a
leftover from the original code, which did various "smart" things we
removed over time. Will fix.

> But other than such cosmetic things, I think the patch is good, and
> gives some nice estimate improvements.
> 

Thanks, sounds good. I guess the last thing is maybe mentioning this in
the docs, adding an example etc.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
>From 788a909e09b372797c4b7b443e0e89c5d5181ec0 Mon Sep 17 00:00:00 2001
From: Tomas Vondra 
Date: Tue, 20 Jul 2021 20:15:13 +0200
Subject: [PATCH] Handling Expr op Expr clauses in extended stats

---
 src/backend/optimizer/path/clausesel.c|  37 +++-
 src/backend/statistics/extended_stats.c   |  83 ++---
 src/backend/statistics/mcv.c  | 172 +-
 .../statistics/extended_stats_internal.h  |   4 +-
 src/test/regress/expected/stats_ext.out   |  96 ++
 src/test/regress/sql/stats_ext.sql|  26 +++
 6 files changed, 341 insertions(+), 77 deletions(-)

diff --git a/src/backend/optimizer/path/clausesel.c b/src/backend/optimizer/path/clausesel.c
index d263ecf082..6a7e9ceea5 100644
--- a/src/backend/optimizer/path/clausesel.c
+++ b/src/backend/optimizer/path/clausesel.c
@@ -714,6 +714,7 @@ clause_selectivity_ext(PlannerInfo *root,
 	Selectivity s1 = 0.5;		/* default for any unhandled clause type */
 	RestrictInfo *rinfo = NULL;
 	bool		cacheable = false;
+	Node	   *src = clause;
 
 	if (clause == NULL)			/* can this still happen? */
 		return s1;
@@ -871,11 +872,37 @@ clause_selectivity_ext(PlannerInfo *root,
 		}
 		else
 		{
-			/* Estimate selectivity for a restriction clause. */
-			s1 = restriction_selectivity(root, opno,
-		 opclause->args,
-		 opclause->inputcollid,
-		 varRelid);
+			/*
+			 * It might be a single (Expr op Expr) clause, which goes here due
+			 * to the optimization at the beginning of clauselist_selectivity.
+			 * So we try applying extended stats first, and then fall back to
+			 * restriction_selectivity.
+			 */
+			bool	estimated = false;
+
+			if (use_extended_stats)
+			{
+List	   

Re: Use extended statistics to estimate (Var op Var) clauses

2021-07-05 Thread Dean Rasheed
On Sun, 13 Jun 2021 at 21:28, Tomas Vondra
 wrote:
>
> Here is a slightly updated version of the patch
>

Hi,

I have looked at this in some more detail, and it all looks pretty
good, other than some mostly cosmetic stuff.

The new code in statext_is_compatible_clause_internal() is a little
hard to follow because some of the comments aren't right (e.g. when
checking clause_expr2, it isn't an (Expr op Const) or (Const op Expr)
as the comment says). Rather than trying to comment on each
conditional branch, it might be simpler to just have a single
catch-all comment at the top, and also remove the "return true" in the
middle, to make it something like:

/*
 * Check Vars appearing on either side by recursing, and make a note of
 * any expressions.
 */
if (IsA(clause_expr, Var))
{
if (!statext_is_compatible_clause_internal(...))
return false;
}
else
*exprs = lappend(*exprs, clause_expr);

if (clause_expr2)
{
if (IsA(clause_expr2, Var))
{
if (!statext_is_compatible_clause_internal(...))
return false;
}
else
*exprs = lappend(*exprs, clause_expr2);
}

return true;

Is the FIXME comment in examine_opclause_args() necessary? The check
for a single relation has already been done in
clause[list]_selectivity_ext(), and I'm not sure what
examine_opclause_args() would do differently.

In mcv_get_match_bitmap(), perhaps do the RESULT_IS_FINAL() checks
first in each loop.

Also in mcv_get_match_bitmap(), the 2 "First check whether the
constant is below the lower boundary ..." comments don't make any
sense to me. Were those perhaps copied and pasted from somewhere else?
They should perhaps say "Otherwise, compare the MCVItem with the
constant" and "Otherwise compare the values from the MCVItem using the
clause operator", or something like that.

But other than such cosmetic things, I think the patch is good, and
gives some nice estimate improvements.

Regards,
Dean




Re: Use extended statistics to estimate (Var op Var) clauses

2021-06-21 Thread Dean Rasheed
On Sun, 13 Jun 2021 at 21:28, Tomas Vondra
 wrote:
>
> Here is a slightly updated version of the patch
>
> The main issue I ran into
> is the special case clauselist_selectivity, which does
>
>  if (list_length(clauses) == 1)
>  return clause_selectivity_ext(...);
>
> which applies to cases like "WHERE a < b" which can now be handled by
> extended statistics, thanks to this patch. But clause_selectivity_ext
> only used to call restriction_selectivity for these clauses, which does
> not use extended statistics, of course.
>
> I considered either getting rid of the special case, passing everything
> through extended stats, including cases with a single clause. But that
> ends up affecting e.g. OR clauses, so I tweaked clause_selectivity_ext a
> bit, which seems like a better approach.

Yeah, I looked at this a few months ago, while looking at the other
extended stats stuff, and I came to the same conclusion that solving
this issue by tweaking clause_selectivity_ext() is the best approach.

I haven't looked at the patch in much detail yet, but I think the
basic approach looks OK.

There are a few comments that need updating, e.g.:
 - In statext_is_compatible_clause_internal(), before the "if
(is_opclause(clause))" test.
 - The description of the arguments for examine_opclause_args().

I wonder if "expronleftp" for examine_opclause_args() should be
"constonrightp", or some such -- as it stands it's being set to false
for an Expr Op Expr clause, which doesn't seem right because there
*is* an expression on the left.

Regards,
Dean




Re: Use extended statistics to estimate (Var op Var) clauses

2021-06-14 Thread Tomas Vondra
On 6/14/21 5:36 PM, Mark Dilger wrote:
> 
> 
>> On Jun 13, 2021, at 1:28 PM, Tomas Vondra
>>  wrote:
>> 
>> Here is a slightly updated version of the patch
> 
> Thanks for taking this up again!
> 
> Applying the new test cases from your patch, multiple estimates have
> gotten better.  That looks good.  I wrote a few extra test cases and
> saw no change, which is fine.  I was looking for regressions where
> the estimates are now worse than before.  Do you expect there to be
> any such cases?
> 

Not really. These clauses could not be estimated before, we generally
used default estimates for them. So

  WHERE a = b

would use 0.5%, while

  WHERE a < b

would use 33%, and so on. OTOH it depends on the accuracy of the
extended statistics - particularly the MCV list (what fraction of the
data it covers, etc.).
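
For reference, these defaults come from src/include/utils/selfuncs.h:

  /* default selectivity estimate for equalities such as "A = b" */
  #define DEFAULT_EQ_SEL	0.005

  /* default selectivity estimate for inequalities such as "A < b" */
  #define DEFAULT_INEQ_SEL  0.3333333333333333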

So it's possible the default estimate is very accurate by chance, while
the MCV list represents only a tiny fraction of the data. Then the new
estimate could be worse. Consider for example this:

create table t (a int, b int);
insert into t select 100, 100 from generate_series(1,5000) s(i);
insert into t select i, i+1 from generate_series(1,995000) s(i);

This has exactly 0.5% of rows with (a=b). Without extended stats it's
perfect:

explain analyze select * from t where a = b;

Seq Scan on t  (cost=0.00..16925.00 rows=5000 width=8)
   (actual time=0.064..159.928 rows=5000 loops=1)

while with statistics it gets worse:

create statistics s (mcv) on a, b from t;
analyze t;

Seq Scan on t  (cost=0.00..16925.00 rows=9810 width=8)
   (actual time=0.059..160.467 rows=5000 loops=1)

It's not terrible, although we could construct worse examples. But the
same issue applies to other clauses, not just to these new ones. And it
relies on the regular estimation producing a better estimate by chance.

regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Use extended statistics to estimate (Var op Var) clauses

2021-06-14 Thread Mark Dilger



> On Jun 13, 2021, at 1:28 PM, Tomas Vondra  
> wrote:
> 
> Here is a slightly updated version of the patch

Thanks for taking this up again!

Applying the new test cases from your patch, multiple estimates have gotten 
better.  That looks good.  I wrote a few extra test cases and saw no change, 
which is fine.  I was looking for regressions where the estimates are now worse 
than before.  Do you expect there to be any such cases?

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Re: Use extended statistics to estimate (Var op Var) clauses

2021-06-13 Thread Zhihong Yu
On Sun, Jun 13, 2021 at 1:29 PM Tomas Vondra 
wrote:

> Hi,
>
> Here is a slightly updated version of the patch - rebased to current
> master and fixing some minor issues to handle expressions (and not just
> the Var nodes as before).
>
> The changes needed to support (Expr op Expr) are mostly mechanical,
> though I'm sure the code needs some cleanup. The main issue I ran into
> is the special case clauselist_selectivity, which does
>
>  if (list_length(clauses) == 1)
>  return clause_selectivity_ext(...);
>
> which applies to cases like "WHERE a < b" which can now be handled by
> extended statistics, thanks to this patch. But clause_selectivity_ext
> only used to call restriction_selectivity for these clauses, which does
> not use extended statistics, of course.
>
> I considered either getting rid of the special case, passing everything
> through extended stats, including cases with a single clause. But that
> ends up affecting e.g. OR clauses, so I tweaked clause_selectivity_ext a
> bit, which seems like a better approach.
>
>
> regards
>
> --
> Tomas Vondra
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
Hi,

-   for (i = 0; i < mcvlist->nitems; i++)
+   if (cst)    /* Expr op Const */

It seems the Const op Expr is also covered by this if branch. Hence the
comment should include this case.

Cheers


Re: Use extended statistics to estimate (Var op Var) clauses

2021-06-13 Thread Tomas Vondra

Hi,

Here is a slightly updated version of the patch - rebased to current 
master and fixing some minor issues to handle expressions (and not just 
the Var nodes as before).


The changes needed to support (Expr op Expr) are mostly mechanical, 
though I'm sure the code needs some cleanup. The main issue I ran into 
is the special case clauselist_selectivity, which does


if (list_length(clauses) == 1)
return clause_selectivity_ext(...);

which applies to cases like "WHERE a < b" which can now be handled by 
extended statistics, thanks to this patch. But clause_selectivity_ext 
only used to call restriction_selectivity for these clauses, which does 
not use extended statistics, of course.


I considered either getting rid of the special case, passing everything 
through extended stats, including cases with a single clause. But that 
ends up affecting e.g. OR clauses, so I tweaked clause_selectivity_ext a 
bit, which seems like a better approach.



regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
>From 6dcd4f2fc78be2fca8a4e934fc2e24bcd8340c4c Mon Sep 17 00:00:00 2001
From: Tomas Vondra 
Date: Sun, 13 Jun 2021 21:59:16 +0200
Subject: [PATCH] Handling Expr op Expr clauses in extended stats

---
 src/backend/optimizer/path/clausesel.c|  46 -
 src/backend/statistics/extended_stats.c   |  81 +++--
 src/backend/statistics/mcv.c  | 169 +-
 .../statistics/extended_stats_internal.h  |   2 +-
 src/test/regress/expected/stats_ext.out   |  96 ++
 src/test/regress/sql/stats_ext.sql|  26 +++
 6 files changed, 350 insertions(+), 70 deletions(-)

diff --git a/src/backend/optimizer/path/clausesel.c b/src/backend/optimizer/path/clausesel.c
index d263ecf082..d732a9dc93 100644
--- a/src/backend/optimizer/path/clausesel.c
+++ b/src/backend/optimizer/path/clausesel.c
@@ -714,6 +714,7 @@ clause_selectivity_ext(PlannerInfo *root,
 	Selectivity s1 = 0.5;		/* default for any unhandled clause type */
 	RestrictInfo *rinfo = NULL;
 	bool		cacheable = false;
+	Node	   *src = clause;
 
 	if (clause == NULL)			/* can this still happen? */
 		return s1;
@@ -871,11 +872,46 @@ clause_selectivity_ext(PlannerInfo *root,
 		}
 		else
 		{
-			/* Estimate selectivity for a restriction clause. */
-			s1 = restriction_selectivity(root, opno,
-		 opclause->args,
-		 opclause->inputcollid,
-		 varRelid);
+			/*
+			 * It might be (Expr op Expr), which goes here thanks to the
+			 * optimization at the beginning of clauselist_selectivity.
+			 * So try applying extended stats first, then fall back to
+			 * restriction_selectivity.
+			 *
+			 * XXX Kinda does the same thing as clauselist_selectivity, but
+			 * for a single clause. Maybe we could call that, but need to
+			 * be careful not to cause infinite loop.
+			 */
+			bool	estimated = false;
+
+			if (use_extended_stats)
+			{
+Bitmapset *estimatedclauses = NULL;
+List *clauses = list_make1(src);
+RelOptInfo *rel = find_single_rel_for_clauses(root, clauses);
+
+if (rel && rel->rtekind == RTE_RELATION && rel->statlist != NIL)
+{
+	/*
+	 * Estimate as many clauses as possible using extended statistics.
+	 *
+	 * 'estimatedclauses' is populated with the 0-based list position
+	 * index of clauses estimated here, and that should be ignored below.
+	 */
+	s1 = statext_clauselist_selectivity(root, clauses, varRelid,
+		jointype, sjinfo, rel,
+		&estimatedclauses, false);
+
+	estimated = (bms_num_members(estimatedclauses) == 1);
+}
+			}
+
+			/* Estimate selectivity for a restriction clause (fallback). */
+			if (!estimated)
+s1 = restriction_selectivity(root, opno,
+			 opclause->args,
+			 opclause->inputcollid,
+			 varRelid);
 		}
 
 		/*
diff --git a/src/backend/statistics/extended_stats.c b/src/backend/statistics/extended_stats.c
index b05e818ba9..59a9a64b09 100644
--- a/src/backend/statistics/extended_stats.c
+++ b/src/backend/statistics/extended_stats.c
@@ -1352,14 +1352,15 @@ statext_is_compatible_clause_internal(PlannerInfo *root, Node *clause,
 	{
 		RangeTblEntry *rte = root->simple_rte_array[relid];
 		OpExpr	   *expr = (OpExpr *) clause;
-		Node	   *clause_expr;
+		Node	   *clause_expr,
+   *clause_expr2;
 
 		/* Only expressions with two arguments are considered compatible. */
 		if (list_length(expr->args) != 2)
 			return false;
 
 		/* Check if the expression has the right shape */
-		if (!examine_opclause_args(expr->args, &clause_expr, NULL, NULL))
+		if (!examine_opclause_args(expr->args, &clause_expr, &clause_expr2, NULL, NULL))
 			return false;
 
 		/*
@@ -1399,13 +1400,44 @@ statext_is_compatible_clause_internal(PlannerInfo *root, Node *clause,
 			!get_func_leakproof(get_opcode(expr->opno)))
 			return false;
 
+		/* clause_expr is always valid */
+		Assert(clause_expr);
+
 		/* Check (Var op Const) or (Const op Var) clauses by 

Re: Use extended statistics to estimate (Var op Var) clauses

2021-03-01 Thread Tomas Vondra




On 3/1/21 8:58 PM, Mark Dilger wrote:
>> On Nov 12, 2020, at 5:14 PM, Tomas Vondra wrote:
>>
>> <0001-Support-estimation-of-clauses-of-the-form-V-20201113.patch>
>
> Your patch no longer applies.  Can we get a new version please?



I do not plan to work on this patch in the 2021-03 commitfest. I'll 
focus on the other patch about extended statistics on expressions.


regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Use extended statistics to estimate (Var op Var) clauses

2021-03-01 Thread Mark Dilger



> On Nov 12, 2020, at 5:14 PM, Tomas Vondra  
> wrote:
> 
> <0001-Support-estimation-of-clauses-of-the-form-V-20201113.patch>

Your patch no longer applies.  Can we get a new version please?

—
Mark Dilger
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company







Use extended statistics to estimate (Var op Var) clauses

2020-11-12 Thread Tomas Vondra
Hi,

Attached is a patch to allow estimation of (Var op Var) clauses using
extended statistics. Currently we only use extended stats to estimate
(Var op Const) clauses, which is sufficient for most cases, but it's not
very hard to support this second type of clause.

This is not an entirely new patch - I originally included it in the
patch series in [1], but it's probably better to discuss it separately,
so that it does not get buried in that discussion.

[1]
https://www.postgresql.org/message-id/flat/20200113230008.g67iyk4cs3xbnjju@development

To illustrate the purpose of this patch, consider this:

db=# create table t (a int, b int);
CREATE TABLE

db=# insert into t select mod(i,10), mod(i,10)+1
   from generate_series(1,10) s(i);
INSERT 0 10

db=# analyze t;
ANALYZE

db=# explain select * from t where a < b;
   QUERY PLAN

 Seq Scan on t  (cost=0.00..1693.00 rows=3 width=8)
   Filter: (a < b)
(2 rows)

db=# explain select * from t where a > b;
   QUERY PLAN

 Seq Scan on t  (cost=0.00..1693.00 rows=3 width=8)
   Filter: (a > b)
(2 rows)

db=# create statistics s (mcv) on a,b from t;
CREATE STATISTICS

db=# analyze t;
ANALYZE

db=# explain select * from t where a < b;
   QUERY PLAN
-
 Seq Scan on t  (cost=0.00..1693.00 rows=10 width=8)
   Filter: (a < b)
(2 rows)

db=# explain select * from t where a > b;
 QUERY PLAN

 Seq Scan on t  (cost=0.00..1693.00 rows=1 width=8)
   Filter: (a > b)
(2 rows)


I'm not entirely convinced this patch (on its own) is very useful, for
a couple of reasons:

(a) Clauses of this form are not particularly common, at least compared
to the Var op Const clauses. (I don't recall slow-query reports from any
of our mailing lists that might be attributed to such clauses.)

(b) For known cases of such queries (e.g. several TPC-H queries do use
clauses like "l_commitdate < l_receiptdate" etc.) this is somewhat
hindered by extended statistics only supporting MCV lists, which may not
work particularly well for high-cardinality columns like dates etc.

But despite that it seems like a useful feature / building block, and
those limitations may be addressed in some other way (e.g. we may add
multi-dimensional histograms to address the second one).


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
>From 439d39db128b9cbc06063a862283d663510d0fcc Mon Sep 17 00:00:00 2001
From: Tomas Vondra 
Date: Fri, 13 Nov 2020 01:46:50 +0100
Subject: [PATCH] Support estimation of clauses of the form Var op Var

Until now, extended statistics were used only to estimate clauses of the
form (Var op Const). This patch extends the estimation to also handle
clauses (Var op Var).
---
 src/backend/statistics/extended_stats.c   | 67 +
 src/backend/statistics/mcv.c  | 71 +-
 .../statistics/extended_stats_internal.h  |  2 +-
 src/test/regress/expected/stats_ext.out   | 96 +++
 src/test/regress/sql/stats_ext.sql| 26 +
 5 files changed, 243 insertions(+), 19 deletions(-)

diff --git a/src/backend/statistics/extended_stats.c b/src/backend/statistics/extended_stats.c
index 36326927c6..1a6e7a3fc0 100644
--- a/src/backend/statistics/extended_stats.c
+++ b/src/backend/statistics/extended_stats.c
@@ -987,14 +987,18 @@ statext_is_compatible_clause_internal(PlannerInfo *root, Node *clause,
 	{
 		RangeTblEntry *rte = root->simple_rte_array[relid];
 		OpExpr	   *expr = (OpExpr *) clause;
-		Var		   *var;
+		Var		   *var,
+   *var2;
 
 		/* Only expressions with two arguments are considered compatible. */
 		if (list_length(expr->args) != 2)
 			return false;
 
-		/* Check if the expression has the right shape (one Var, one Const) */
-		if (!examine_clause_args(expr->args, &var, NULL, NULL))
+		/*
+		 * Check if the expression has the right shape (one Var and one Const,
+		 * or two Vars).
+		 */
+		if (!examine_clause_args(expr->args, &var, &var2, NULL, NULL))
 			return false;
 
 		/*
@@ -1034,7 +1038,20 @@ statext_is_compatible_clause_internal(PlannerInfo *root, Node *clause,
 			!get_func_leakproof(get_opcode(expr->opno)))
 			return false;
 
-		return statext_is_compatible_clause_internal(root, (Node *) var,
+		/*
+		 * Check compatibility of the first Var - we get this one for both
+		 * types of supported expressions (Var op Const) and (Var op Var).
+		 */
+		if (!statext_is_compatible_clause_internal(root, (Node *) var,
+   relid, attnums))
+			return false;
+
+		/* For (Var op Const) we don't get the second Var, and