Re: [sqlite] Unexpected optimization

2018-03-22 Thread Richard Hipp
Max,

Since you appear to be writing your own virtual tables, you could
probably benefit from becoming more familiar with the internal
workings of SQLite, and especially the ".wheretrace" and
".selecttrace" commands of the CLI.  To enable those commands, build a
new copy of the CLI that includes -DSQLITE_ENABLE_SELECTTRACE and
-DSQLITE_ENABLE_WHERETRACE.
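For reference, a build along these lines should work (a sketch assuming you build from the amalgamation; the exact file names and extra flags are illustrative, and -DSQLITE_DEBUG is optional but commonly enabled alongside the trace options):

```shell
# Build the sqlite3 CLI with the .selecttrace/.wheretrace commands enabled.
# shell.c and sqlite3.c come from the sqlite-amalgamation download.
gcc -O0 -g \
    -DSQLITE_DEBUG \
    -DSQLITE_ENABLE_SELECTTRACE \
    -DSQLITE_ENABLE_WHERETRACE \
    shell.c sqlite3.c -o sqlite3 -lpthread -ldl -lm
```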

The command ".selecttrace 0x" will print out the parse tree of
your SQL statement at various points as it is transformed by the query
optimizer.  By viewing this parse tree, you might be better able to
understand the transformations that are taking place, which might give
additional insight into what is going astray for you.

The latest pre-release snapshot for 3.23.0 contains a couple of new
optimizations related to LEFT JOIN.  Please also try your code with
the pre-release snapshot to see if it helps or hurts or makes no
difference.  And please report back what you find, regardless of your
findings.

The ".wheretrace 0xfff" command prints out the steps used by the query
planner as it scores various execution strategies looking for the
fastest way to run your query.  It should clearly show the costs
returned by your xBestIndex implementation, how decisions are made
based on those costs, and help you to see how changing those costs
might result in better plans.

The outputs from .selecttrace and .wheretrace are undocumented.  They
are subject to change.  And they do change in incompatible ways from
time to time.  You'll need to look at the source code to fully
understand what the outputs mean.

Note that the cost numbers printed by .wheretrace are logarithmic.
See the description of the LogEst numbers at
https://www.sqlite.org/src/artifact/7e9deb145c110289?ln=755-778
for additional information. There is a simple command-line program at
https://www.sqlite.org/src/artifact/11346aa019e2e77a that you can
compile and use to convert values between LogEst and traditional
base-10 decimal numbers.

There is older debugging documentation at
https://www.sqlite.org/debugging.html that might give additional
hints.

As you work through this problem, please provide feedback so that we
can improve the documentation for the next person who faces similar
issues.

On 3/22/18, Max Vlasov  wrote:
> Hi,
>
> I noticed an unexpected optimization on the sqlite side.
> Currently I cannot reproduce this with arbitrary test data (though I
> probably will eventually). Anyway, the logic behind it (pseudo-code
> query, with <fields> and <conditions> standing in as placeholders):
>
> Select <fields>, (Select count(*) from LookUpTable where
> LookUpTable.Value = TableValue) as StatCount from
> (
>   ... Select TableValue, ... left join ... left join ...
>   where <conditions>
> )
> where StatCount = ..
>
> The aggregate lookup (Select count(*)) is relatively expensive to
> perform and involves a virtual table on my side (LookUpTable). So one
> goal of <conditions> is also to narrow the data fed into this lookup.
> Most of the time (including my synthetic tests) the filter indeed works
> the expected way (filtering with <conditions> first, then performing
> the aggregate only for the rows that qualify), but for some queries
> with several joins sqlite starts to perform the lookup before applying
> <conditions>, so my expensive calculation runs for all rows of the
> inner joined table and only then is the result filtered with
> <conditions>. I double-checked this: since LookUpTable is my virtual
> table, I can set a breakpoint and inspect the passed value. Ironically,
> when I remove the outer condition (where StatCount ..) from the query
> in question, it runs much faster.
>
> I suspect this might be related to how I respond to the constraint cost
> requests from sqlite. For this virtual table the possible cost
> estimates are either 1 or a very big value. I see that the value 1 is
> indeed returned for this query, so sqlite might be assuming some
> absolute minimum cost for the lookup. But when I change it to a bigger
> value (still lower than the "very big value" also used), the query plan
> does not change.
>
> Here is a summary of the Explain Query Plan output (Rea is my virtual
> table; there are 4 joins in this query; sqlite 3.21.0).
>
> The Query with Outer condition "where StatCount  = .."
> SCAN TABLE
> SEARCH TABLE (2 times)
> EXECUTE CORRELATED SCALAR SUBQUERY 1
> SCAN TABLE Rea VIRTUAL TABLE INDEX 2:
> SEARCH TABLE (2 times)
> EXECUTE CORRELATED SCALAR SUBQUERY 2
> SCAN TABLE Rea VIRTUAL TABLE INDEX 2:
>
> The same query when I just removed the outer "where StatCount  = .."
> SCAN TABLE...
> SEARCH TABLE (4 times)
> EXECUTE CORRELATED SCALAR SUBQUERY 1
> SCAN TABLE Rea VIRTUAL TABLE INDEX 2:
>
>
> Can I manually influence the plan for this query, perhaps by further
> tweaking the virtual table costs?
>
> Thanks
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
D. Richard Hipp
d...@sqlite.org
