On Wed, Jul 24, 2019 at 10:45 AM Hick Gunter <h...@scigames.at> wrote:

> The speed of a virtual table depends on the backing store and software
> used to implement it.
>

[DD] Sure, virtual tables can also access the disk and do expensive things.
[DD] I did say "for example" regarding my fast, pure-memory, no-decoding case.


> We have virtual tables that reference CTree files as well as virtual
> tables that reference memory sections here.

> The advantage is that the VT implementation can adjust its answers in the
> xBestIndex function.


[DD] I'm not sure I see your point. My point (and Justin's, if I understand
[DD] him right) is that the relative costs of tables vs. virtual tables are
[DD] hard to figure out, which could skew the planner toward sub-optimal plans.

[DD] Most of my queries involve only my own virtual tables, so I use
[DD] arbitrary relative costs: 1 when returning a single row via a (virtual)
[DD] unique index or PK, 2 when returning a range of rows, and 4 for a full
[DD] table scan. But these costs, relative only among my own vtables, are
[DD] probably completely wrong when mixed with "real" tables, whether
[DD] disk-based or in-memory. There must be some meaningful correlation
[DD] between all the costs for the planner to find an optimal plan.
[DD] Or am I missing something? --DD
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
