On Tue, Jul 16, 2019 at 4:44 AM Adrian Ho <ml+sql...@03s.net> wrote:

> On 15/7/19 11:25 PM, Nelson, Erik - 2 wrote:
> > Dominique Devienne wrote on Monday, July 15, 2019 2:41 AM
> >> That's when you reach for virtual tables (and their "virtual indices").
> >> I.e. you keep your data in native data structures (Boost.MultiIndex
> >> in my case), and just provide a SQLite view of it. Much faster than
> >> "pure in-memory" with SQLite-managed pager-backed B-tree tables.
> >> Steep learning curve, especially for the indexing part, but the
> >> results are well worth it IMHO. Which can be freely mixed with "real"
> >> tables having "real" indexes (in the in-memory DB).
> >
> > That sounds really intriguing - does it significantly speed up queries
> > coming in through the SQLite engine?
> > Or the speed bump is only if accessing from the C++-native side?
>

It's been years since I benchmarked this, but you can get 5x faster in some
use cases (compared to an in-memory DB, i.e. no disk IO at all, assuming no
swapping of course).

See, data in SQLite is "packed" into pages, in a compact format that must be
decoded/encoded on every read/write. Pages must be copied around, and B-trees
must be traversed, which means decoding pages again along the way. Your
vtables, on the other hand, reference C++ structs with fields holding "native"
values. If your "vtables" are backed by hash-based C++ collections, then a
by-primary-key lookup is super fast, and accessing the fields/columns of the
selected row needs no unpacking either.

Even full scans are typically faster too.
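
To make that concrete, below is a minimal sketch (not my actual code, and the
names are made up): a read-only, eponymous virtual table exposing a plain
std::vector of C++ structs. A real implementation would also map WHERE
constraints in xBestIndex onto the native (e.g. Boost.MultiIndex) indexes;
here xBestIndex just declares a full scan.

#include <sqlite3.h>
#include <string>
#include <vector>

struct Person { std::string name; int age; };
static std::vector<Person> g_people = {{"alice", 30}, {"bob", 40}};

struct PeopleVtab   { sqlite3_vtab base; };
struct PeopleCursor { sqlite3_vtab_cursor base; size_t row = 0; };

static int xConnect(sqlite3 *db, void*, int, const char* const*,
                    sqlite3_vtab **ppVtab, char**) {
  int rc = sqlite3_declare_vtab(db, "CREATE TABLE x(name TEXT, age INTEGER)");
  if (rc != SQLITE_OK) return rc;
  *ppVtab = reinterpret_cast<sqlite3_vtab*>(new PeopleVtab{});
  return SQLITE_OK;
}
static int xDisconnect(sqlite3_vtab *p) {
  delete reinterpret_cast<PeopleVtab*>(p); return SQLITE_OK;
}
static int xBestIndex(sqlite3_vtab*, sqlite3_index_info *info) {
  info->estimatedCost = (double)g_people.size();  /* full scan; no native index used */
  return SQLITE_OK;
}
static int xOpen(sqlite3_vtab*, sqlite3_vtab_cursor **ppCur) {
  *ppCur = reinterpret_cast<sqlite3_vtab_cursor*>(new PeopleCursor{});
  return SQLITE_OK;
}
static int xClose(sqlite3_vtab_cursor *c) {
  delete reinterpret_cast<PeopleCursor*>(c); return SQLITE_OK;
}
static int xFilter(sqlite3_vtab_cursor *c, int, const char*, int, sqlite3_value**) {
  reinterpret_cast<PeopleCursor*>(c)->row = 0; return SQLITE_OK;
}
static int xNext(sqlite3_vtab_cursor *c) {
  ++reinterpret_cast<PeopleCursor*>(c)->row; return SQLITE_OK;
}
static int xEof(sqlite3_vtab_cursor *c) {
  return reinterpret_cast<PeopleCursor*>(c)->row >= g_people.size();
}
static int xColumn(sqlite3_vtab_cursor *c, sqlite3_context *ctx, int col) {
  const Person &p = g_people[reinterpret_cast<PeopleCursor*>(c)->row];
  if (col == 0) sqlite3_result_text(ctx, p.name.c_str(), -1, SQLITE_TRANSIENT);
  else          sqlite3_result_int(ctx, p.age);  /* native field access, no row unpacking */
  return SQLITE_OK;
}
static int xRowid(sqlite3_vtab_cursor *c, sqlite3_int64 *pRowid) {
  *pRowid = (sqlite3_int64)reinterpret_cast<PeopleCursor*>(c)->row; return SQLITE_OK;
}

static sqlite3_module people_module = {
  0,        /* iVersion */
  nullptr,  /* xCreate = NULL: eponymous-only, queryable directly as "people" */
  xConnect, xBestIndex, xDisconnect,
  nullptr,  /* xDestroy */
  xOpen, xClose, xFilter, xNext, xEof, xColumn, xRowid,
  nullptr,  /* xUpdate = NULL: read-only */
  nullptr, nullptr, nullptr, nullptr,  /* xBegin/xSync/xCommit/xRollback */
  nullptr, nullptr                     /* xFindFunction, xRename */
};

Register it with sqlite3_create_module(db, "people", &people_module, nullptr),
and then e.g. SELECT name FROM people WHERE age > 35 walks native memory with
no page decoding at all.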

> The whole point of virtual tables is to make something outside an actual
> SQLite DB accessible via normal SQL queries within SQLite; you'd still
> use the normal access methods (pointer dereferencing, <map> lookups, etc.)
> from the native side.
>

Right. The vtables just provide another mechanism to access the same data that
is already part of the application and its data model, but unlike native code,
it's flexible and "runtime" configurable.

You can still always use the native C++ API to access the same containers and
data structures. But as Richard mentions, the amount of C++ code you replace
by crafting a non-trivial query is sometimes quite large, and it's often less
buggy to write the SQL than the C++. SQLite is fast enough that unless it's
performance-critical code, performance is not an issue at all. Just look at
DRH's own Fossil, and the super-complex queries it runs in mere milliseconds
or less.
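
As a hypothetical flavor of that (table, column and function names made up,
minimal error handling), here is an in-memory DB, a tiny scalar UDF, and one
grouped query doing what would otherwise be a fair amount of C++; the "real"
table could equally be the virtual table sketched above:

#include <sqlite3.h>
#include <cstdio>

static void age_bucket(sqlite3_context *ctx, int /*argc*/, sqlite3_value **argv) {
  /* Classify an age; any app-specific C++ logic can be exposed to SQL this way. */
  sqlite3_result_text(ctx, sqlite3_value_int(argv[0]) < 40 ? "under-40" : "40-plus",
                      -1, SQLITE_STATIC);
}

int main() {
  sqlite3 *db = nullptr;
  sqlite3_open(":memory:", &db);  /* pure in-memory DB, no disk IO */
  sqlite3_create_function(db, "age_bucket", 1, SQLITE_UTF8 | SQLITE_DETERMINISTIC,
                          nullptr, age_bucket, nullptr, nullptr);
  sqlite3_exec(db,
      "CREATE TABLE people(name TEXT, age INTEGER);"
      "INSERT INTO people VALUES ('alice', 30), ('bob', 40), ('carol', 52);",
      nullptr, nullptr, nullptr);

  const char *sql = "SELECT age_bucket(age) AS bucket, count(*), avg(age) "
                    "FROM people GROUP BY bucket ORDER BY bucket";
  sqlite3_stmt *stmt = nullptr;
  sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr);
  while (sqlite3_step(stmt) == SQLITE_ROW) {
    std::printf("%s: %d people, avg age %.1f\n",
                (const char*)sqlite3_column_text(stmt, 0),
                sqlite3_column_int(stmt, 1), sqlite3_column_double(stmt, 2));
  }
  sqlite3_finalize(stmt);
  sqlite3_close(db);
}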

Once you have SQLite embedded into your app, it tends to seep through
everywhere, with all settings/preferences readily available in (typically
"real") tables of the in-memory DB. Easily queryable and accessible, and even
runtime-editable if you allow it (see also "authorizer" in the doc).
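
For the "runtime editable if you allow it" part, a minimal sketch, assuming a
hypothetical "prefs" table: an authorizer that lets ad-hoc SQL (say, from a
debugging console) read anything but write only to prefs.

#include <sqlite3.h>
#include <cstring>

static int prefs_only_writes(void*, int action, const char *arg1,
                             const char*, const char*, const char*) {
  switch (action) {
    case SQLITE_INSERT:
    case SQLITE_UPDATE:
    case SQLITE_DELETE:
      /* For these actions arg1 is the table name. */
      return (arg1 && std::strcmp(arg1, "prefs") == 0) ? SQLITE_OK : SQLITE_DENY;
    case SQLITE_SELECT:
    case SQLITE_READ:
    case SQLITE_FUNCTION:
      return SQLITE_OK;    /* reads and function calls are fine */
    default:
      return SQLITE_DENY;  /* no DDL, ATTACH, PRAGMA, etc. from user SQL */
  }
}

Install it only around user-supplied SQL, and clear it for the app's own
statements:

  sqlite3_set_authorizer(db, prefs_only_writes, nullptr);
  /* ... run the user's query ... */
  sqlite3_set_authorizer(db, nullptr, nullptr);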

And you can also use it as your data format, of course, as often mentioned.
Even if you "serialize" your whole memory state on each save, it's still
efficient, and it leaves a path forward to one day enable incremental updates
of the state as it changes.
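
A minimal sketch of that "serialize the whole memory state on each save" idea
(hypothetical helper, 'mem' being the app's :memory: database), using the
online backup API; recent SQLite versions can also do VACUUM INTO 'file.db'
for the same effect:

#include <sqlite3.h>

int save_to_file(sqlite3 *mem, const char *path) {
  sqlite3 *file = nullptr;
  int rc = sqlite3_open(path, &file);
  if (rc == SQLITE_OK) {
    /* Copy the whole in-memory "main" DB into the file DB. */
    sqlite3_backup *bk = sqlite3_backup_init(file, "main", mem, "main");
    if (bk) {
      sqlite3_backup_step(bk, -1);  /* -1: copy every page in one pass */
      sqlite3_backup_finish(bk);
    }
    rc = sqlite3_errcode(file);
  }
  sqlite3_close(file);
  return rc;
}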

IMHO, most desktop apps would greatly benefit from in-memory SQLite combined
with vtables and custom functions. It's fantastic for debugging and
troubleshooting, and it's flexible and dynamic, letting you change the
behavior or appearance of your app and UI at runtime if you design for it.
SQLite is just a wonderful tool. It's not perfect of course, but it's so well
designed, so robust and well tested, and so fast when used judiciously, that
not using it is not an option, for me at least :). --DD