I will answer your question with a question: does SQLite have fancy built-in
time series models it can launch directly against full table scans? Doubtful.
Whatever it does have will be commonplace, and you are unlikely to beat the
market with it { you aren't so likely to in the first place..but, hey, _someone_
has to keep it efficient :-) }. And if it's not built into the table scan, then
there will at least be some extraction-pipeline mumbo-jumbo in the way - to
what end?
This data is read-only, or at least append-only. So, 99% of the complexity of a
full DB is just not needed. I don't know SQLite well enough to know if it can
do memory-mapped IO, but I know such engines do exist. Even then, you still
wind up wanting to just cast a record reference to a native pointer type. So,
if you don't need transactions and indexing and all that, and you wind up doing
this simple thing inside the DB instead of in a flat file anyway, then you
aren't getting any value out of your dependency. IMO, dead-weight dependencies
are usually bad.
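To make the flat-file idea concrete, here is a minimal sketch in Python (NIO itself is Nim; the two-field record layout and file name are invented for illustration): fixed-size binary records plus a memory map give parse-free random access to any record by offset.

```python
import mmap
import os
import struct
import tempfile

# Invented record layout: (timestamp: int64, price: float64), little-endian.
REC = struct.Struct("<qd")

# Write a tiny sample flat file of fixed-size records.
path = os.path.join(tempfile.mkdtemp(), "ticks.dat")
with open(path, "wb") as f:
    for ts, px in [(1, 100.0), (2, 100.5), (3, 99.75)]:
        f.write(REC.pack(ts, px))

# Memory-map it read-only; record i lives at byte offset i * REC.size, so
# "indexing" is just pointer arithmetic and the OS page cache does the IO.
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
    n = len(m) // REC.size
    ts, px = REC.unpack_from(m, 1 * REC.size)  # random access to record 1

print(n, ts, px)
```

In a compiled language the `unpack_from` step becomes an actual cast of `base + i * sizeof(Rec)` to a record pointer, which is the zero-copy access mentioned above.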
More briefly, custom (even trade-secret) analytics or ML models are usually not
well aided by SQL engines. Meanwhile, rolling your own flat file is just as
trivial as I showed above..even more so with practice, even for little bear
brains. :-)
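As a sketch of how little "rolling your own" takes (Python again; the single-column float64 layout here is just an assumption for the example), an append-only writer plus a full-scan reader fit in about a dozen lines:

```python
import mmap
import os
import struct
import tempfile

# Invented format: raw little-endian float64s, appended one at a time.
path = os.path.join(tempfile.mkdtemp(), "prices.f8")

def append(px: float) -> None:
    with open(path, "ab") as f:  # append-only: no locks, no WAL, no schema
        f.write(struct.pack("<d", px))

for px in [10.0, 10.5, 11.25]:
    append(px)

# A scan pass: map the file and view it as native doubles, zero parsing.
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
    xs = memoryview(m).cast("d")  # record reference -> native doubles, roughly
    n = len(xs)
    total = sum(xs)
    xs.release()  # release the view before the map is closed

print(n, total)
```

Repeated passes then just re-walk the same warm OS page cache, which is what keeps many scans over the same data cheap.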
So, you can always do whatever you want, but as I mentioned, people get set in
their IO ways..not quite human nature, but SW dev nature. :-) And maybe I'm
even guilty of that in pushing this stuff. You may find answers (or more
questions/complaints) in the [NIO
FAQ](https://github.com/c-blake/nio/blob/main/FAQ.md), which is really more
like an "_every_ question someone's asked me about this in 20 years" document.
I make no claim that it's perfect for everything, but this kind of IO is
especially well suited to this kind of back-testing use case, where you might
do millions of passes over some of the data.