On 29 Oct 2015, at 7:49pm, Jason H <jhihn at gmx.com> wrote:

> If I were to try to work around all this excess reading, how good or bad 
> would it be to take the following approach:
> Define a set of smaller tables (column count and size) with a common key, 
> then create a view for the specific query, then query the view.
> 
> create table k (id integer primary key, ... other always used columns...);
> create table c1 (k integer, data text); -- seldom-used large column
> create table c2 (k integer, data text); -- seldom-used large column
> create table c3 (k integer, data text); -- seldom-used large column
> ...
> create table c128 (k integer, data text); -- seldom-used large column
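> 
> -- and then, for a query needing (say) columns 1 and 7, a view along
> -- these lines (a sketch; the view and alias names are made up):
> create view v_1_7 as
>   select k.*, c1.data as data1, c7.data as data7
>     from k
>     left join c1 on c1.k = k.id
>     left join c7 on c7.k = k.id;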

Handling large numbers of tables is slow.  Handling large numbers of rows is 
fast.  For cases where you have a "seldom-used large column", use sparse storage:

CREATE TABLE sparseColumns (id INTEGER, k INTEGER, value TEXT,
    PRIMARY KEY (id, k));  -- id = row, k = "column" number
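
A sketch of how that might be used (the row and column numbers are invented 
for illustration):

INSERT INTO sparseColumns (id, k, value) VALUES (42, 17, 'rarely-needed text');

SELECT value FROM sparseColumns WHERE id = 42 AND k = 17;

Rows which have no value for a given "column" simply have no entry, so unused 
columns cost nothing, and the lookup is a single primary-key probe.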

Generally speaking, SQLite is ridiculously good at this stuff, and trying to work 
around it only makes things worse.  It's in its third version, and even version 
3 has been worked on, including major rewrites, for ten years.  It doesn't do 
slow, complicated things (row-level locking, character sets).  By all means tell 
us when you get slow results, but if you're trying to predict slow results just 
from reading the documentation, you probably won't get useful data.

Simon.
