Hello, Rene

> Hi there,
> 
> I have to store and retrieve up to 2000 parameters.
> The parameters can have real and integer values.
> The max. change rate is 100ms and the max. duration is up to some hours.
> 
> The simple solution would be to use plain binary files. It's fast but not 
> flexible.
> 
> So I came to sqlite. Before starting I would like to ask if someone could 
> give 
> me some advice which way to go. Here are my current versions.
> 
> Version 1: One big table with time stamp and one column for each parameter
> - I can have up to SQLITE_MAX_COLUMNS=2000 but how can I insert all values 
> with only SQLITE_MAX_VARIABLE_NUMBER=999?
> - What about space consumption if NULL values are given?

You can always recompile SQLite with the limits you need
(SQLITE_MAX_COLUMNS and SQLITE_MAX_VARIABLE_NUMBER are compile-time
parameters). Or you can write your own virtual table implementation.
In my opinion, a virtual table would be the most efficient
way to feed data into SQLite.
Example:

INSERT INTO my_table(timestamp, param_1, ..., param_n) 
SELECT CURRENT_TIMESTAMP, _1, _2, ..., _n FROM my_virtual_table;

Though implementing a virtual table correctly will take time
that you could spend elsewhere.
The rule of thumb: do not bother with a virtual table unless the CPU is the bottleneck.

Use this approach if you always need to query all of the parameters.
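If you don't want to recompile, one common workaround for the bind-variable
limit is to render the (purely numeric) values as SQL literals instead of
bound parameters. A minimal Python sketch, assuming a made-up wide table
(the names "samples" and "param_N" are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
n = 1500  # more parameters than the default SQLITE_MAX_VARIABLE_NUMBER of 999
cols = ", ".join(f"param_{i} REAL" for i in range(n))
conn.execute(f"CREATE TABLE samples (ts INTEGER, {cols})")

values = [float(i) for i in range(n)]
# Numeric values are rendered as literals, so no bound parameters are
# needed and the bind-variable limit never comes into play. This is safe
# here only because the values are known to be numbers, never strings.
literals = ", ".join(repr(v) for v in values)
conn.execute(f"INSERT INTO samples VALUES (0, {literals})")

row = conn.execute("SELECT param_42 FROM samples").fetchone()
print(row[0])  # 42.0
```

The statement stays well under the default SQLITE_MAX_SQL_LENGTH of
1,000,000 bytes, so 2000 columns are not a problem in practice.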

> 
> Version 2: One table for each parameter with time stamp and value
> - Does this scale for up to 2000 parameters?
> - Can sqlite handle so much tables?
> 

First of all, it depends on your usage. If at any given point in
time you may need only one or a few parameters and don't care about
the others, you should group them in separate tables, or even give each
parameter its own table. In that case, querying individual parameters or 
groups of parameters will be faster, since SQLite will not pick up 
extra data from the disk. But you must realise that each table will have 
its own copy of the timestamp, and potentially a ROWID (for non-WITHOUT ROWID
tables), which will eat up extra space. Inserts and updates will also be much
more expensive, because the data in the database will be fragmented:
if inserting one row into a 2000-column table takes a couple of page
writes, inserting into 2000 tables means at least 2000 pages
written to disk.

Consider also that in this layout it would be a nightmare to 
ensure that every timestamp value is associated with a value for every
parameter (if you have to provide that guarantee).
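That guarantee is usually handled by capturing one timestamp and writing all
per-parameter rows inside a single transaction. A sketch, assuming
hypothetical parameter tables ("temp", "pressure", "flow" are made up):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
params = ["temp", "pressure", "flow"]  # stand-ins for the real parameter names
for p in params:
    conn.execute(f"CREATE TABLE {p} (ts INTEGER, value REAL)")

def record(values):
    ts = int(time.time() * 1000)  # one timestamp shared by every table
    with conn:  # single transaction: all rows commit, or none do
        for p, v in values.items():
            conn.execute(f"INSERT INTO {p} VALUES (?, ?)", (ts, v))

record({"temp": 21.5, "pressure": 1013.0, "flow": 0.8})
rows = [conn.execute(f"SELECT ts FROM {p}").fetchone()[0] for p in params]
print(len(set(rows)))  # 1 -> every table got the same timestamp
```

The transaction also collapses the 2000 inserts into one fsync, which
matters at a 100ms sample rate.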

Yes, SQLite can handle this many tables.

> Version 3: One table with time stamp, parameter id and parameter value
> - Is it working when all values change?
> - Is retrieving values for one parameter fast?

For me, this looks like the most efficient way of implementing what you want.

Retrieval speed will depend on the indexes you have.
If, when querying parameters, you always specify a timestamp
or a time range, you can get away with a compound index (timestamp, parameter_id).
Then selects like:

SELECT value FROM table WHERE timestamp = X;
SELECT value FROM table WHERE timestamp = X AND ...;
SELECT value FROM table WHERE timestamp > X;
SELECT value FROM table WHERE timestamp < X;
SELECT value FROM table WHERE timestamp > X AND timestamp < Y;
SELECT value FROM table WHERE timestamp IN (X1, ..., Xn);
....

will always be efficient.

Do not create a separate index like this; make it the PRIMARY KEY.
And experiment with how WITHOUT ROWID performs.
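Putting that together, a minimal sketch of this schema (table and column
names are my own, not anything the poster specified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The compound PRIMARY KEY doubles as the index; WITHOUT ROWID stores rows
# directly in the primary-key b-tree, saving the implicit rowid column.
conn.execute("""
    CREATE TABLE readings (
        timestamp    INTEGER NOT NULL,
        parameter_id INTEGER NOT NULL,
        value        REAL,
        PRIMARY KEY (timestamp, parameter_id)
    ) WITHOUT ROWID
""")

conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [(t, p, t * 0.1 + p) for t in (100, 200) for p in range(3)],
)

# Equality on the leading key column is satisfied by the primary-key b-tree.
rows = conn.execute(
    "SELECT parameter_id, value FROM readings"
    " WHERE timestamp = 200 ORDER BY parameter_id"
).fetchall()
print(rows)  # [(0, 20.0), (1, 21.0), (2, 22.0)]
```

With 2000 parameters at 100ms that is 20000 rows per second, which this
schema handles comfortably when inserts are batched in transactions.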


> 
> I will write and read the data on the same time. But writing should have 
> priority. I could live with a potential lost of data because of a power 
> failure. So I will use try the settings of the options "synchronous" and 
> "journal_mode". Anything else I should consider?
> 

You may also play around with the temp_store and cache_size pragmas.
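For example, the pragmas mentioned in this thread can be combined like this
(the values are illustrative, not tuned recommendations):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "log.db")
conn = sqlite3.connect(path)

# Trade durability for write speed; acceptable here since the poster
# can tolerate losing data on a power failure.
conn.execute("PRAGMA synchronous = OFF")
conn.execute("PRAGMA journal_mode = WAL")   # readers don't block the writer
conn.execute("PRAGMA temp_store = MEMORY")  # keep temp tables/indexes in RAM
conn.execute("PRAGMA cache_size = -65536")  # negative = size in KiB (64 MiB)

mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)  # wal
```

WAL is worth a look for your read-while-write case: the writer appends to
the log while readers see a consistent snapshot.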
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
