Hello,
Thanks for your attention,

> Although, generally speaking, such a method could be used in some
> situations, I don't think it's good to allow its use even with an "I know
> what I'm doing" pragma. Any structured file (sqlite is an example) has
> internal dependencies. One of the reasons to block is to write different
> parts of structured data together, without intervention from other parties,
> in order to keep the data integral. Imagine a write cache that kept changes
> for your writer; when it finally needed to flush the data, your "anytime"
> reader started to perform some query in the middle of this multi-page
> writing process. I can't predict whether the reader will end up with a
> valid result or return a "database malformed" error.
>
> Instead, consider changing your own logic. You wrote "without* any
> possibility to be blocked". I suppose you already have a perfect writer
> that fits your needs, but if you post some info about the nature of your
> writer and reader (records per second and the like), it would help to give
> more specific answers.
>
> Max


The writer application must be as failsafe as possible (it serves acoustic
emission recording devices); I simply cannot afford a situation where a
reader runs a SELECT and, because of a programming error, blocks the
acquisition. This happened to me just by opening sqliteman.

The recording rate is variable; with a test structure (~14 fields in one
table, all 32/64-bit integers) I was able to achieve ~90,000 records/sec
with the sqlite API, which was not really good enough (there is no journal;
I don't need any rollback / consistency check).
Then I made a virtual table wrapper for insert/select and reached ~170,000
rec/sec, which is already a big step forward. I suppose it is not planned,
but I would like to have some bulk insert API (not sql) to speed things up;
hdf5 with the packet table API reached ~700,000-800,000 rec/sec and is
essentially IO bound. I do not expect this kind of performance from sqlite
soon... Maybe cache settings might also help; suggestions are welcome.
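For reference, the standard batching tricks look roughly like this (a
minimal Python sketch using the stdlib sqlite3 module; the table name,
column layout, and batch size are made up for illustration, and the pragmas
deliberately trade durability for speed, which matches a no-journal,
no-rollback use case):

```python
import sqlite3

def bulk_insert(con, rows, batch=1000):
    """Insert rows in explicit per-batch transactions; one commit per
    ~1000 records amortizes the commit overhead across many inserts."""
    con.execute("CREATE TABLE IF NOT EXISTS samples "
                "(a INTEGER, b INTEGER, c INTEGER)")
    for i in range(0, len(rows), batch):
        with con:  # the connection context manager commits one batch
            con.executemany("INSERT INTO samples VALUES (?, ?, ?)",
                            rows[i:i + batch])

con = sqlite3.connect(":memory:")
con.execute("PRAGMA journal_mode = OFF")  # no rollback journal
con.execute("PRAGMA synchronous = OFF")   # no fsync; crash safety is given up
bulk_insert(con, [(i, 2 * i, 3 * i) for i in range(5000)])
print(con.execute("SELECT COUNT(*) FROM samples").fetchone()[0])  # 5000
```

This is only the SQL-level ceiling, of course; a true bulk insert API below
the SQL layer, as wished for above, would be a different thing.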
For reading the situation is much better; there I also used a workaround: a
fake aggregate function ("store(...)") works much faster (and is simpler to
implement) than a virtual table.
So the reader can keep up anyway; it just needs to "follow" the data, and
small delays are normal and acceptable.
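The "store(...)" trick can be approximated from Python as well (a
hypothetical sketch with the stdlib sqlite3 module; the class name and the
shared results list are my own invention, not the actual implementation):
the aggregate's step callback captures each row during a single scan, and
the rowid predicate lets the reader "follow" the data.

```python
import sqlite3

class Store:
    """Hypothetical "store(...)" aggregate: step() captures each row
    during the scan; finalize() just reports how many rows were stored."""
    results = []  # shared sink for the collected rows (toy simplification)

    def step(self, *values):
        Store.results.append(values)

    def finalize(self):
        return len(Store.results)

con = sqlite3.connect(":memory:")
con.create_aggregate("store", -1, Store)  # -1: variable argument count
con.execute("CREATE TABLE samples (a INTEGER, b INTEGER)")
con.executemany("INSERT INTO samples VALUES (?, ?)",
                [(i, i * i) for i in range(4)])

Store.results = []
# the reader "follows" the data by rowid; 0 stands for the last rowid seen
n = con.execute("SELECT store(a, b) FROM samples WHERE rowid > 0").fetchone()[0]
print(n, Store.results)  # 4 [(0, 0), (1, 1), (2, 4), (3, 9)]
```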


Back to the technical problem. Roughly described, I would expect that a
table is first expanded with new data, and then (at transaction end; I used
blocks of ~1000 records) some metadata about the table's pages is updated
(which pages hold the data). I imagine this could be made safe somehow (I'm
sorry, but I have no idea about the actual implementation): write the data
(the old data remains valid), invalidate the page list if necessary (one
single value, which would be atomic on most systems if properly aligned),
write the new page list, then mark it as valid again (nothing affects the
current behavior when the existing locking is used anyway).
The reader could read the "old" data without any problem; it would only need
to detect a "dirty" read if the page list changed incompatibly (if new pages
are only appended, the page list might always stay valid). In that case the
reader would retry in a short loop (similar to the current behavior when
locking) until it gets a "clean" page list.

A trivial example:
page count = 5, page list = 1,2,3,4,5
after writing 2 new pages:
append 6,7 to the page list (the old entries are unchanged), then set page
count = 7 (atomic)
The reader reads a page count of either 5 or 7; in either case the page list
it sees is valid.
If necessary, a (transaction) counter could be used to detect "dirty" reads
(when the lists are really incompatible).
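The invariant in this example can be modeled in a few lines (a toy Python
model of the idea only, not SQLite's actual file format; real lock-free
publication would additionally need aligned word writes and memory barriers
in C):

```python
class PageList:
    """Toy model of the append-only scheme: entries are never modified,
    entries beyond page_count are never read, the count is published last."""

    def __init__(self, pages):
        self.pages = list(pages)
        self.page_count = len(self.pages)  # one word, atomic if aligned

    def append(self, new_pages):
        self.pages.extend(new_pages)       # step 1: write the new entries only
        self.page_count = len(self.pages)  # step 2: publish the new count

    def snapshot(self):
        # a reader observes either the old or the new count; any published
        # prefix of the entry array is a valid page list
        return self.pages[:self.page_count]

pl = PageList([1, 2, 3, 4, 5])
pl.append([6, 7])
print(pl.snapshot())  # [1, 2, 3, 4, 5, 6, 7]
```

A transaction counter read before and after the scan (seqlock style) would
cover the incompatible case: if the counter changed, the reader discards the
result and retries.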

So the questions are: how does table expansion actually work, and would it
fit such a model? Surely the indexes might create problems... I only use the
built-in rowid during writing.

Thanks again,
Gabriel
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
