On 5 February 2012 19:11, Roger Binns <rog...@rogerbinns.com> wrote:

> The values for a row are stored sequentially.  Changing the size of a
> value will at least require rewriting the row.

Sure, but a row can be rewritten by streaming each byte into memory and
back out again, or by copying the whole row into memory and then
writing it back out. The former uses O(1) space, the latter O(n).
Reading the file format documentation doesn't help me differentiate
between these cases.

WRT premature optimisation, I just need to keep the high memory
watermark as low as I can. Consider a document X split into pages X1,
X2... Xn, where n can be arbitrarily large. I don't have enough memory
to contain all the text in X, so I need to work out whether I need to
make my rows Xi, which will make the kind of query I want to do much
less efficient, or whether I can store the whole of X in a single row.
The profiler might be able to answer this question, but so might
someone on this mailing list. I don't need the profiler to know which
question to *ask*.
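To make the trade-off concrete, here is a minimal sketch of the two layouts using Python's stdlib sqlite3 binding. The table names and sample text are hypothetical, and it assumes the SQLite build includes FTS4. The point it illustrates is the query cost of splitting: a phrase that spans a page boundary cannot match when each page Xi is its own row.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Option A (hypothetical schema): one row per page Xi.
# Each row stays small, but queries that cross page boundaries break.
con.execute("CREATE VIRTUAL TABLE doc_pages USING fts4(content)")
con.executemany(
    "INSERT INTO doc_pages(content) VALUES (?)",
    [("page one text",), ("page two text",)],
)

# Option B (hypothetical schema): the whole document X in a single row.
# Queries are straightforward, but updating the row may need the whole
# of X in memory at once -- which is exactly the open question.
con.execute("CREATE VIRTUAL TABLE doc_whole USING fts4(content)")
con.execute(
    "INSERT INTO doc_whole(content) VALUES (?)",
    ("page one text page two text",),
)

# A phrase query that straddles the page boundary only matches option B.
split = con.execute(
    "SELECT count(*) FROM doc_pages WHERE content MATCH '\"text page\"'"
).fetchone()[0]
whole = con.execute(
    "SELECT count(*) FROM doc_whole WHERE content MATCH '\"text page\"'"
).fetchone()[0]
print(split, whole)  # 0 1
```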

Also, if UPDATE fts4table SET content = content || 'new content'
effectively removes and re-inserts entries for every word in the
existing content, this is prima facie unacceptable. Just because it's
a bad idea to prematurely optimise doesn't mean it's a good idea to
code as if you have no idea of algorithmic complexity.
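For reference, this is the append pattern I mean, sketched with the stdlib sqlite3 binding (table name and text are hypothetical; assumes FTS4 is compiled in). The statement works, but my understanding is that behind the scenes FTS4 has no in-place append, so it must delete and re-insert index entries for every word in the existing content, not just the new words:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE fts4table USING fts4(content)")
con.execute("INSERT INTO fts4table(content) VALUES ('existing words here')")

# The append under discussion: as I understand it, this re-tokenizes the
# entire row, removing and re-adding index entries for every existing word.
con.execute("UPDATE fts4table SET content = content || ' new content'")

row = con.execute(
    "SELECT content FROM fts4table WHERE content MATCH 'new'"
).fetchone()
print(row[0])  # existing words here new content
```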

If there is a more suitable mailing list on which to ask my question,
please let me know.

Thanks,
Hamish
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
