2011/11/9 Nico Williams <n...@cryptonector.com>

>
> I don't get it.  You're reading practically the whole file in a random
> manner, which is painfully slow, so why can't you read the file in one
> fell swoop (i.e., sequential reads)??
>

I'm only reading the whole file when the number of additional inserts is
high enough to cause the whole index to be read from disk. But if I always
pre-cache the database, it will degrade performance for cases where only
10 inserts need to be done. And I'd like to avoid having some fuzzy logic
that tries to predict which of the two methods is going to be faster.

Besides, pre-caching the file sounds easier than it actually is to
accomplish, because none of the methods suggested on this list worked on
Windows (for example, copying the file to NUL). Windows and the hard drive
have their own logic for deciding which data to cache, and I haven't found
a simple way to force a particular file into the cache.
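
For concreteness, by pre-caching I mean something like the sequential
read loop below (a rough sketch; the chunk size is arbitrary, and on
Windows the cache manager is free to discard the pages anyway, which is
exactly the problem):

    #include <stdio.h>

    /* Read the file front to back and throw the data away, hoping the
       OS keeps the pages in its file cache.  On Windows there is no
       guarantee that it will. */
    static int prewarm(const char *path)
    {
        char buf[8192];
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            return -1;
        while (fread(buf, 1, sizeof buf, f) > 0)
            ;  /* discard; only the sequential read matters */
        fclose(f);
        return 0;
    }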

> Or, if FTS really works better, then use that.


I will, but I'm trying to understand the issue I'm facing, not just work
around it. It seems that FTS doesn't need to read the whole index from
disk, so I'm trying to pinpoint the difference. My best guess is that it
creates a fresh b-tree for the additional inserts, which causes the boost
in performance.
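
If that guess is right, the same trick could in principle be done by
hand: stage the new rows in a fresh table (a small, new b-tree) and then
merge them into the main table in index order, so the big index gets
updated in one mostly sequential pass. A hypothetical sketch (the
"words"/"staging" schema is made up for illustration; FTS's real segment
merging is more elaborate):

    #include <sqlite3.h>

    /* Stage inserts in a TEMP table so each insert touches a small new
       b-tree instead of the large existing index, then merge in sorted
       order.  Schema names are invented for this example. */
    static int merge_staged_inserts(sqlite3 *db)
    {
        int rc = sqlite3_exec(db,
            "CREATE TEMP TABLE IF NOT EXISTS staging(word TEXT);",
            NULL, NULL, NULL);
        if (rc != SQLITE_OK) return rc;

        /* ... the pending INSERTs go into staging here ... */

        return sqlite3_exec(db,
            "INSERT INTO words(word) SELECT word FROM staging ORDER BY word;"
            "DELETE FROM staging;",
            NULL, NULL, NULL);
    }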