I asked a similar question before but received no response, so maybe it's
a very stupid question; if so, feel free to say so.

I create a database, create one table with a TEXT column, insert 1 million
rows in 10 seconds, create an index in 10 seconds, VACUUM the database, and
close the database.
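
For concreteness, here is roughly what I am doing, as a minimal sketch
using Python's sqlite3 module (the file, table, and column names are
placeholders, not my real schema):

import sqlite3
import random
import string

def random_text(n=32):
    # Placeholder payload; my real data is different.
    return ''.join(random.choices(string.ascii_lowercase, k=n))

con = sqlite3.connect('test.db')
con.execute('CREATE TABLE items (value TEXT)')
with con:  # one transaction for the whole bulk insert (~10 seconds)
    con.executemany('INSERT INTO items (value) VALUES (?)',
                    ((random_text(),) for _ in range(1_000_000)))
con.execute('CREATE INDEX idx_items_value ON items (value)')
con.commit()
con.execute('VACUUM')  # VACUUM has to run outside a transaction
con.close()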

Now if I re-open the database, I can add an additional 10,000 rows very
fast (under 1 second). But if I reboot the (Windows) PC and then insert an
additional 10,000 rows, it takes at least 30 seconds, which seems very slow
given that I can insert the first 1 million in under 10 seconds.
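
The re-open case looks like this (same placeholder schema as in the sketch
above; all 10,000 rows go in as a single transaction, so there is only one
commit):

import sqlite3
import random
import string

con = sqlite3.connect('test.db')
with con:  # single transaction for the whole batch
    con.executemany(
        'INSERT INTO items (value) VALUES (?)',
        ((''.join(random.choices(string.ascii_lowercase, k=32)),)
         for _ in range(10_000)))
con.close()

This is the step that is fast on a warm database and slow after a reboot.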

It probably has to do with inefficient disk seeks, but even if SQLite has
to read the whole database into memory to do the inserts, 30 seconds is
still slow for a 150 MB file (the database).

So can anyone explain these slow inserts, or suggest ways to optimize a
scenario like the one above? Currently this renders my application almost
unusable.

The only optimization I can think of is dropping the index, inserting the
rows, and re-creating the index. But that would require some smart logic to
decide when to do it (for just 3 inserts it would make things slower
instead of faster), and it feels more like a work-around than a solution; a
sketch of what I mean follows.

Please help me out, thanks in advance!