Or you can do your immediate writes to an in-memory database, and have another process dump memory to disk in the background. Depending on how fresh your reads need to be, you can read from the copy in memory or from the one on disk.
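A minimal sketch of that idea using Python's built-in sqlite3 module and SQLite's online backup API; the table name "log", the file name "snapshot.db", and the single one-shot dump thread are all illustrative assumptions, not part of the original suggestion:

```python
import sqlite3
import threading

# Writers go to an in-memory database; check_same_thread=False lets the
# background thread read the same connection for the backup.
mem = sqlite3.connect(":memory:", check_same_thread=False)
mem.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, payload TEXT)")

def dump_to_disk():
    # Copy the in-memory database to disk via the online backup API.
    disk = sqlite3.connect("snapshot.db")
    with disk:
        mem.backup(disk)
    disk.close()

# Fast in-memory inserts...
with mem:
    mem.executemany("INSERT INTO log (payload) VALUES (?)",
                    [("row %d" % i,) for i in range(1000)])

# ...while a background thread (run once here for brevity; a real setup
# would loop on a timer) dumps the database to disk.
t = threading.Thread(target=dump_to_disk)
t.start()
t.join()

# Readers that can tolerate slightly stale data open snapshot.db instead.
check = sqlite3.connect("snapshot.db")
print(check.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 1000
```

In a long-running process the dump thread would wake periodically; readers then choose between the fresh in-memory copy and the slightly stale disk copy.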
It seems I have reached the CPU limit (>90% on one core) and am no longer waiting for the disk. Also, I'm not using fsync ("PRAGMA synchronous"), so the OS disk cache is in effect anyway.
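For reference, the pragma mentioned above can be set like this (a sketch; the file name is made up, and the journal_mode line is an extra assumption often combined with it, not something stated in the post). With synchronous=OFF, SQLite never calls fsync(), so commits only reach the OS cache and a power failure can lose or corrupt data; that is the trade-off being accepted here:

```python
import sqlite3

con = sqlite3.connect("fast.db")
con.execute("PRAGMA synchronous = OFF")      # skip fsync on commit
con.execute("PRAGMA journal_mode = MEMORY")  # keep the rollback journal in RAM

# Querying the pragma returns 0 for OFF.
print(con.execute("PRAGMA synchronous").fetchone()[0])  # 0
```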
If I need more throughput, I might go multi-threaded for the writes (if possible).
However, it would be interesting to know where the time really goes; mine is an append-only workload. I think most of the time is spent updating/maintaining the primary-key btree, which is a plain "INTEGER PRIMARY KEY" receiving NULL on inserts, so the values are auto-generated.
I am using a virtual table holding a block of values (all fields except the rowid) and an "insert into <target> select * from <virtualtable>".
Is there a way to optimize this simple case (since the number of records is known, all the new rowids are effectively known in advance)?
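One thing worth trying along those lines (a sketch, not an answer from the SQLite developers; table and column names are invented): since the rows are known in advance, the rowids can be computed once up front and supplied explicitly instead of NULL. SQLite then does not have to derive the next rowid per insert, and the strictly ascending keys keep each insert a pure append into the primary-key btree:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")

rows = ["a", "b", "c"]
# Compute the next free rowid once, instead of letting SQLite derive it
# on every insert from a NULL primary key.
start = con.execute("SELECT COALESCE(MAX(id), 0) FROM target").fetchone()[0] + 1

with con:  # one transaction for the whole batch
    con.executemany("INSERT INTO target (id, val) VALUES (?, ?)",
                    [(start + i, v) for i, v in enumerate(rows)])

print(con.execute("SELECT id, val FROM target").fetchall())
# [(1, 'a'), (2, 'b'), (3, 'c')]
```

Whether this beats the NULL-key path in practice would need measuring; ascending auto-generated rowids are already close to the btree's best case.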
Gabriel

_______________________________________________
sqlite-users mailing list
[email protected]
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

