On 13 Dec 2016, at 1:23am, Vince Scafaria <[email protected]> wrote:

> Our application does a lot of SQLite read/write in a background process. On 
> some environments this hammers the disk I/O (e.g. Task Manager -> Disk). Do 
> you have suggestions on how we might minimize this?

Batch them together in a transaction:

BEGIN;
Lots of SQL changes;
COMMIT;

If you don’t explicitly open a transaction, SQLite automatically wraps one 
around each command.  Since most of the disk writes happen once per 
transaction, declaring an explicit transaction around each batch of, for 
example, a thousand changes means far less disk access.
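As a rough sketch of that batching (using Python’s built-in sqlite3 module; the table name and column are made up for illustration):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode,
# so we control the transaction boundaries ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE items (Id INTEGER PRIMARY KEY, val TEXT)")

# One explicit transaction around a thousand changes: the journal
# and database file are synced once per COMMIT, not once per INSERT.
conn.execute("BEGIN")
for i in range(1000):
    conn.execute("INSERT OR REPLACE INTO items (Id, val) VALUES (?, ?)",
                 (i, "x"))
conn.execute("COMMIT")
```

The same thousand statements issued without the BEGIN/COMMIT pair would each get their own implicit transaction, and each would hit the disk separately.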

Does data integrity matter if the application crashes ?  Or is the file 
important only to the current session ?  If data integrity isn’t important you 
could try some of the PRAGMAs which speed things up.
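For example (a sketch only; these trade crash-safety for speed, so use them only when the file matters just to the current session):

PRAGMA synchronous = OFF;       -- don't wait for fsync after writes
PRAGMA journal_mode = MEMORY;   -- keep the rollback journal in RAM
PRAGMA temp_store = MEMORY;     -- temporary tables and indexes in RAM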

> Our tables all have a primary key on Id. By including Id in the statement, it 
> does a REPLACE. As I understand it, replace includes a behind-the-scenes 
> DELETE which triggers a re-indexing. Maybe that is part of the problem, but I 
> don't see a workaround. Ideas?

Don’t worry about that.  It’s already being done in the most efficient way 
practical.

Simon.
_______________________________________________
sqlite-users mailing list
[email protected]
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users