I vacuum frequently, particularly after large updates or deletes.

Two other potential optimizations for inserts (you may already be doing
these):
- use bulk inserts
- wrap the inserts in a transaction

One other note: if you have a primary key whose value is continually
increasing, your pk index can become unbalanced and therefore
inefficient.  You may be able to get around this by using INTEGER
PRIMARY KEY (or INTEGER PRIMARY KEY AUTOINCREMENT).  As a side benefit,
this kind of column queries very efficiently, since the column value is
the row's address (the ROWID).

-Jeff

-----Original Message-----
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Wes Freeman
Sent: Tuesday, June 16, 2009 4:03 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Database inserts gradually slowing down

VACUUM cleaned up the file in my current test: after 1200 iterations
it ran at ~4.6 seconds again, rather than ~5.1. That got it almost
back to the performance of a clean file.

Didn't know about the VACUUM command -- cool. By the way, the vacuum
operation takes ~1.6 seconds for my test database. Seems like a
reasonable thing to run before a backup or the like, if your
application is prone to this sort of fragmentation.

Wes

On Tue, Jun 16, 2009 at 1:44 PM, Hoover, Jeffrey<jhoo...@jcvi.org>
wrote:
> Wouldn't a periodic VACUUMing of the database alleviate fragmentation?
>
> - Jeff
>
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
