O'Toole, Eamonn wrote:
> We are seeing a performance anomaly on updates to existing object
> records in the SQLite DB.  If the container DB is sufficiently large
> (about 10 million objects, 3.3GB) then the time to update records at
> the beginning of the database by order of entry is anomalously high.

Does VACUUM change anything?
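
A quick way to try that and see whether free pages are the culprit, sketched
with Python's sqlite3 module (`:memory:` and the tiny `object` table here are
stand-ins for the real 3.3 GB container DB):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for the real container DB file
con.execute("CREATE TABLE object (name TEXT PRIMARY KEY, deleted INT)")
con.executemany("INSERT INTO object VALUES (?, 0)",
                ((f"obj{i:06d}",) for i in range(10000)))
# delete a contiguous rowid range so whole leaf pages land on the freelist
con.execute("DELETE FROM object WHERE rowid <= 5000")
con.commit()

before = con.execute("PRAGMA freelist_count").fetchone()[0]
con.execute("VACUUM")  # rebuilds the database file, compacting the B-trees
after = con.execute("PRAGMA freelist_count").fetchone()[0]
print(before, after)  # VACUUM should leave no free pages
```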

> The time is particularly bad for the first approx. 100K records, is
> somewhat better for the next 900K records, and settles down to
> a consistent average from approx 1 million records on.  If this
> consistent average time is around 7 seconds for 10,000 updates, then
> we see times of about 170 seconds for 10,000 updates on the first 100K
> records.  We don't see this anomalously high update time if we start
> the updates after the first 1 million records.
>
> Note that table updates are performed by first DELETEing the entry and
> then INSERTing the changed entry.

Is it the DELETE or the INSERT that is slow, or both?

Are you addressing the records by ROWID or by name?
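
One way to tell the two apart is to time each statement separately; a sketch
against a toy schema (the real container DB has more columns):

```python
import sqlite3, time

con = sqlite3.connect(":memory:")  # substitute the real container DB path
con.execute("CREATE TABLE object (name TEXT PRIMARY KEY, deleted INT)")
con.executemany("INSERT INTO object VALUES (?, 0)",
                ((f"obj{i}",) for i in range(100000)))
con.commit()

t0 = time.perf_counter()
con.execute("DELETE FROM object WHERE name = ?", ("obj42",))
# addressing by ROWID instead would be: DELETE FROM object WHERE rowid = ?
t1 = time.perf_counter()
con.execute("INSERT INTO object VALUES (?, ?)", ("obj42", 1))
t2 = time.perf_counter()
con.commit()
print(f"DELETE {t1 - t0:.6f}s  INSERT {t2 - t1:.6f}s")
```

Run over a batch of 10,000 names from the slow region, the two totals should
show which side carries the 170 seconds.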

> Note that Chunky_Ks (IRC) has suggested that B-tree rebalancing could
> be the fundamental issue.

How big are the records, compared to the page size?
(And what is the output of sqlite3_analyzer?)
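
For the page-size half of the question, a pragma is enough (sqlite3_analyzer,
run separately, gives the per-table and per-index breakdown); a sketch with
`:memory:` standing in for the real file:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # substitute the real container DB path
con.execute("CREATE TABLE object (name TEXT PRIMARY KEY, deleted INT)")
page_size = con.execute("PRAGMA page_size").fetchone()[0]
page_count = con.execute("PRAGMA page_count").fetchone()[0]
print(page_size, page_count, page_size * page_count)  # total bytes in the file
```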

What kind of updates are these?  If you're changing many values in the
'deleted' column in the same way, it may be that the
ix_object_deleted_name index needs to be rebalanced.

> CREATE TRIGGER object_update BEFORE UPDATE ON object
>             BEGIN
>                 SELECT RAISE(FAIL, 'UPDATE not allowed; DELETE and INSERT');
>             END;

Couldn't you implement this by running the code of both DELETE and
INSERT triggers for the old and new records?  (This should avoid the
need to rebalance the object table.)
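
For example, assuming the existing DELETE and INSERT triggers do some
bookkeeping (the `object_stats` table and its triggers below are made up
for illustration), an UPDATE trigger could apply the old-record and
new-record maintenance in place of the RAISE:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE object (name TEXT PRIMARY KEY, size INT, deleted INT);
CREATE TABLE object_stats (object_count INT, bytes_used INT);
INSERT INTO object_stats VALUES (0, 0);

-- hypothetical existing maintenance triggers
CREATE TRIGGER object_insert AFTER INSERT ON object BEGIN
    UPDATE object_stats SET object_count = object_count + 1,
                            bytes_used   = bytes_used + new.size;
END;
CREATE TRIGGER object_delete AFTER DELETE ON object BEGIN
    UPDATE object_stats SET object_count = object_count - 1,
                            bytes_used   = bytes_used - old.size;
END;

-- instead of RAISE(FAIL, ...): run the delete-side bookkeeping for the
-- old record and the insert-side bookkeeping for the new one, so UPDATE
-- stays allowed
CREATE TRIGGER object_update AFTER UPDATE ON object BEGIN
    UPDATE object_stats SET bytes_used = bytes_used - old.size + new.size;
END;
""")

con.execute("INSERT INTO object VALUES ('a', 100, 0)")
con.execute("UPDATE object SET size = 250 WHERE name = 'a'")
stats = con.execute(
    "SELECT object_count, bytes_used FROM object_stats").fetchone()
print(stats)  # (1, 250)
```

The UPDATE then modifies the existing cell in the object table's B-tree
instead of deleting it from one leaf and re-inserting it into another.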


Regards,
Clemens
_______________________________________________
sqlite-users mailing list
[email protected]
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
