Hello,

My database contains a 'time' field where I store the unix epoch
timestamp (generated programmatically with the time() function in a
C/Linux environment). The records are therefore inserted in time order,
although there may be multiple records with the same timestamp. I also
have an index on this time field.
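
For reference, the table looks roughly like this (the table name and
the payload column are simplified placeholders; the real table has more
columns):

    CREATE TABLE records (
        time    INTEGER NOT NULL,   -- unix epoch seconds from time()
        payload TEXT                -- placeholder for the actual data
    );
    CREATE INDEX idx_records_time ON records(time);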

I have to keep rolling the database so that it contains only the last
5 days of data, and I also have a restriction on the database size.
Therefore I have a function that is invoked periodically and decides
whether any data needs to be deleted. If so, it calculates a cutoff
time: all records prior to that time can be deleted.

However, the delete command "DELETE FROM table WHERE time < XXX" is
very slow (XXX is a time_t value), and the VACUUM afterwards is even
slower. (I need to VACUUM because of the restriction on the database
size.) My database has 1 million records and is more than 100 MB in
size.
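
In essence the cleanup boils down to something like this (the cutoff is
written here as an SQL expression for brevity; in my code it is a
time_t computed in C and substituted into the statement):

    -- delete everything older than 5 days, then reclaim the space
    DELETE FROM records
     WHERE time < CAST(strftime('%s', 'now', '-5 days') AS INTEGER);
    VACUUM;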

What would be the most efficient way to perform this deletion?

I suspect that SQLite is scanning all rows, even though, since the
records are ordered by time, it could stop scanning as soon as it hits
the first record that does not satisfy the condition.
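
Is something like EXPLAIN QUERY PLAN the right way to check whether the
index is actually being used? For example, for the equivalent SELECT
(the numeric cutoff is just a placeholder):

    EXPLAIN QUERY PLAN
    SELECT * FROM records WHERE time < 1210000000;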

Also, when I need to make manual queries on time, I use something like
"SELECT * FROM table WHERE datetime(time, 'unixepoch', 'localtime') <
'2008-05-04 13:34:45'". Is this the best way of querying (and storing)
the time?
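
For comparison, the alternative I am wondering about is to convert the
literal once instead of converting the column for every row, along
these lines (assuming the literal is local time):

    SELECT * FROM records
     WHERE time < CAST(strftime('%s', '2008-05-04 13:34:45', 'utc') AS INTEGER);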

Thanks,
Bikash

