First, thanks for all your answers. I would like to come back to my question (see first post).
All events arrive sequentially (it is a time line). More than one event can occur at the same time, but I use a different bit to identify each event type. I think it's important to use this sequential constraint for the delete request, but SQLite3 uses a B-Tree algorithm... When I delete rows, it is always the oldest events (those before a specific time = 'Ts'). In other words, I delete all rows before 'Ts', like a FIFO. I don't want to scan my whole event table, only the oldest events (the first rows); as soon as I find a time greater than 'Ts' I can stop the delete operation.

Do you think it's better to use one more table for optimization? For example, creating a "summarised" table in which one row summarises all the events (detailed in the general table) that happened in a time lapse (= the time between two rows). This "summarised" table could be created with the following request:

    CREATE TABLE bloc_events(
        bloc_id         INTEGER NOT NULL PRIMARY KEY,
        bloc_start_time REAL,
        events_resume   BLOB
    );

And now my event table looks like this:

    CREATE TABLE events(
        time    REAL NOT NULL PRIMARY KEY,
        event   BLOB,
        bloc_id INTEGER
    );

'events_resume' is the logical OR of all event types detected in the corresponding group of events in the events table. 'bloc_start_time' is the start time of a new group of events. The difference between two successive bloc_events rows is the group precision (maybe 10 minutes).

To know the oldest events (bloc_id) to delete, I check the bloc_events table first, then I delete the rows in the events table that have the same bloc_id found in bloc_events. How do I create a FOREIGN KEY with SQLite3?

Thanks in advance for your answers.
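A minimal sketch of the two-table scheme described above, using Python's sqlite3 module as a harness. The julian day values and blob contents are made-up sample data, not values from the original application. Note that SQLite accepts FOREIGN KEY syntax in any version, but only enforces the constraint from version 3.6.19 onward, and only when `PRAGMA foreign_keys = ON` is issued on the connection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FOREIGN KEY clauses are parsed by all SQLite versions, but enforcement
# requires this pragma (per connection) and SQLite >= 3.6.19.
conn.execute("PRAGMA foreign_keys = ON")

conn.execute("""CREATE TABLE bloc_events(
    bloc_id         INTEGER NOT NULL PRIMARY KEY,
    bloc_start_time REAL,
    events_resume   BLOB
)""")
conn.execute("""CREATE TABLE events(
    time    REAL NOT NULL PRIMARY KEY,
    event   BLOB,
    bloc_id INTEGER REFERENCES bloc_events(bloc_id)
)""")

# Two blocks of events; the times below are made-up julian day numbers.
conn.execute("INSERT INTO bloc_events VALUES (1, 2454633.000, x'01')")
conn.execute("INSERT INTO bloc_events VALUES (2, 2454633.007, x'03')")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    (2454633.001, b"\x01", 1),
    (2454633.002, b"\x01", 1),
    (2454633.008, b"\x02", 2),
])

# FIFO purge before Ts: find the old blocks in the small table first,
# then delete the matching detail rows from the big events table.
ts = 2454633.005
old_ids = [row[0] for row in conn.execute(
    "SELECT bloc_id FROM bloc_events WHERE bloc_start_time < ?", (ts,))]
conn.executemany("DELETE FROM events WHERE bloc_id = ?",
                 [(i,) for i in old_ids])
conn.executemany("DELETE FROM bloc_events WHERE bloc_id = ?",
                 [(i,) for i in old_ids])

print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])       # 1
print(conn.execute("SELECT COUNT(*) FROM bloc_events").fetchone()[0])  # 1
```

The child rows are deleted before their parent block so the foreign key constraint is never violated during the purge.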
Regards,
Raphaël

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On behalf of Dennis Cote
Sent: Tuesday, June 17, 2008 18:59
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Long delay for delete/select row in database

Raphaël KINDT wrote:
>
> I'm working on a video monitoring program that uses a frame grabber
> (12 cameras). This program records videos and detects some
> (input/output) events such as: motion detection, blue screens
> (cut camera cable), I/Os 1 to 24, and much more...
>
> I save all events in a database (sqlite3) with an INSERT request
> directly after their detection. Here is the table I use:
>
> CREATE TABLE events(
>     instant DATETIME,
>     instant_msec SMALLINT UNSIGNED,
>     events_int0 INT UNSIGNED,
>     events_int1 INT UNSIGNED,
>     events_int2 INT UNSIGNED,
>     events_int3 INT UNSIGNED
> );
>
> As you can see I don't use a PRIMARY KEY or an INDEX...
> Each bit of events_intx is a particular event detected (1 = detected).
>
> When a video is too old, I delete it, but then I must also delete all
> events detected during that video recording. To do that I use the SQL
> request described hereunder:
>
> DELETE FROM events
> WHERE (instant < datetime('2008-06-16 10:21:55.806'))
>    OR ((instant = datetime('2008-06-16 10:21:55.806'))
>        AND (instant_msec <= 806));
>
> Sometimes a 'client' wants some information about "what happened
> yesterday" and he sends a request such as:
>
> SELECT * FROM events
> WHERE ((instant < datetime('2008-06-16 23:59:59.999'))
>     OR ((instant = datetime('2008-06-16 23:59:59.999'))
>         AND (instant_msec <= 999)))
>   AND ((instant > datetime('2008-06-16 00:00:00.000'))
>     OR ((instant = datetime('2008-06-16 00:00:00.000'))
>         AND (instant_msec >= 0)))
> ORDER BY instant, instant_msec;
>
> All this works well for a 'light' database, but I have forced the
> application to detect a lot of motion detection events (each second
> during 10 days).
> Thus, each second, my application uses INSERT to add a row to the
> event table. Now my database's size is about 120 MB... Very big...
> When the DELETE (or SELECT) request starts, it takes about 75 seconds
> to execute (very, very loooong)!!!
>
> How can I reduce this time for a big database?
> I must have a delay that doesn't go above 40 msec to be efficient.
> Do I have to split my database into several small databases to reduce
> this delay?
> Do I have to create another table and use PRIMARY KEY, INDEX, etc.?
>

I would recommend that you change your database schema and combine your
timestamp string and subsecond integer field into a single floating
point julian day number that provides both pieces of information in
units of days (since the julian day epoch). The integer portion is the
number of days; the fractional part is the point within the day, to very
high resolution (because SQLite uses 64-bit double precision floating
point values). See http://www.sqlite.org/cvstrac/wiki?p=DateAndTimeFunctions
for more info on the julian date support.

    CREATE TABLE events(
        time REAL,  -- julian day number
        events_int0 INT UNSIGNED,
        events_int1 INT UNSIGNED,
        events_int2 INT UNSIGNED,
        events_int3 INT UNSIGNED
    );

You may want to combine your event ints into a blob field as well if you
really need more than 64 event bits (since SQLite uses 64-bit integer
values).

With the table using real timestamps, you can create an index on the
time field to greatly speed up your time-based comparisons:

    CREATE INDEX event_time ON events(time);

Now your queries can be simplified to:

    DELETE FROM events WHERE time < julianday('2008-06-16 10:21:55.806');

and

    SELECT * FROM events
    WHERE time BETWEEN julianday('2008-06-16 00:00:00.000')
                   AND julianday('2008-06-16 23:59:59.999');

These queries will use the index on time to quickly select the records
of interest and ignore all the other records.
HTH
Dennis Cote
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users