Hi Raphaël,

Since you are dealing with intervals of numbers, perhaps the recent
discussion of rtree indexing will help.
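
Just to sketch the idea (the table and column names events_rt, ts_min and
ts_max below are made up, and this assumes your SQLite build includes the
rtree module): a one-dimensional rtree can index each event as a point
interval. Note that rtree coordinates are stored as 32-bit values, so use
a compact time unit and origin (for example, seconds since the start of
your recording period) rather than full millisecond timestamps.

CREATE VIRTUAL TABLE events_rt USING rtree(
   id,       -- rowid shared with the events table
   ts_min,   -- event time in your chosen unit/origin
   ts_max    -- same value as ts_min for a point-in-time event
);

-- Fetching one day is then an indexed interval test plus a join back:
SELECT e.*
  FROM events_rt r JOIN events e ON e.rowid = r.id
 WHERE r.ts_min >= 0 AND r.ts_max <= 86399;   -- first day, in seconds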

Also, splitting instant into the units you query most often may help;
for example, separate instant_date and instant_time columns.
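
Roughly like this (only a sketch; the exact column and index names are
just examples), together with an ordinary index over the split columns so
that range DELETEs and SELECTs no longer scan the whole table:

CREATE TABLE events(
   instant_date   DATE,              -- e.g. '2008-06-16'
   instant_time   TIME,              -- e.g. '10:21:55'
   instant_msec   SMALLINT UNSIGNED,
   events_int0    INT UNSIGNED,
   events_int1    INT UNSIGNED,
   events_int2    INT UNSIGNED,
   events_int3    INT UNSIGNED
);

CREATE INDEX events_instant_idx
    ON events(instant_date, instant_time, instant_msec);

-- "What happened yesterday" becomes an equality test on the date part:
SELECT * FROM events
 WHERE instant_date = '2008-06-16'
 ORDER BY instant_time, instant_msec;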

Another rough approach is to set PRAGMA synchronous = OFF and/or
PRAGMA journal_mode = OFF.
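
Both are settings you issue once after opening the database connection;
they speed up writes at the cost of lost or corrupted data if the machine
or process dies mid-write:

PRAGMA synchronous = OFF;    -- don't wait for the OS to flush to disk
PRAGMA journal_mode = OFF;   -- keep no rollback journal at all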

Regards,
/Mike/

Raphaël KINDT wrote:
> Hi everybody,
>
> I'm working on a video monitoring program that uses a frame grabber (12
> cameras).
> This program records videos and detects various (input/output) events such as:
> motion detection, blue screens (cut camera cable), I/Os 1 to 24, and much
> more...
>
> I save all events in a database (sqlite3) with an INSERT statement immediately
> after they are detected.
> Here is the table I use:
>
> CREATE TABLE events(
>    instant        DATETIME,
>    instant_msec   SMALLINT UNSIGNED,
>    events_int0    INT UNSIGNED,
>    events_int1    INT UNSIGNED,
>    events_int2    INT UNSIGNED,
>    events_int3    INT UNSIGNED
> );
>
> As you can see, I don't use a PRIMARY KEY or an INDEX...
> Each bit of events_intx represents a particular detected event (1 = detected).
>
> When a video is too old, I delete it.
> But then I must also delete all events detected while that video was being
> recorded.
> To do that I use the SQL statement below:
>
> DELETE FROM events WHERE ( (instant < datetime('2008-06-16 10:21:55.806'))
> OR ((instant = datetime('2008-06-16 10:21:55.806')) AND (instant_msec <=
> 806)) );
>
> Sometimes a 'client' wants to know "what happened yesterday"
> and sends a query such as:
>
> SELECT * FROM events WHERE ( (instant < datetime('2008-06-16 23:59:59.999'))
> OR ((instant = datetime('2008-06-16 23:59:59.999')) AND (instant_msec <=
> 999)) ) AND ( (instant > datetime('2008-06-16 00:00:00.000')) OR ((instant =
> datetime('2008-06-16 00:00:00.000')) AND (instant_msec >= 0)) ) ORDER BY
> instant, instant_msec;
>
> All this works well for a 'light' database.
> But I have forced the application to detect a lot of motion detection events
> (one every second for 10 days).
> Thus, every second, my application uses INSERT to add a row to the events
> table.
> Now my database is about 120 MB... Very big...
> When the DELETE (or SELECT) statement runs, it takes about 75 seconds to
> execute (very, very loooong)!!!
>
> How can I reduce this time for a big database?
> To be efficient, the delay must stay under 40 msec.
> Do I have to split my database into several small databases to reduce this
> delay?
> Do I have to create another table and use a PRIMARY KEY, an INDEX, etc.?
>
> Perhaps I could create one database in memory and another in a file.
> The in-memory database would be used for searching and the file database
> would serve as the backup.
> The two databases would have to be kept synchronized...
>
>
> Thanks for any advice.
>
>
> KINDT Raphaël
>
