hi,

thanks for the quick answer :-)
I always thought there is an index on "rowid" anyway? That's the reason why I implemented it like this... I need to keep only a limited amount of data in the database (e.g. max 100000 entries or max 10 days).
Any idea how I could implement it another way?
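Maybe something like this would work instead? (Just an untested sketch on my side: it assumes rowids are never reused, and the age-based delete assumes the "date" column holds sortable timestamps, e.g. ISO-8601 text, and would need its own index, e.g. CREATE INDEX msgs_date_idx ON msgs(date), to stay fast.)

   CREATE TRIGGER delete_log AFTER INSERT ON msgs BEGIN
     -- cap by count: rowid is the table's b-tree key, so this
     -- range delete does not need a full scan
     DELETE FROM msgs WHERE rowid <= new.rowid - 100000;
     -- cap by age: relies on the "date" format and index above
     DELETE FROM msgs WHERE date < datetime('now', '-10 days');
   END;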

You're right, the PC has a lot of other tasks running!
E.g. there is an MS SQL Server installed, and I don't know exactly how much disk I/O that server is doing!
Thanks a lot!

cu, gg




Igor Tandetnik wrote:
Günter Greschenz <[EMAIL PROTECTED]> wrote:
I've created this table and trigger (no index!):
   CREATE TABLE msgs (date ntext, type ntext, dir ntext,
                      s integer, f integer, msg ntext);
   CREATE TRIGGER delete_log AFTER INSERT ON msgs BEGIN
     DELETE FROM msgs WHERE rowid % 100000 = new.rowid % 100000
                        AND rowid != new.rowid;
   END;

My database is filled with about 40000 rows.

The condition in the trigger forces a full scan of all the records in the msgs table: the modulo expression cannot be satisfied from the rowid b-tree, so on every insert you scan all the previously inserted records. This means that inserting N records takes O(N^2) time.
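You can see this with EXPLAIN QUERY PLAN (the exact output wording varies by SQLite version, but roughly):

   EXPLAIN QUERY PLAN
     DELETE FROM msgs WHERE rowid % 100000 = 1 AND rowid != 1;
   -- reports a full table scan, e.g. "SCAN msgs"
   EXPLAIN QUERY PLAN
     DELETE FROM msgs WHERE rowid <= 12345;
   -- a plain range condition can use the rowid b-tree, e.g.
   -- "SEARCH msgs USING INTEGER PRIMARY KEY (rowid<?)"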

You should reconsider your design: what you have now can't be improved.

The problem now is: sometimes (only sometimes!!!) inserting into this table is very slow:

My guess is, it's just an artefact of caching, or perhaps some other application doing disk I/O at the same time. Realize that, because of that full scan, on every insert SQLite has to read the whole 256MB file into memory. If you have enough memory, it might stay in the cache, which would speed up subsequent inserts. But under memory pressure the cache will be discarded.

Igor Tandetnik

-----------------------------------------------------------------------------
To unsubscribe, send email to [EMAIL PROTECTED]
-----------------------------------------------------------------------------
