I have an application that writes data to a database from a single thread.
This writing can result in millions of inserts to the database within a
single transaction (started with BEGIN TRANSACTION).  I also have other
threads and processes that read the same database but never write to it and
never start transactions; they just issue SELECT statements against it.  In
the writing application, one thread originally opens the database, then
hands the connection off to the writer thread and never touches it again.
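For concreteness, the arrangement looks roughly like this.  This is only a
minimal sketch using Python's sqlite3 module for illustration (the real
application uses a different binding); the table name, row count, and
database path are invented for the example:

```python
import os
import sqlite3
import tempfile

DB_PATH = os.path.join(tempfile.mkdtemp(), "app.db")  # hypothetical location

# One thread opens the connection and hands it to the writer thread;
# check_same_thread=False permits that hand-off, and isolation_level=None
# disables implicit transactions so BEGIN/COMMIT are explicit.
writer = sqlite3.connect(DB_PATH, isolation_level=None, check_same_thread=False)
writer.execute(
    "CREATE TABLE IF NOT EXISTS data (id INTEGER PRIMARY KEY, payload TEXT)")

writer.execute("BEGIN TRANSACTION")
for i in range(1000):           # the real application does millions of these
    writer.execute("INSERT INTO data (payload) VALUES (?)", ("row %d" % i,))

# Readers: separate connections (or processes) that never write and never
# begin transactions; the timeout= argument stands in for a busy handler.
reader = sqlite3.connect(DB_PATH, timeout=5.0)

# While the writer holds only a RESERVED lock, reads still succeed and see
# the last committed state (no rows yet):
count_during = reader.execute("SELECT COUNT(*) FROM data").fetchall()[0][0]

writer.execute("COMMIT")        # in reality this happens much later
```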

The issue I am seeing is that when the transaction starts, everything
behaves as expected: the writer happily issues inserts on its connection,
and the modified data goes wherever isolated modification data goes, which
I presume is the journal file.  Initially, other processes and threads can
freely read the database, and all is good.  Fast-forward to when a large
number of inserts have occurred.  It seems that once the page cache fills
up, SQLite tries to find some free pages, which causes an exclusive lock to
be obtained within the context of the transaction, and that lock is not
relinquished until the transaction completes.  This causes some pain,
because readers that were happily able to read from the original database
are now stopped up until the potentially long-running transaction
completes: they either get "database is locked" errors or, in my case,
their infinitely waiting busy handlers do just that.
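If my reading is right, the effect can be reproduced by shrinking the page
cache so the spill happens after only a few inserts.  Here is a sketch,
again using Python's sqlite3 against a throwaway file; the 10-page cache
and row sizes are arbitrary values chosen only to force the spill quickly:

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "spill.db")  # throwaway database

writer = sqlite3.connect(db, isolation_level=None)
writer.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
writer.execute("PRAGMA cache_size = 10")   # tiny page cache: ~10 pages

reader = sqlite3.connect(db, timeout=0.1)  # fail fast instead of waiting

writer.execute("BEGIN TRANSACTION")
writer.execute("INSERT INTO t (payload) VALUES (?)", ("x" * 100,))

# Early in the transaction the writer holds only a RESERVED lock, so the
# reader can still take a SHARED lock and see the committed state:
count_before = reader.execute("SELECT COUNT(*) FROM t").fetchall()[0][0]

# Now overflow the tiny page cache; SQLite spills dirty pages back to the
# database file, which requires escalating to an EXCLUSIVE lock...
for i in range(5000):
    writer.execute("INSERT INTO t (payload) VALUES (?)", ("x" * 1000,))

# ...and the reader is now shut out until COMMIT:
try:
    reader.execute("SELECT COUNT(*) FROM t").fetchall()
    blocked = False
except sqlite3.OperationalError:   # "database is locked"
    blocked = True

writer.execute("COMMIT")
```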

My question is: am I assessing the situation correctly, and if so, is there
anything I can do to avoid this rather nasty situation?  I would rather not
take the periodic-commit approach, and increasing the page cache size could
use more memory than I can spare.  Disk usage, however, is not a problem; I
can use as much as necessary.

Thanks very much.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users