> According to http://www.sqlite.org/lang_transaction.html, in deferred
> mode, sqlite creates a RESERVED lock on the DB when it starts a write
> operation, but this should allow other SHARED locks at the same time,
> implying other processes should still be able to query the database
> during the write transaction.
Also see here http://www.sqlite.org/lockingv3.html - if the page cache
fills up, a deferred transaction takes an EXCLUSIVE lock and spills some
changes to disk even before the actual commit.

> Does this sound like correct behaviour, or not? If correct, should my
> second process be getting SQLITE_BUSY returned from sqlite3_step()?

Yes, everything sounds like correct behavior. And if you have set
busy_timeout to some very big value, then sqlite3_step() won't return
SQLITE_BUSY to you until that very big time has passed.

> (Note that the first process is adding data to the same table that the
> second process wants to do a SELECT query on).

This actually doesn't matter if you access the database from different
processes.

> If this is all correct... what would be the best way to allow me to add
> (or delete!) lots of records from a table in one process whilst still
> allowing the other process to query that table's data during the insert
> time?

You can split your inserts into smaller transactions - e.g. 1000 records
per transaction; maybe 10000 will work well too. This will make the total
insert time longer but will make things better for readers.

Pavel

On Tue, May 25, 2010 at 11:24 AM, Nick Shaw <nick.s...@citysync.co.uk> wrote:
> Hi all,
>
> I've got a database that is accessed by two processes on the same PC.
> When I add a large number of records to a table from one process (all
> records wrapped in a BEGIN DEFERRED / END so it's committed to disk all
> at once; and we're talking about adding 500,000 recs to a DB containing
> around 3 million recs, so it takes some time), the other process gets
> stuck in the sqlite3_step() call when doing a SELECT call. It doesn't
> return SQLITE_BUSY, it just sits there waiting. It only continues after
> the first process calls END to complete the transaction.
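The batching advice above can be sketched roughly as follows. This uses
Python's built-in sqlite3 module rather than the C API the thread
discusses, and the table name `t`, column `val`, batch size, and the
5-second busy_timeout are all illustrative assumptions, not values from
the thread:

```python
# Sketch: commit every BATCH_SIZE rows instead of one huge transaction,
# so readers in other processes get a window between batches.
import sqlite3

BATCH_SIZE = 1000  # rows per transaction; 10000 may work well too

def batched_insert(db_path, rows):
    # isolation_level=None puts the connection in autocommit mode so we
    # can issue BEGIN/COMMIT ourselves, mirroring the explicit
    # BEGIN DEFERRED / END pattern from the original question.
    conn = sqlite3.connect(db_path, isolation_level=None)
    # Wait up to 5s for a lock instead of failing immediately with
    # SQLITE_BUSY (equivalent to sqlite3_busy_timeout() in the C API).
    conn.execute("PRAGMA busy_timeout = 5000")
    cur = conn.cursor()
    for i in range(0, len(rows), BATCH_SIZE):
        cur.execute("BEGIN DEFERRED")
        cur.executemany("INSERT INTO t(val) VALUES (?)",
                        [(v,) for v in rows[i:i + BATCH_SIZE]])
        cur.execute("COMMIT")  # lock is released here, readers can run
    conn.close()
```

The trade-off is exactly as stated above: more commits mean more fsyncs
and a longer total load time, but the RESERVED/EXCLUSIVE lock is held
only for the duration of each small batch.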
>
> According to http://www.sqlite.org/lang_transaction.html, in deferred
> mode, sqlite creates a RESERVED lock on the DB when it starts a write
> operation, but this should allow other SHARED locks at the same time,
> implying other processes should still be able to query the database
> during the write transaction.
>
> Does this sound like correct behaviour, or not? If correct, should my
> second process be getting SQLITE_BUSY returned from sqlite3_step()?
>
> (Note that the first process is adding data to the same table that the
> second process wants to do a SELECT query on).
>
> If this is all correct... what would be the best way to allow me to add
> (or delete!) lots of records from a table in one process whilst still
> allowing the other process to query that table's data during the insert
> time? An in-memory database is obviously an option for the second
> process, but that has an impact on RAM usage and requires the second
> process to know when the database has changed on disk.
>
> Thanks,
> Nick.
>
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users