Re: [sqlite] Please help. Read only process being blocked by writer process.
Alejandro Martínez wrote:
> Thanks Richard, that makes perfect sense.
>
> Thanks Howard, but I don't know what you are talking about, so I will
> google "copy-on-write".

See the papers and presentations here: http://www.symas.com/mdb
Source code for MDB is here: http://gitorious.org/mdb

> Григорий Григоренко: Interesting! I'll consider this approach if at some
> point I'm able to go "scorched earth" and start this from scratch, but at
> this point I would have to change too much stuff.
>
> I will go with the WAL solution for now. I'm just worried a buggy process
> could hang while holding a prepared statement open and cause the WAL file
> to grow forever, then causing errors in other processes.

--
Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Please help. Read only process being blocked by writer process.
Thanks Richard, that makes perfect sense.

Thanks Howard, but I don't know what you are talking about, so I will google
"copy-on-write".

Григорий Григоренко: Interesting! I'll consider this approach if at some
point I'm able to go "scorched earth" and start this from scratch, but at
this point I would have to change too much stuff.

I will go with the WAL solution for now. I'm just worried a buggy process
could hang while holding a prepared statement open and cause the WAL file to
grow forever, then causing errors in other processes.
Re: [sqlite] Please help. Read only process being blocked by writer process.
Hi,

    CREATE TABLE rev(current);
    INSERT INTO rev VALUES(0);
    CREATE TABLE data(..., revision);

Readers:

    SELECT * FROM data JOIN rev ON revision = current WHERE ... ;
    -- or read "SELECT current FROM rev" into a variable and pass its
    -- value to "SELECT * FROM data WHERE revision = ?"

Writer:

    -- insert the new records; the old records remain
    SELECT current + 1 FROM rev;  -- into some variable

    BEGIN;
    INSERT INTO data(..., revision) VALUES (..., current);
    INSERT INTO data(..., revision) VALUES (..., current);
    -- ... repeat 1000 times ...
    COMMIT;

    -- repeat the insert batches until all records are inserted;
    -- 1000 is a batch size taken by a wild guess :)

    -- now switch the readers to the new records
    UPDATE rev SET current = current + 1;
    -- no need for BEGIN/COMMIT; this runs inside its own transaction anyway

    -- now delete the old records, again incrementally;
    -- repeat this block until rows stop being deleted
    BEGIN;
    SELECT Max(rowid) - 1000 FROM data;  -- into variable MaxId
    DELETE FROM data WHERE revision = current - 1 AND rowid > MaxId;
    COMMIT;

    -- done: only the new records are left in the table; repeat the steps
    -- above to insert the next batch of records

Regards,
GG
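A runnable sketch of this generation-switch pattern, using Python's stdlib
sqlite3 module. It is my own simplification, not GG's exact code: old rows
are deleted in rowid batches via a LIMIT subquery rather than the
Max(rowid) arithmetic above, and the table and variable names are
illustrative.

```python
import sqlite3

db = sqlite3.connect(":memory:", isolation_level=None)
db.executescript("""
    CREATE TABLE rev(current);
    INSERT INTO rev VALUES(0);
    CREATE TABLE data(payload, revision);
""")

def refill(rows, batch=1000):
    (current,) = db.execute("SELECT current FROM rev").fetchone()
    new = current + 1
    # 1. Insert the new generation in small transactions; the old rows
    #    stay visible to readers the whole time.
    for i in range(0, len(rows), batch):
        db.execute("BEGIN")
        db.executemany("INSERT INTO data VALUES (?, ?)",
                       [(r, new) for r in rows[i:i + batch]])
        db.execute("COMMIT")
    # 2. Flip readers to the new generation in one tiny transaction.
    db.execute("UPDATE rev SET current = ?", (new,))
    # 3. Delete the old generation incrementally, one batch at a time.
    while db.execute("DELETE FROM data WHERE rowid IN "
                     "(SELECT rowid FROM data WHERE revision < ? LIMIT ?)",
                     (new, batch)).rowcount:
        pass

refill(["row%d" % i for i in range(2500)])
refill(["row%d" % i for i in range(1800)])  # replaces the first generation
n = db.execute(
    "SELECT count(*) FROM data JOIN rev ON revision = current").fetchone()[0]
print(n)  # 1800
```

Readers joining against `rev` never observe a half-loaded generation: they
see the old rows until the single-row UPDATE of `rev` commits, then the new
ones.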
Re: [sqlite] Please help. Read only process being blocked by writer process.
Richard Hipp wrote:
> On Wed, Nov 28, 2012 at 9:58 AM, Alejandro Martínez wrote:
>> And wouldn't PRAGMA read_uncommitted achieve the effect I was expecting?
>> Or does that cause other problems?
>
> read_uncommitted only works if both the reader and writer are in the same
> process and are using shared cache.
>
>> Reading "old" or inconsistent data would not be a problem for me (as long
>> as it is not corrupted data).
>
> That's really the crux of the problem. Suppose the writer needs to split a
> btree node, for example. To do this, the writer would typically write two
> new child pages and then overwrite the old btree page with the parent
> page. If the writer were part way through this sequence when the reader
> came along, the reader would see corrupt data, since the btree structure
> would be only partially updated and hence not well-formed.

That's one of the fundamental problems with update-in-place DB designs
(aside from their crash vulnerability and expensive crash recovery).
MVCC via copy-on-write has none of these issues.

--
Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
Re: [sqlite] Please help. Read only process being blocked by writer process.
On Wed, Nov 28, 2012 at 9:58 AM, Alejandro Martínez wrote:
> And wouldn't PRAGMA read_uncommitted achieve the effect I was expecting?
> Or does that cause other problems?

read_uncommitted only works if both the reader and writer are in the same
process and are using shared cache.

> Reading "old" or inconsistent data would not be a problem for me (as long
> as it is not corrupted data).

That's really the crux of the problem. Suppose the writer needs to split a
btree node, for example. To do this, the writer would typically write two new
child pages and then overwrite the old btree page with the parent page. If
the writer were part way through this sequence when the reader came along,
the reader would see corrupt data, since the btree structure would be only
partially updated and hence not well-formed.
--
D. Richard Hipp
d...@sqlite.org
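Richard's shared-cache caveat can be seen in a few lines of Python (stdlib
sqlite3). This is a sketch: it assumes the underlying SQLite build supports
shared cache (the default), and the URI name is made up.

```python
import sqlite3

# Two connections in one process sharing a single page cache.
uri = "file:ru_demo?mode=memory&cache=shared"
w = sqlite3.connect(uri, uri=True, isolation_level=None)
r = sqlite3.connect(uri, uri=True, isolation_level=None)
r.execute("PRAGMA read_uncommitted=1")

w.execute("CREATE TABLE t(x)")      # committed: reader can see the schema
w.execute("BEGIN")
w.execute("INSERT INTO t VALUES (1)")  # still uncommitted

# The dirty read below works only because both connections live in the
# same process and share one cache; across separate processes there is
# no read_uncommitted option at all.
dirty = r.execute("SELECT count(*) FROM t").fetchone()[0]
w.execute("COMMIT")
print(dirty)  # 1
```

Without the `read_uncommitted` pragma the same SELECT would fail with
SQLITE_LOCKED while the write transaction is open.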
Re: [sqlite] Please help. Read only process being blocked by writer process.
And wouldn't PRAGMA read_uncommitted achieve the effect I was expecting? Or
does that cause other problems?

Reading "old" or inconsistent data would not be a problem for me (as long as
it is not corrupted data).
Re: [sqlite] Please help. Read only process being blocked by writer process.
On Wed, Nov 28, 2012 at 7:09 AM, Alejandro Martínez wrote:
> Ok, I will probably do that. Thank you.
>
> But I'd like to know: why doesn't this work without WAL? A read-only
> operation shouldn't block, right?

If you are not running WAL, then the database is updated directly. That means
that there can be no readers active while a write is in progress, because the
readers would see an incomplete and uncommitted transaction.

--
D. Richard Hipp
d...@sqlite.org
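Richard's point is easy to reproduce with Python's stdlib sqlite3 module.
The sketch below uses BEGIN EXCLUSIVE to simulate the window in which a
rollback-journal writer holds the database lock; in WAL mode the same reader
goes through untouched. File and function names are illustrative.

```python
import os
import sqlite3
import tempfile

def read_during_write(journal_mode):
    path = os.path.join(tempfile.mkdtemp(), "demo.db")
    w = sqlite3.connect(path, isolation_level=None)
    w.execute("PRAGMA journal_mode=%s" % journal_mode).fetchone()
    w.execute("CREATE TABLE t(x)")
    w.execute("BEGIN EXCLUSIVE")           # writer holds the write lock
    w.execute("INSERT INTO t VALUES (1)")  # uncommitted change in flight
    r = sqlite3.connect(path, timeout=0.2) # like sqlite3_busy_timeout(.., 200)
    try:
        r.execute("SELECT count(*) FROM t").fetchone()
        result = "read ok"
    except sqlite3.OperationalError:       # SQLITE_BUSY after the timeout
        result = "database is locked"
    w.execute("COMMIT")
    return result

print(read_during_write("DELETE"))  # database is locked
print(read_during_write("WAL"))     # read ok
```

In WAL mode the reader is never blocked: it simply reads the last committed
snapshot, ignoring the writer's in-flight transaction.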
Re: [sqlite] Please help. Read only process being blocked by writer process.
Ok, I will probably do that. Thank you.

But I'd like to know: why doesn't this work without WAL? A read-only
operation shouldn't block, right?

And regarding the commit failing, does that need a busy-timeout handler too?
From the documentation I thought it would just wait until all readers are
done reading and then write, and that further incoming readers would wait for
those 5 seconds. And I was expecting no real wait, as the commit should be
pretty quick, right?

I'm puzzled.
Re: [sqlite] Please help. Read only process being blocked by writer process.
On Wed, Nov 28, 2012 at 6:56 AM, Alejandro Martínez wrote:
> Is that the only way?
>
> When I had done that in the past, the WAL file grew constantly and I am
> afraid it could fill the hard disk.
>
> That could happen if, say, one of the reading processes doesn't properly
> sqlite3_reset a prepared statement after stepping it, right?

Correct. The WAL file will grow until a checkpoint resets it, and a
checkpoint cannot reset the WAL file while there is a pending transaction. So
if you have a statement holding a transaction open, the WAL file will grow
without bound.

The solution there is to not hold read transactions open indefinitely. Call
sqlite3_reset() when you are done with a statement so that its implied read
transaction will close.

--
D. Richard Hipp
d...@sqlite.org
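The grow-until-checkpoint behaviour is easy to observe from Python's stdlib
sqlite3 module. A minimal sketch (row counts and blob sizes are arbitrary;
the explicit BEGIN plays the role of a stepped statement that is never
sqlite3_reset):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "cache.db")
w = sqlite3.connect(path, isolation_level=None)
w.execute("PRAGMA journal_mode=WAL").fetchone()
w.execute("CREATE TABLE t(x)")

# A "buggy" reader: it starts a read transaction and never finishes it.
r = sqlite3.connect(path, isolation_level=None)
r.execute("BEGIN")
r.execute("SELECT count(*) FROM t").fetchone()  # read transaction now open

# The writer keeps committing; no checkpoint can reset the WAL while the
# reader's transaction pins the old snapshot, so the -wal file only grows.
for _ in range(500):
    w.execute("INSERT INTO t VALUES (randomblob(1000))")
grown = os.path.getsize(path + "-wal")

r.execute("COMMIT")                             # reader finally lets go
w.execute("PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
reset = os.path.getsize(path + "-wal")
print(grown > 0, reset)  # True 0
```

Once the pending read transaction closes, a TRUNCATE checkpoint copies the
WAL into the database and truncates the -wal file back to zero bytes.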
Re: [sqlite] Please help. Read only process being blocked by writer process.
Is that the only way?

When I had done that in the past, the WAL file grew constantly and I am
afraid it could fill the hard disk.

That could happen if, say, one of the reading processes doesn't properly
sqlite3_reset a prepared statement after stepping it, right?

Thank you for your quick answer.

On Wed, Nov 28, 2012 at 9:52 AM, Richard Hipp wrote:
> PRAGMA journal_mode=WAL
Re: [sqlite] Please help. Read only process being blocked by writer process.
On Wed, Nov 28, 2012 at 6:47 AM, Alejandro Martínez wrote:
> I really need these queries not to lock or fail.

Enable WAL mode: "PRAGMA journal_mode=WAL".
http://www.sqlite.org/wal.html

--
D. Richard Hipp
d...@sqlite.org
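One convenient property, sketched below with Python's stdlib sqlite3 module,
is that WAL is a persistent setting of the database file: the writer can set
it once, and every later connection, including read-only ones, finds it
already in effect.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "cache.db")
w = sqlite3.connect(path)
w.execute("CREATE TABLE t(x)")
mode_set = w.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode_set)   # wal
w.close()

# A fresh read-only connection: no pragma needed, the file is already WAL.
r = sqlite3.connect("file:%s?mode=ro" % path, uri=True)
mode_seen = r.execute("PRAGMA journal_mode").fetchone()[0]
print(mode_seen)  # wal
```

So in the C setup from the thread, only the refilling process needs to run
the pragma; the SQLITE_OPEN_READONLY readers pick WAL up automatically.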
Re: [sqlite] Please help. Read only process being blocked by writer process.
Extra info: it's running on SunOS 5.10 Generic_144488-11 sun4u sparc
SUNW,Sun-Fire-V490.
[sqlite] Please help. Read only process being blocked by writer process.
I have one process that every 30 minutes refills several tables in this
manner:

    sqlite3_open_v2(CACHEDB_PATH, _conn, SQLITE_OPEN_CREATE |
                    SQLITE_OPEN_READWRITE, NULL)

For each table:

    begin deferred transaction;
    delete from [table];
    insert into table ...
    insert into table ...
    insert into table ...
    insert into table ...
    [up to 180,000 rows]
    commit;

and sometimes the commit fails, so it is retried. (Why would it fail? It's
the only writer.)

And then I have many other processes that open that SQLite database read
only:

    sqlite3_open_v2(_dbfile, _conn, SQLITE_OPEN_READONLY, NULL)
    sqlite3_busy_timeout(sqcache_conn, 5000)

These processes create very simple prepared statements to query those tables.

The big problem I'm having is that when I step these prepared statements,
they lock for 5 seconds and then fail.

I put that busy timeout in just for completeness, because I wasn't expecting
any locking for a read-only query.

I really need these queries not to lock or fail.

What am I doing wrong? Any suggestions?

Thank you,
Alejandro
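For reference, the refill pattern described above boils down to something
like the following Python stdlib-sqlite3 sketch (table and variable names
are illustrative, not the poster's actual code):

```python
import sqlite3

def refill(conn, table, rows):
    # One deferred transaction per table: delete everything, reinsert all.
    # Readers on other connections cannot see the half-empty table, because
    # nothing is visible to them until COMMIT succeeds.
    cur = conn.cursor()
    cur.execute("BEGIN")                     # deferred by default
    cur.execute("DELETE FROM %s" % table)    # table name is trusted here
    cur.executemany("INSERT INTO %s VALUES (?)" % table, rows)
    cur.execute("COMMIT")

db = sqlite3.connect(":memory:", isolation_level=None)
db.execute("CREATE TABLE cache(v)")
refill(db, "cache", [(i,) for i in range(1000)])
refill(db, "cache", [(i,) for i in range(500)])  # previous rows replaced
count = db.execute("SELECT count(*) FROM cache").fetchone()[0]
print(count)  # 500
```

The COMMIT is exactly where the contention in this thread arises: in
rollback-journal mode it must wait for every active reader before it can
take the exclusive lock and write the pages back.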