Re: SQLite Concurrency Issue
On Dec 23, 2010, at 10:52 PM, David Nicol wrote:

> Are you trying to use Apache pooling within mod_perl? I tried that for a
> while and gave up due to lock contention issues, switching to a paradigm
> where I opened, transacted, closed without caching any database stuff, and
> my locking issues went away. This means, among other things, no persistent
> prepared statements.

Nope. This is a standalone script that runs on a cron job. It's just that sometimes the previous run is still going when a new run starts. I'm porting it to PostgreSQL. That will end the issue.

David
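The open-transact-close pattern Nicol describes above can be sketched briefly. This illustration uses Python's bundled sqlite3 module rather than DBD::SQLite, purely because it wraps the same library; the function name `record_feed_item` and the schema are made up for the example:

```python
import sqlite3

def record_feed_item(db_path, title):
    # Open a fresh connection for this one unit of work: no cached
    # handle, no persistent prepared statements.
    conn = sqlite3.connect(db_path, timeout=60)
    try:
        with conn:  # one transaction; commits on success, rolls back on error
            conn.execute("CREATE TABLE IF NOT EXISTS items (title TEXT)")
            conn.execute("INSERT INTO items (title) VALUES (?)", (title,))
    finally:
        conn.close()  # drop all locks before anything else runs
```

Because every call releases its locks as soon as it finishes, two overlapping runs contend only for the brief duration of each transaction rather than for the lifetime of a cached handle.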
Re: SQLite Concurrency Issue
I don't believe a lockfile would be necessary. The database serves as your lockfile. You simply re-start the transaction or re-issue the last SQL statement until it goes through or you feel like giving up. However, most RDBMSs will detect unresolvable deadlocks and abort a transaction immediately rather than waiting to time out. The section "Transaction and Database Locking" in the DBD::SQLite manual on CPAN confirms this with: busy_timeout doesn't help in this case.

You say you've set sqlite_use_immediate_transaction, but did you set it for just the transactions you've SEEN abort? This attribute may have to be set for ALL transactions, including the SELECTs.

David E. Wheeler <da...@kineticode.com> wrote on 12/22/2010 06:45 PM to ericbamba...@discover.com, cc DBI Users Mailing List <dbi-users@perl.org>:

> On Dec 22, 2010, at 3:30 PM, ericbamba...@discover.com wrote:
>
>> Maybe an obvious question, but can you catch SQLITE_BUSY and just loop
>> and retry in your application code? Is there any reason not to retry
>> after a second or so? Such application retry logic is pretty standard for
>> transaction deadlock issues.
>
> I guess I could do that, but I'm going to have to do quite a bit of extra
> work to add the retry stuff. I might be able to use a lockfile, as well.
> I'm hating both of these solutions.
>
> David
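The retry-until-it-goes-through approach can be sketched as follows. This is an illustration in Python's sqlite3 (the helper name `with_retry` and its parameters are invented for the example); the same shape applies in DBI with an eval block around the transaction:

```python
import sqlite3
import time

def with_retry(db_path, work, attempts=5, delay=1.0):
    """Run work(conn) in one transaction, retrying on SQLITE_BUSY."""
    for attempt in range(attempts):
        conn = sqlite3.connect(db_path, timeout=0)  # fail fast; we retry ourselves
        try:
            with conn:  # commits on success, rolls back on error
                return work(conn)
        except sqlite3.OperationalError as e:
            if "locked" not in str(e) or attempt == attempts - 1:
                raise  # not SQLITE_BUSY, or out of attempts
            time.sleep(delay)  # back off, then re-issue the whole transaction
        finally:
            conn.close()
```

Note that the retry re-runs the entire transaction from the top, not just the failed statement, which keeps the logic correct even when the abort happened mid-transaction.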
Re: SQLite Concurrency Issue
On Dec 23, 2010, at 6:53 AM, ericbamba...@discover.com wrote:

> You say you've set sqlite_use_immediate_transaction, but did you set it
> for just the transactions you've SEEN abort? This attribute may have to be
> set for ALL transactions, including the SELECTs.

No, all transactions.

David
Re: SQLite Concurrency Issue
Are you trying to use Apache pooling within mod_perl? I tried that for a while and gave up due to lock contention issues, switching to a paradigm where I opened, transacted, closed without caching any database stuff, and my locking issues went away. This means, among other things, no persistent prepared statements.

--
"The aeroplane is fatally defective. It is merely a toy—a sporting play-thing. It can never become commercially practical." -- Nikola Tesla
SQLite Concurrency Issue
My fellow database gurus,

I love SQLite, but an issue with concurrency is driving me absolutely batshit.

I have a program that runs every few minutes, updating an SQLite database from feeds. There can be some overlap in the runtime: a new instance may start while a previous instance is still running. In such a case, I nearly always get the dreaded "database is locked" error -- SQLITE_BUSY.

Here's what I've done to try to avoid this error:

* sqlite_use_immediate_transaction = 1
* PRAGMA journal_mode = WAL
* $dbh->sqlite_busy_timeout(60_000);

None of these has made a difference. Now, I recognize that there can be some lock contention, but I certainly would think that one process would wait for a bit for a lock to release before it gave up the ghost. But when I run this stuff, it fails *very* quickly -- the busy timeout obviously is not being used.

So my question is: what other tactics can be used to improve the lock concurrency situation with SQLite? I understand that there have to be some pretty strong locks, but I'm perfectly happy for one process to wait for a lock to be freed. Why doesn't it just wait? It fails instantly!

Or am I going to have to use PostgreSQL?

Frustratedly,

David
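For readers trying to reproduce the setup above outside DBD::SQLite, the three settings map onto generic SQLite features. A minimal sketch in Python's sqlite3, used here only because it bundles the same library (`feeds.db` is a stand-in filename for the example):

```python
import sqlite3

conn = sqlite3.connect("feeds.db", timeout=60)  # busy timeout: seconds here,
                                                # milliseconds in DBD::SQLite
mode = conn.execute("PRAGMA journal_mode = WAL").fetchone()[0]
# mode comes back "wal" on a file-backed database once the pragma takes effect
conn.isolation_level = "IMMEDIATE"  # issue BEGIN IMMEDIATE before writes,
                                    # like sqlite_use_immediate_transaction
conn.close()
```

Checking the value the journal_mode pragma returns, rather than assuming it took, is worth doing: SQLite silently keeps the old mode in some situations (for example, an in-memory database cannot switch to WAL).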
Re: SQLite Concurrency Issue
Maybe an obvious question, but can you catch SQLITE_BUSY and just loop and retry in your application code? Is there any reason not to retry after a second or so? Such application retry logic is pretty standard for transaction deadlock issues.

David E. Wheeler <da...@kineticode.com> wrote on 12/22/2010 05:26 PM to DBI Users Mailing List <dbi-users@perl.org>:

> My fellow database gurus,
>
> I love SQLite, but an issue with concurrency is driving me absolutely
> batshit.
>
> I have a program that runs every few minutes, updating an SQLite database
> from feeds. There can be some overlap in the runtime: a new instance may
> start while a previous instance is still running. In such a case, I nearly
> always get the dreaded "database is locked" error -- SQLITE_BUSY.
>
> Here's what I've done to try to avoid this error:
>
> * sqlite_use_immediate_transaction = 1
> * PRAGMA journal_mode = WAL
> * $dbh->sqlite_busy_timeout(60_000);
>
> None of these has made a difference. Now, I recognize that there can be
> some lock contention, but I certainly would think that one process would
> wait for a bit for a lock to release before it gave up the ghost. But when
> I run this stuff, it fails *very* quickly -- the busy timeout obviously is
> not being used.
>
> So my question is: what other tactics can be used to improve the lock
> concurrency situation with SQLite? I understand that there have to be some
> pretty strong locks, but I'm perfectly happy for one process to wait for a
> lock to be freed. Why doesn't it just wait? It fails instantly!
>
> Or am I going to have to use PostgreSQL?
>
> Frustratedly,
>
> David
Re: SQLite Concurrency Issue
On Dec 22, 2010, at 3:30 PM, ericbamba...@discover.com wrote:

> Maybe an obvious question, but can you catch SQLITE_BUSY and just loop and
> retry in your application code? Is there any reason not to retry after a
> second or so? Such application retry logic is pretty standard for
> transaction deadlock issues.

I guess I could do that, but I'm going to have to do quite a bit of extra work to add the retry stuff. I might be able to use a lockfile, as well. I'm hating both of these solutions.

David
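Since the contention here comes from overlapping cron runs, the lockfile variant can stay quite small. A sketch using Python's fcntl on POSIX (the lock path is made up for the example; a Perl script would use flock the same way):

```python
import fcntl
import sys

# Take an exclusive, non-blocking lock on a well-known file. If a
# previous cron run still holds it, exit at once instead of fighting
# over SQLite's own locks.
lock = open("/tmp/feed-update.lock", "w")
try:
    fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    sys.exit(0)  # a previous run is still going; let it finish

# ... do the feed update here; the kernel releases the lock when the
# process exits, even if it crashes, so no cleanup pass is needed.
```

Unlike a create-a-file-and-delete-it scheme, an flock-based lock cannot be left stale by a killed process, which is most of what makes hand-rolled lockfiles painful.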
Re: SQLite Concurrency Issue
On Wed, Dec 22, 2010 at 04:44:28PM -0800, David E. Wheeler wrote:

> On Dec 22, 2010, at 3:30 PM, ericbamba...@discover.com wrote:
>
>> Maybe an obvious question, but can you catch SQLITE_BUSY and just loop
>> and retry in your application code? Is there any reason not to retry
>> after a second or so? Such application retry logic is pretty standard for
>> transaction deadlock issues.
>
> I guess I could do that, but I'm going to have to do quite a bit of extra
> work to add the retry stuff. I might be able to use a lockfile, as well.
> I'm hating both of these solutions.

I haven't used it, but there is an SQLite busy timeout value[1] (in ms) that can be set to specify how long SQLite sleeps when access to a table is (b)locked. The function to get/set that value is also exported via DBD::SQLite[2].

[1] http://www.sqlite.org/c3ref/busy_timeout.html
[2] http://search.cpan.org/~msergeant/DBD-SQLite-1.14/lib/DBD/SQLite.pm#$dbh-%3Efunc%28_$ms,_%27busy_timeout%27_%29

Mark.

--
Mark Lawrence
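A footnote for later readers: in SQLite versions newer than this thread (3.7.15 and up), the same value is also exposed as PRAGMA busy_timeout, which makes it easy to read back and confirm the timeout actually took on a given handle. A sketch in Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA busy_timeout = 60000")  # milliseconds, same unit as
                                             # $dbh->sqlite_busy_timeout
ms = conn.execute("PRAGMA busy_timeout").fetchone()[0]
# ms is now 60000; reading it back is a cheap sanity check when a
# timeout "obviously is not being used"
conn.close()
```

A timeout that reads back as 0 would point at a handle-setup problem (for instance, setting the value on a different connection than the one doing the work) rather than at SQLite ignoring it.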
Re: SQLite Concurrency Issue
On Dec 22, 2010, at 6:32 PM, Mark Lawrence wrote:

> I haven't used it, but there is an SQLite busy timeout value[1] (in ms)
> that can be set to specify how long SQLite sleeps when access to a table
> is (b)locked. The function to get/set that value is also exported via
> DBD::SQLite[2].

That was the second thing I tried. No joy.

David
Re: SQLite Concurrency Issue
On Wed, Dec 22, 2010 at 07:16:38PM -0800, David E. Wheeler wrote:

> > I haven't used it, but there is an SQLite busy timeout value[1] (in ms)
> > that can be set to specify how long SQLite sleeps when access to a table
> > is (b)locked. The function to get/set that value is also exported via
> > DBD::SQLite[2].
>
> That was the second thing I tried. No joy.

My apologies to the list for the noise -- you were quite clear in your mail that you had already tried this. I glossed over it without reading it properly...

Mark.

--
Mark Lawrence
Re: SQLite Concurrency Issue
On Dec 22, 2010, at 8:00 PM, Mark Lawrence wrote:

> > > The function to get/set that value is also exported via DBD::SQLite[2].
> >
> > That was the second thing I tried. No joy.
>
> My apologies to the list for the noise -- you were quite clear in your
> mail that you had already tried this. I glossed over it without reading it
> properly...

No worries, attempts to help greatly appreciated.

Best,

David