Hi,

There has been a bit more discussion about this on
http://code.google.com/p/apsw/issues/detail?id=59. I guess Roger would
be quite interested in the test case you mentioned. I have now decided
to simply deactivate shared cache again. It seems the benefits aren't
worth it in my case.
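
In case it helps anyone following along, this is roughly what turning
shared cache off looks like from Python. A minimal sketch, assuming
apsw (whose enablesharedcache() wraps sqlite3_enable_shared_cache()
and whose setbusytimeout() installs the usual busy handler); the file
name is just an example:

    import apsw

    # Shared cache is a process-wide setting, so it has to be changed
    # before any connection is opened.
    apsw.enablesharedcache(False)

    conn = apsw.Connection("example.db")  # example file name
    conn.setbusytimeout(500)              # back to the plain 500 ms busy timeout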

As a side note: I think that my own program may actually suffer from
problems with the Python GIL rather than SQLite locking. I discovered
this when I ran in single-threaded mode: the total execution time
dropped by a factor of about 2.3...

Best,
Nikolaus


"Edzard Pasma" <[email protected]> writes:
> Hi Nikolaus,
>
> Interesting to read your findings. I assume you finally disallowed
> read_uncommitted. Trying to explain why the wait times can get longer,
> I come to two reasons.
>
> - The time slots where a read lock can be obtained become smaller.
> This can be significant if there are a lot of transactions with little
> time in between. With busy handling by polling, a reader may miss
> these slots. This will surely be improved by Unlock Notification.
>
> - The time slots where a read lock cannot be obtained become longer.
> This can cause problems if there are long-running transactions. The
> Unlock Notification feature is not going to help here.
>
> I would still like to share an observation from my own test, which
> reproduces the first case. I start four reading threads and one
> writer. Each thread performs a fixed number of cycles. Apart from the
> wait times, the overall elapsed time is also measured. Indeed the
> maximum wait-time gets bigger if shared cache mode is enabled.
> Interestingly, this does not apply to the elapsed time, which is still
> reduced. Thus, an increase in the maximum wait time does not
> necessarily mean worse performance. By the way, this was measured on a
> single-processor system.
>
> Hope this is useful to know.
>
> Best regards, Edzard.
>
> On Fri, Aug 7, 2009 at 1:45 PM, Nikolaus Rath <[email protected]> wrote:
>> Hello,
>>
>> I have a program that continuously runs several threads (about 4 to
>> 10) which concurrently read and write to an SQLite database. From
>> http://www.sqlite.org/sharedcache.html it seems to me that I should
>> be using shared cache mode.
>>
>> Until now, I have set the busy timeout to 500 msec and never had to
>> deal with SQLITE_BUSY errors. However, there seems to be no
>> corresponding function for the SQLITE_LOCKED errors that are
>> generated in shared cache mode. So I changed the code to manually
>> sleep for a random amount (0 to 100 msec) and then try to execute
>> the statement again if it encounters SQLITE_LOCKED. But now the
>> threads are often waiting for more than 1 second before they finally
>> get their query executed.
>>
>> I suspect this is because the random sleep is wasting a lot of time,
>> while without shared cache (and with the busy timeout enabled) a thread
>> blocks and gets revived as soon as the lock on the database is
>> released.
>>
>>
>> How can I avoid this problem? Is there a way to set a busy timeout
>> for SQLITE_LOCKED as well? Or is there a more clever method than
>> sleeping for random amounts? Or should I just avoid using shared
>> cache mode?
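
For reference, the random-sleep retry mentioned in the quoted question
above looks roughly like the sketch below. It assumes apsw, where
SQLITE_LOCKED surfaces as apsw.LockedError; the function name and the
5 second cap are just illustrative:

    import random
    import time

    import apsw

    def execute_with_retry(cursor, sql, bindings=(), max_wait=5.0):
        # Poll-style workaround for SQLITE_LOCKED (table lock held by
        # another connection in shared-cache mode): sleep a random
        # 0-100 ms and try again, giving up after max_wait seconds.
        deadline = time.time() + max_wait
        while True:
            try:
                return cursor.execute(sql, bindings)
            except apsw.LockedError:
                if time.time() > deadline:
                    raise
                time.sleep(random.uniform(0, 0.1))

The obvious drawback, as discussed above, is that a thread may keep
sleeping long after the lock has actually been released, which is what
the Unlock Notification feature is meant to address.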


   -Nikolaus

-- 
 »Time flies like an arrow, fruit flies like a Banana.«

  PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6  02CF A9AD B7F8 AE4E 425C

_______________________________________________
sqlite-users mailing list
[email protected]
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
