Hi Simon,
thank you for your answer.

On Tue, Aug 7, 2018 at 2:09 PM, Simon Slavin <slav...@bigfraud.org> wrote:

> On 7 Aug 2018, at 12:55pm, Gerlando Falauto <gerlando.fala...@gmail.com>
> wrote:
>
> > I'm trying to implement a logging system based on SQLite, using python3
> > package apsw.
> > There's one process constantly writing and another one reading.
> > From time to time I get an exception from the writer, complaining the
> > database is locked.
>
> Please set a timeout of at least 10,000 milliseconds for /all/ connections,
> both reading and writing:
>
>     Connection.setbusytimeout(10000)
>
> <https://rogerbinns.github.io/apsw/connection.html?highlight=timeout#apsw.
> Connection.setbusytimeout>
>
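For concreteness, here is what that setting looks like with the standard-library sqlite3 module (apsw's setbusytimeout(10000) corresponds to sqlite3's timeout=10.0; the database path and table are made up for illustration):

```python
import os
import sqlite3
import tempfile

# Made-up database path for illustration.
db_path = os.path.join(tempfile.mkdtemp(), "log.db")

# The writing and the reading process each open their own connection,
# each with a 10-second busy timeout: on SQLITE_BUSY the library retries
# for up to 10 seconds before raising "database is locked".
writer = sqlite3.connect(db_path, timeout=10.0)
reader = sqlite3.connect(db_path, timeout=10.0)

writer.execute("CREATE TABLE IF NOT EXISTS log (ts REAL, msg TEXT)")
writer.execute("INSERT INTO log VALUES (1.0, 'first entry')")
writer.commit()

count = reader.execute("SELECT COUNT(*) FROM log").fetchone()[0]
```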

Hmm... are you saying the writer could potentially block for up to 10
seconds?
If so, I should rethink the whole logging process, because it might end
up losing incoming data while waiting that long.
In any case, I still don't understand whether the reader blocks the
writer or not, and during which phase.
A reader could potentially take a long time (even longer than 10 seconds)
to read all the data...
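To make the question concrete: my understanding is that in WAL journal mode a reader holding an open read transaction does not block the writer; it only keeps its own snapshot of the data. A minimal sketch with the standard-library sqlite3 module (the file name is made up; apsw should behave the same at the SQLite level):

```python
import os
import sqlite3
import tempfile

# Made-up database path for illustration.
path = os.path.join(tempfile.mkdtemp(), "log.db")

writer = sqlite3.connect(path, isolation_level=None, timeout=10.0)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE log (msg TEXT)")

reader = sqlite3.connect(path, isolation_level=None, timeout=10.0)
reader.execute("BEGIN")  # open a long-lived read transaction
snapshot = reader.execute("SELECT COUNT(*) FROM log").fetchone()[0]

# In WAL mode this insert is NOT blocked by the open read transaction;
# the reader simply keeps seeing its earlier snapshot.
writer.execute("INSERT INTO log VALUES ('entry')")

reader.execute("COMMIT")  # end the read transaction
fresh = reader.execute("SELECT COUNT(*) FROM log").fetchone()[0]
```

The open read transaction sees the old state (snapshot is 0) while the writer commits concurrently; only after the reader ends its transaction does a new query see the inserted row (fresh is 1).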


> If you're already doing this, please post again, telling us whether you're
> using two separate connections or passing the connection handle from
> process to process.
>

It's two separate connections. Is that bad or good?

Thank you,
Gerlando
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
