Jean-Christophe,
from my experience it depends. We have several clients accessing a database
shared from a 2003 server and no corruption has occurred so far, though a
client would occasionally freeze. I also ran some artificial tests in which
several clients wrote on a constant basis, and there were cases where one of
them would freeze as well. Consider running some die-hard tests with your
own configuration; it may not follow the same scheme as mine. The only
extra thing you should do from time to time is run PRAGMA integrity_check.
After a whole-night test and thousands of successful writes from several
computers you will at least have probability arguments on your side )

Max Vlasov,
maxerist.net

On Wed, Apr 28, 2010 at 9:43 AM, Jean-Christophe Deschamps 
<j...@q-e-d.org>wrote:

> Hi gurus,
>
> I'm aware of the limitations that generally preclude using SQLite over
> a network.
> Anyway do you think that doing so with every read or write operation
> wrapped inside an explicit exclusive transaction can be a safe way to
> run a DB for a group of 10 people under low load (typically 2Kb read or
> 100b writes per user per minute)?
> Schema will be very simple and queries / inserts as well.  Speed is not
> a real concern.
>
> So do you believe DB corruption can still occur in this context,
> knowing that the use will be for a very limited time (2-3 weeks) and
> low volume (~50K rows)?
>
> Using one of the available client/server wrappers is not a suitable option.
> This is targeted at Windows, XP or later.
>
> Do you have a better idea to make the thing more robust, even at
> additional cost in concurrency and/or speed?
>
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
