"Shawn Anderson" <[EMAIL PROTECTED]> wrote in
message news:[EMAIL PROTECTED]

> I've read many things, some conflicting, about concurrent
> users on a Sqlite database.
> The feeling I get is that Sqlite is good for a standalone desktop
> database, but should not be used in a concurrent user situation
> (like a website with many users).

> Is there a definitive answer?
Of course not... ;-)

If your scenario is a "typical one" - with many more reads
than writes against the DB - then SQLite is a wonderful
and fast-performing multiuser engine, if hosted behind
an appropriate server layer.

From my experience with the server layer we use here
(on Windows, or Linux/Wine) - using a thread pool adjusted
to 8 threads, each thread hosting a separate SQLite connection
(no shared cache) and working only with the default
SQLite locking helpers (watching for SQLITE_BUSY, etc.) -
the read direction scales very well with concurrent
requests.
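The pattern described above can be sketched roughly as follows. This is only an illustration, not the actual server implementation from the post (which runs on Windows/Wine); it uses Python's sqlite3 module as a stand-in, and the database path, pool size, and queue-based dispatch are assumptions:

```python
import queue
import sqlite3
import threading

def run_pool(db_path, queries, pool_size=8):
    """Serve read queries from a fixed pool of worker threads,
    each holding its own private SQLite connection (no shared cache)."""
    jobs, results = queue.Queue(), queue.Queue()

    def worker():
        # One private connection per thread, opened inside the thread.
        con = sqlite3.connect(db_path)
        # Instead of hand-rolling a SQLITE_BUSY retry loop, let SQLite
        # wait up to 2 seconds when the database is briefly locked.
        con.execute("PRAGMA busy_timeout = 2000")
        while True:
            sql = jobs.get()
            if sql is None:          # poison pill: shut this worker down
                break
            results.put(con.execute(sql).fetchall())
        con.close()

    threads = [threading.Thread(target=worker) for _ in range(pool_size)]
    for t in threads:
        t.start()
    for q in queries:
        jobs.put(q)
    for _ in threads:
        jobs.put(None)               # one poison pill per worker
    for t in threads:
        t.join()
    return [results.get() for _ in range(results.qsize())]
```

The key point, matching the post, is that connections are never shared between threads - each worker owns its connection for its whole lifetime.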
With 8 concurrent clients (firing requests in an endless
loop) one can reach ca. 1200 responses/sec on smaller
resultsets (e.g. 1-10 rows of 5 columns).
And for somewhat larger resultsets (ca. 800 rows
of 15 columns, e.g. "Select * from Orders" against
NWind.db) the server can deliver 300-600 responses
per second.
These are roundtrip results, based on disconnected
recordsets fully transferred to the client side over sockets
and deserialized there (including visualization in a DataGrid).
All that on a cheap dual-core server machine
(2.2GHz AMD) with 2GB RAM.

In our tests we found that the whole thing works ca. 2-3
times faster than MS SQL Server against the
same NWind database (using disconnected ADO
recordsets and the appropriate serialization methods) in
that concurrent-readers scenario.

In fact it works so fast that the limiting factor is no longer
CPU usage, but network bandwidth.
We had to switch to 1GBit network hardware to bring the
app server to its CPU limits (ca. 34 MByte/sec then
on the GBit network adapter whilst serving larger
resultsets under full CPU load).
With a 100MBit connection the server reached
only 30% CPU load and could not go any higher - the
network adapter was hitting its limit at ca. 10-11 MByte/sec,
which is the maximum on 100MBit hardware.

So, as said, the multiple-readers scenario works very well.

Regarding "full-DB locks" whilst writing...
That sounds "a bit frightening" at first glance, but since
the insert performance of SQLite is also blazing fast (e.g.
ca. 100,000 records per second for an insert of 6 mixed
columns, using direct binding and wrapped in a transaction),
this is also a non-issue in my opinion - if you design your
client apps in a way that your DB updates/inserts
are done in a "granular" fashion, meaning many smaller
transactions are better than single "mega-transactions".
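A granular batch insert of the kind described above might look like this minimal sketch - again using Python's sqlite3 module rather than the direct C-level binding the post refers to, with a made-up table name:

```python
import sqlite3

def insert_batch(con, rows):
    """Insert one small batch of rows inside a single explicit transaction."""
    # The with-block issues BEGIN on entry and COMMIT on normal exit
    # (ROLLBACK on exception) - one short transaction per batch.
    with con:
        con.executemany(
            "INSERT INTO samples(a, b, c, d, e, f) VALUES (?, ?, ?, ?, ?, ?)",
            rows,  # bound parameters: the statement is prepared once, reused per row
        )
```

Many small batches like this keep each write lock short, so concurrent readers only ever see brief SQLITE_BUSY windows instead of one long one.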

To give a concrete example:
A client sends an insert job with 100 new records whilst
the DB server is under full "read stress". Going by the
approximate insert-performance values mentioned above, the
DB would be blocked for only ca. 1-2 msec (delivering
SQLITE_BUSY to the readers) whilst inserting the 100 records
within a BEGIN IMMEDIATE transaction.
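A writer along those lines - taking the write lock up front with BEGIN IMMEDIATE and backing off briefly if the database is busy - could be sketched like this. The retry count, sleep interval, and table name are assumptions for illustration:

```python
import sqlite3
import time

def write_batch_immediate(db_path, rows, retries=50):
    """Insert a batch under BEGIN IMMEDIATE, retrying briefly on SQLITE_BUSY."""
    # isolation_level=None gives us manual transaction control.
    con = sqlite3.connect(db_path, isolation_level=None)
    try:
        for _ in range(retries):
            try:
                # Reserve the write lock immediately, before any INSERT runs.
                con.execute("BEGIN IMMEDIATE")
                break
            except sqlite3.OperationalError:
                # "database is locked" corresponds to SQLITE_BUSY: back off.
                time.sleep(0.01)
        else:
            raise RuntimeError("could not acquire write lock")
        con.executemany("INSERT INTO jobs(payload) VALUES (?)", rows)
        con.execute("COMMIT")
    finally:
        con.close()
```

Because the whole batch commits as one short transaction, readers are turned away with SQLITE_BUSY only for the brief insert window, as in the 100-record example above.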

Hope that helps you get an impression of what this engine
is good for in a multiuser scenario. For me it is by far
underrated with regard to these concurrent scenarios - try to
find a good (threadpool-based) server implementation
(or write your own) and try it yourself.

Olaf Schmidt



_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users