-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.


On Tuesday, 6 August, 2019 04:35, test user <example.com.use...@gmail.com> 
wrote:

>So in summary, there is no difference in the multi threaded
>performance that can be gained between SERIALIZED and MULTITHREADED 
>(aside from the mutex overhead)? The only difference is SERIALIZED 
>enforces correct usage at a small overhead cost?

That is my understanding, yes.  However, the cost of 
obtaining/checking/releasing a mutex may vary by OS implementation, so the 
definition of "small" depends on the underlying implementation.
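
For reference, here is a minimal sketch (error handling mostly omitted; 
"test.db" is just a placeholder name) of how the threading mode is selected 
through the public C API, either globally with sqlite3_config() or per 
connection via the open flags:

#include <sqlite3.h>
#include <stdio.h>

int main(void)
{
    sqlite3 *db;

    /* Global default: SERIALIZED (full mutexing).  Must be called before
    ** the library is initialized / the first connection is opened. */
    sqlite3_config(SQLITE_CONFIG_SERIALIZED);
    /* sqlite3_config(SQLITE_CONFIG_MULTITHREAD); would select MULTITHREADED */

    /* Per-connection override: SQLITE_OPEN_NOMUTEX asks for MULTITHREADED
    ** behaviour for this handle, SQLITE_OPEN_FULLMUTEX for SERIALIZED. */
    if (sqlite3_open_v2("test.db", &db,        /* "test.db" is a placeholder */
                        SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
                        SQLITE_OPEN_FULLMUTEX, NULL) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }
    sqlite3_close(db);
    return 0;
}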

>So for example, if I had:

>- 8 cores
>- 8 threads
>- 8 db connections, 1 per thread
>- 1 database file
>- x amount of read requests per second

>If I were to load balance x requests over each of the 8 threads, all
>the reads would complete concurrently when in SERIALIZED mode, with WAL
>enabled?

Yes, within the ability of the OS to concurrently perform any required I/O 
(including I/O from the cache).  That is, as far as the library is concerned, 
they would all operate (compute) in parallel.  Whether something else (at the 
OS or hardware level) imposes extra overhead or serialization is not something 
that user code can guarantee against.
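
A hedged sketch of that scenario using pthreads ("test.db", the table name 
"t", and the query are placeholders, not anything from your setup): one 
connection per thread, WAL journal mode, read-only queries running 
concurrently:

#include <sqlite3.h>
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8

static void *reader(void *arg)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;
    (void)arg;

    /* Each thread has its own connection, so there is no contention on a
    ** shared connection mutex even in SERIALIZED mode. */
    if (sqlite3_open_v2("test.db", &db, SQLITE_OPEN_READONLY, NULL) != SQLITE_OK)
        return NULL;

    if (sqlite3_prepare_v2(db, "SELECT count(*) FROM t", -1, &stmt, NULL) == SQLITE_OK) {
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("rows: %lld\n", (long long)sqlite3_column_int64(stmt, 0));
        sqlite3_finalize(stmt);
    }
    sqlite3_close(db);
    return NULL;
}

int main(void)
{
    sqlite3 *db;
    pthread_t tid[NTHREADS];
    int i;

    /* Switch the database to WAL once; the journal mode is persistent. */
    sqlite3_open("test.db", &db);
    sqlite3_exec(db, "PRAGMA journal_mode=WAL", NULL, NULL, NULL);
    sqlite3_close(db);

    for (i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, reader, NULL);
    for (i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}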

>Assume other bottlenecks in the system are not an issue (like disk
>speed).

>I'm just trying to confirm that SERIALIZED will not queue up requests
>for (1 file, multiple connections to that file, read-only requests).

No.  SERIALIZED serializes access to a connection's shared in-memory data 
structures so that multiple threads cannot access that data concurrently.  
Once the mutex is obtained, any further serialization is a concurrency issue 
handled by the OS.
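
To illustrate what that connection mutex actually protects, here is a small 
self-contained sketch (pthreads, in-memory database, purely illustrative): 
two threads deliberately share one connection, which is safe with 
SQLITE_OPEN_FULLMUTEX because every call takes the connection's mutex first; 
sharing the handle the same way after opening with SQLITE_OPEN_NOMUTEX would 
not be safe:

#include <sqlite3.h>
#include <pthread.h>

static sqlite3 *shared_db;          /* ONE handle, deliberately shared */

static void *worker(void *arg)
{
    int i;
    (void)arg;
    /* Under SERIALIZED (FULLMUTEX) each call takes shared_db's mutex first,
    ** so the two threads interleave safely on the same handle. */
    for (i = 0; i < 1000; i++)
        sqlite3_exec(shared_db, "SELECT 1", NULL, NULL, NULL);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    sqlite3_open_v2(":memory:", &shared_db,
                    SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
                    SQLITE_OPEN_FULLMUTEX, NULL);
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    sqlite3_close(shared_db);
    return 0;
}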

Be aware, however, that features such as SHARED_CACHE introduce extra 
serialization between connections attached to the same shared cache, to 
prevent simultaneous access to the cache data structures by threads running on 
different connections, even when no single connection has multiple threads 
contending on it.
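
If you want to be certain a connection does not participate in a shared cache, 
the open flags control it; a brief sketch (placeholder file name):

#include <sqlite3.h>

/* Open with a private page cache even if shared cache was enabled globally
** with sqlite3_enable_shared_cache(1); "test.db" is a placeholder name. */
int open_private_cache(sqlite3 **db)
{
    return sqlite3_open_v2("test.db", db,
                           SQLITE_OPEN_READONLY | SQLITE_OPEN_PRIVATECACHE,
                           NULL);
}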

So basically, if you executed 8 SELECT statements, each on a separate 
connection, each on a separate core, with no I/O limitations (that is, the 
entire database contained in the OS block cache), no memory limitations, and 
each connection using its own private cache (no shared cache), you should 
expect the limiting factor to be CPU only.  Note that in that scenario there 
is not much difference between one process with 8 threads (one connection per 
thread, one thread per core, each executing one SELECT) and 8 processes each 
with a single thread executing the same SELECT statements (one per process, 
each process dispatched to its own core), aside from any overhead the OS 
imposes for thread versus process handling.





