Yes, if they are lock-bound, then they need to find the thread count that
reduces the locking overhead to the point where it's no longer degrading
performance too much.  Though I guess the OP really didn't say that (more
CPUs may spend more time in spinlocks and still spend less wallclock time).

Another thing to look at is whether any queries can be scheduled more
effectively.  Having hundreds of completely-unrelated queries seems unlikely
to me.  More likely, you have a smaller number of distinct queries that are
being run with various different bind parameters.  Preparing a particular
query once, then looping over the sets of bind parameters on one thread, is
probably going to be _much_ more efficient.
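
For what it's worth, in the C API that pattern looks roughly like the
sketch below.  The table t(a, b) and the parameter array are invented for
illustration, and error handling is abbreviated:

  /* Prepare once, then bind/step/reset in a loop, reusing the compiled
   * statement.  t(a, b) is a hypothetical table. */
  #include <sqlite3.h>

  int insert_all(sqlite3 *db, const int (*params)[2], int n)
  {
      sqlite3_stmt *stmt;
      int rc = sqlite3_prepare_v2(db,
          "INSERT INTO t(a, b) VALUES (?1, ?2)", -1, &stmt, 0);
      if (rc != SQLITE_OK) return rc;

      for (int i = 0; i < n; i++) {
          sqlite3_bind_int(stmt, 1, params[i][0]);
          sqlite3_bind_int(stmt, 2, params[i][1]);
          rc = sqlite3_step(stmt);     /* execute with this parameter set */
          if (rc != SQLITE_DONE) break;
          sqlite3_reset(stmt);         /* rewind; keep the compiled plan */
          sqlite3_clear_bindings(stmt);
      }
      sqlite3_finalize(stmt);
      return rc == SQLITE_DONE ? SQLITE_OK : rc;
  }

Wrapping a loop like that in a single BEGIN/COMMIT transaction usually
helps even more, since each implicit transaction otherwise pays its own
commit overhead.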

-scott


On Fri, Mar 3, 2017 at 5:03 PM, Warren Young <war...@etr-usa.com> wrote:

> On Mar 3, 2017, at 5:51 PM, Keith Medcalf <kmedc...@dessus.com> wrote:
> >
> > No, the good rule of thumb is to allocate one thread per CPU.
>
> It depends on the workload.  Parallel make (e.g. “make -jN” in GNU make)
> typically improves in speed past N=core count to about 1.5x the core count.
>
> SQLite seems like a similar kind of workload: lots of CPU *and* disk I/O,
> so that you need a bit of oversubscription to keep all the cores busy,
> because some threads/processes will be stalled on I/O.
>
> Not that any of this is relevant at the current point, since the OP is
> currently neither I/O-bound nor CPU-bound, but lock-bound.