Our data processing code is roughly split between internal CPU-side
computation on one hand, and binding that data and streaming it into a
SQL database on the other. We currently do this synchronously, but we
could get significant overlap if the SQL operations ran on a separate
thread.
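For what it's worth, the overlap pattern can be sketched as a producer/consumer setup: the main thread builds batches while a background thread owns the database connection and drains a queue. This is a minimal Python sketch (the `sqlite3` module stands in for the C API here; names like `writer_thread` and `SENTINEL` are illustrative, not from any real codebase):

```python
import queue
import sqlite3
import threading

SENTINEL = None  # hypothetical marker telling the writer thread to stop


def writer_thread(db_path, row_queue):
    """Own the connection on the background thread and drain batches."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS samples (x INTEGER, y INTEGER)")
    while True:
        batch = row_queue.get()
        if batch is SENTINEL:
            break
        # One prepared statement reused for the whole batch.
        conn.executemany("INSERT INTO samples VALUES (?, ?)", batch)
        conn.commit()
    conn.close()


def run_demo(db_path):
    # Bounded queue: the producer blocks rather than racing ahead of the writer.
    row_queue = queue.Queue(maxsize=8)
    t = threading.Thread(target=writer_thread, args=(db_path, row_queue))
    t.start()
    for i in range(10):           # stand-in for the CPU-side computation
        row_queue.put([(i, i * i)])
    row_queue.put(SENTINEL)
    t.join()
    conn = sqlite3.connect(db_path)
    (count,) = conn.execute("SELECT COUNT(*) FROM samples").fetchone()
    conn.close()
    return count
```

The key point is that only the writer thread ever touches the connection, which sidesteps SQLite's threading-mode questions entirely.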

The background-thread implementation itself is fairly straightforward
(we've already done it in other software); however, it means we'd move
from one prepared statement in flight to potentially hundreds or even
thousands.

We could handle this by keeping a 'template' of the prepared statement
around and duplicating it every time we queue work to the background
thread, but I'm not sure whether the overhead of that is significant
enough that we'd be better off building a 'prepared statement pool'
instead.
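As a point of comparison, the pool idea amounts to preparing each statement once and checking it in and out, rather than re-preparing per queued operation. This is a hedged, language-agnostic sketch in Python (the class name `StatementPool` and the factory callback are my own invention; in the C API each slot would hold a `sqlite3_stmt*`, and check-in would call `sqlite3_reset()` and `sqlite3_clear_bindings()` before reuse):

```python
import queue


class StatementPool:
    """Illustrative pool sketch, not a real SQLite API.

    Each slot stands in for a prepared statement; the C equivalent of
    release() would be sqlite3_reset() + sqlite3_clear_bindings(), so
    the statement is ready for the next set of bound parameters without
    paying the prepare cost again.
    """

    def __init__(self, factory, size):
        self._pool = queue.Queue()
        for _ in range(size):
            # Prepare each statement exactly once, up front.
            self._pool.put(factory())

    def acquire(self):
        # Blocks when all statements are in flight; growing the pool
        # on demand would be the other obvious policy.
        return self._pool.get()

    def release(self, stmt):
        # C API equivalent: sqlite3_reset(stmt); sqlite3_clear_bindings(stmt).
        self._pool.put(stmt)
```

The trade-off this makes visible: the pool caps the number of live statements at its size, whereas duplicating a template pays a prepare per queued item but never blocks.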

Any advice would be appreciated.

Thanks,

Brian
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users