On Tue, Sep 10, 2019 at 05:25:38PM +0200, mailing lists wrote:
> Hi,
> 
> I cannot really put all the inserts into one transaction, because in case of a 
> failure I lose all the already inserted data. I did run some tests, though: 
> there is hardly any performance gain left when doing 1000 or 10 000 
> insertions in one transaction, including immediate insertion into the indices 
> (in my case the difference is only a few per cent).

What do you mean by "lose data"? Do you need the rows to be immediately available 
via SQL, or just written to persistent storage? In the latter case you can 
implement your own data cache, such as sequential log files, which are 
periodically (and/or on demand) rotated, then asynchronously parsed, 
inserted into the SQLite database with an optimized CACHE_SIZE, transaction size, 
journal mode etc., and deleted only after a successful commit. This shifts the 
burden from SQL to the filesystem, which is less constrained by the natural data 
structure and may perform better.
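
As a minimal sketch of that scheme (in Python, since the original poster's 
language wasn't stated; the directory, file names, table schema and PRAGMA 
values below are only illustrative assumptions):

    import glob
    import os
    import sqlite3

    LOG_DIR = "ingest_logs"   # hypothetical directory for the sequential log files
    DB_PATH = "data.db"       # hypothetical database path

    def append_record(line):
        """Append one record to the current log file: cheap, purely sequential I/O."""
        os.makedirs(LOG_DIR, exist_ok=True)
        with open(os.path.join(LOG_DIR, "current.log"), "a") as f:
            f.write(line + "\n")
            f.flush()
            os.fsync(f.fileno())  # record is on persistent storage once this returns

    def rotate():
        """Rename the current log so new writes go to a fresh file."""
        src = os.path.join(LOG_DIR, "current.log")
        if not os.path.exists(src):
            return None
        dst = os.path.join(LOG_DIR, "batch-%d.log" % os.stat(src).st_mtime_ns)
        os.rename(src, dst)
        return dst

    def load_batches():
        """Asynchronously parse rotated logs, one transaction per log file."""
        con = sqlite3.connect(DB_PATH)
        con.execute("PRAGMA journal_mode=WAL")    # illustrative tuning
        con.execute("PRAGMA cache_size=-200000")  # roughly 200 MB of page cache
        con.execute("CREATE TABLE IF NOT EXISTS t(value TEXT)")
        for path in sorted(glob.glob(os.path.join(LOG_DIR, "batch-*.log"))):
            with open(path) as f, con:            # 'with con' commits, or rolls back on error
                con.executemany("INSERT INTO t(value) VALUES (?)",
                                ((line.rstrip("\n"),) for line in f))
            os.remove(path)                       # delete the log only after a successful commit
        con.close()

The writer only ever appends and fsyncs; the loader can run on its own schedule 
and batch as aggressively as it likes, and a crash at any point loses nothing 
that has already reached a log file.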

Valentin Davydov.
