On 2/23/20 8:31 AM, Olaf Schmidt wrote:
Am 21.02.2020 um 02:24 schrieb Chip Beaulieu:
I have a table with 4.5 million records with full text indexing. Reads are very fast, but deleting / inserting / updating takes on average about 50 seconds per record. I often do batches of 30,000 deletes / inserts at a time. The last batch took 10 hours to complete.

These 30,000 deletes+inserts indicate that they were the
verse records of a complete Bible text, right?

And your 4.5 million records in total indicate that you have
about 140 different Bible versions in your DB?

I suspect it’s got something to do with the triggers more than the indexes.
Could be - but my guess is that your primary DB changes come
from deleting or inserting the verses of whole Bible texts.

So why not hold each Bible in its own table?

A set of 140 tables in such a "BibleVersions.db" is not
uncommon, and SQLite can manage it with good performance.
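For illustration, a minimal sketch of that per-version layout using Python's sqlite3 module (the `verses_<version>` naming and the column list are my assumptions, not from the original post):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for BibleVersions.db

def create_version_table(con, version):
    # The table name must be built by string formatting, so it has to be
    # validated/whitelisted first -- it cannot be a bound parameter.
    assert version.isidentifier()
    con.execute(
        f"CREATE TABLE IF NOT EXISTS verses_{version} ("
        "  book INTEGER, chapter INTEGER, verse INTEGER, text TEXT)"
    )

create_version_table(con, "kjv")
con.execute("INSERT INTO verses_kjv VALUES (1, 1, 1, 'In the beginning...')")
row = con.execute("SELECT text FROM verses_kjv").fetchone()
```

Dropping a whole version then becomes a single `DROP TABLE`, which avoids 30,000 row-level deletes and their triggers entirely.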

I'm not sure that form of division would be good. One basic rule of database normalization is that you don't break up data into separate tables based on the value of one of the fields, because a query can't take the name of the table to look up from a field value it retrieves.
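The point above can be demonstrated with a small sketch (table and column names are hypothetical): SQL parameter binding applies only to values, never to identifiers, so the table to read from cannot come from query data.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE verses (version TEXT, text TEXT)")
con.execute("INSERT INTO verses VALUES ('kjv', 'In the beginning...')")

# A value can be a bound parameter...
ok = con.execute(
    "SELECT text FROM verses WHERE version = ?", ("kjv",)
).fetchone()

# ...but an identifier (a table name) cannot: the placeholder is only
# legal in value position, so preparing this statement fails.
try:
    con.execute("SELECT text FROM ?", ("verses",))
except sqlite3.OperationalError as e:
    err = str(e)
```

With one table per version, the application must instead assemble the table name in code, which rules out joining or querying across versions in plain SQL.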

--
Richard Damon

_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
