On 2/23/20 3:06 PM, R.Smith wrote:
On 2020/02/23 21:23, Richard Damon wrote:
On 2/23/20 8:31 AM, Olaf Schmidt wrote:
A count of 140 tables in such a "BibleVersions.db" is not
uncommon and can be handled by SQLite with good performance.
I'm not sure that form of division would be good. One basic rule of
database normalization is that you don't split data into separate
tables based on the value of one of the fields, because then the table
to look data up from can no longer be chosen by a field value returned
in a query.
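As a rough illustration of that point (the table and column names here
are made up, not the actual schema under discussion):

  -- Normalized: one table, the version is a column, so "which version"
  -- can be an ordinary query value or parameter.
  CREATE TABLE verse (
      version_id INTEGER NOT NULL,
      book       INTEGER NOT NULL,
      chapter    INTEGER NOT NULL,
      verse      INTEGER NOT NULL,
      body       TEXT    NOT NULL,
      PRIMARY KEY (version_id, book, chapter, verse)
  );
  SELECT body FROM verse WHERE version_id = ? AND book = 43 AND chapter = 3;

  -- Split per version: the table name is fixed when the SQL is written
  -- (verse_kjv, verse_niv, ...), so the version cannot come out of the
  -- query itself.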
It is far less of a bad design than having an AFTER DELETE trigger
doing trivial per-row processing when you expect to delete 30K records
at a time due to a "well-designed" 1NF schema.
Firstly, good design guides are only guides.
Secondly, having said that, either embrace the full idealism or none
of it; doing some things the good way and others not hampers the
"goodness" of the overall system.
Lastly, we usually use ON DELETE CASCADE Foreign Key relations to do
what you are doing with the trigger, but of course you are using FTS5
and I do not know if it can be done the FK way, or indeed how much
better the FK way will be considering the same Indexes remain.
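For reference, a minimal sketch of the FK approach on ordinary
(non-FTS5) tables, with hypothetical names; note SQLite only enforces
this when foreign keys are switched on:

  PRAGMA foreign_keys = ON;

  CREATE TABLE version (
      id   INTEGER PRIMARY KEY,
      name TEXT NOT NULL
  );
  CREATE TABLE verse (
      id         INTEGER PRIMARY KEY,
      version_id INTEGER NOT NULL
                 REFERENCES version(id) ON DELETE CASCADE,
      body       TEXT NOT NULL
  );

  -- Deleting the parent row removes its child rows without any trigger:
  DELETE FROM version WHERE name = 'some-version';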
Note - I don't have a good solution here. If it were me I would
immediately kill those triggers and see whether the work can be done
smarter (or is needed at all), or even whether it can be done in
application code - OR - get rid of the 1NF schema, go with the
suggested Bible-per-Table method, and keep the triggers. (That helps
nothing if the deletes end up removing one record, and firing its
trigger, in each of 30K tables; it is only useful if it brings the
delete count down.)
If you cannot live without the triggers, nor bring down the
delete count, maybe do it in code. One quick way I can think of is
using the ON DELETE trigger to add those IDs to a "to_be_deleted"
table and then deleting everything from the second table with "WHERE
ID IN (SELECT ID FROM to_be_deleted)" before committing the
transaction, perhaps combined with dropping and re-creating the
Indexes - but of course this needs testing and might not be suitable
for reasons I am unaware of in your use case.
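A rough sketch of that idea, with hypothetical table names and untested
against the real schema; the trigger only records IDs, and the heavy
delete happens once as a single set-based statement:

  CREATE TABLE to_be_deleted (id INTEGER PRIMARY KEY);

  CREATE TRIGGER verse_ad AFTER DELETE ON verse
  BEGIN
      INSERT OR IGNORE INTO to_be_deleted(id) VALUES (old.id);
  END;

  BEGIN;
  DELETE FROM verse WHERE version_id = :doomed_version;
  DELETE FROM verse_detail
   WHERE id IN (SELECT id FROM to_be_deleted);
  DELETE FROM to_be_deleted;
  COMMIT;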
Either way, you are probably in for some lengthy processing.
Best of luck!
Ryan
My thought is that a modification (making multiple tables, one per
version) that drops significant functionality (the version is now
hard-coded into the query text and can no longer be determined as part
of a query) is not something to do lightly. It also isn't likely to
improve things much unless it allows the massive delete cycle to be
replaced with a drop of the table.
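That one case is where the per-version layout would pay off; a hedged
sketch with hypothetical names, showing that removing a version becomes
a single schema operation instead of a large row-by-row delete:

  -- One table per version: dropping the whole version at once,
  -- no per-row triggers fire.
  DROP TABLE IF EXISTS verse_kjv;

  -- versus, in the single-table layout:
  DELETE FROM verse WHERE version_id = :kjv_id;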
One big question is why a version apparently needs to be deleted
wholesale in the first place and, if it really does, whether a better
schema could support that operation more directly.
--
Richard Damon