> Is it OK nowadays to say always aim to be fully normalised - modern
> RDBMSs are usually powerful enough to cope with most anything?
I suggest that aiming for fully normalised databases is the way to go, for the traditional and still relevant reasons of data integrity, file size, predictability, and generally only having to make data changes once. AFAIK database engines such as SQLite are optimised to cope with joins without suffering slowdowns, and may even be faster, depending on the schema and data set. For instance, an integer primary key is searched more efficiently than text.

> what people would say is normalising is the easy part; the skill
> comes in 'collapsing'

I would agree that "collapsing" the normalized data into something humanly readable is probably the biggest difficulty. Joining tables to show related data is fairly trivial for any database designer worth their salt, but it can be difficult to facilitate updates to that data, where user input requires changing several related tables with related values.

I suggest that more fully exploiting "views" makes this easier to set up and maintain. In particular, using "instead of" triggers on the views greatly simplifies the design while shifting the burden of data integrity to the SQL engine, rather than creating pages of code in your own application. "Instead of" triggers cope with inserts, deletes, and updates on a view, passing the actions on to the underlying tables as desired, or raising exceptions for disallowed data etc. Your application (or even the SQL command line) can then facilitate user changes to data in the views in the same way that it would for a denormalised table. It is fully portable from one application to another.

Tom BareFeet
--
Comparison of SQLite GUI applications:
http://www.tandb.com.au/sqlite/compare/?ml

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
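For what it's worth, the view-plus-"instead of"-trigger approach described above can be sketched in a few lines. This is a minimal example using Python's stdlib sqlite3 module; the schema (authors, books) and the view name book_list are hypothetical, not from the original post:

```python
# Sketch of the technique from the post: normalized tables, a readable
# "collapsed" view joining them, and an INSTEAD OF trigger so the view
# itself accepts updates. Names (authors, books, book_list) are invented
# for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE authors (
    id   INTEGER PRIMARY KEY,   -- integer key: efficient to search and join
    name TEXT NOT NULL UNIQUE
);
CREATE TABLE books (
    id        INTEGER PRIMARY KEY,
    title     TEXT NOT NULL,
    author_id INTEGER NOT NULL REFERENCES authors(id)
);

-- Human-readable "collapsed" presentation of the normalized data.
CREATE VIEW book_list AS
    SELECT b.id AS book_id, b.title, a.name AS author
    FROM books b JOIN authors a ON a.id = b.author_id;

-- INSTEAD OF trigger: the SQL engine translates an UPDATE of the view
-- into updates on the underlying tables, instead of the application
-- carrying that logic itself.
CREATE TRIGGER book_list_update INSTEAD OF UPDATE ON book_list
BEGIN
    UPDATE books SET title = NEW.title WHERE id = OLD.book_id;
    UPDATE authors SET name = NEW.author
    WHERE id = (SELECT author_id FROM books WHERE id = OLD.book_id);
END;

INSERT INTO authors (name) VALUES ('D. Richard Hipp');
INSERT INTO books (title, author_id) VALUES ('SQLite Internals', 1);
""")

# The application updates the view exactly as it would a flat,
# denormalised table.
conn.execute("UPDATE book_list SET title = 'Inside SQLite' WHERE book_id = 1")
row = conn.execute("SELECT title, author FROM book_list").fetchone()
print(row)  # ('Inside SQLite', 'D. Richard Hipp')
```

Analogous INSTEAD OF INSERT and INSTEAD OF DELETE triggers would round out the view, so user-facing code never touches the base tables directly.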