On 2018/07/29 11:47 AM, Rob Willett wrote:
> John,
>
> Thanks for the prompt reply and a very good question.
>
> We've dropped the indexes and the commit is now very quick, approx. two seconds.
>
> However, the overall performance of the run is much the same: other areas of the code are now significantly slower, whereas before they were quick.
>
> Where were you going with that question?

Possibly you have too many indexes.

An index is an expensive thing for a DB to maintain; it's only worth having when the improvement it gives look-ups significantly outweighs the cost of keeping it updated.

The typical process here is to remove all indexes, then add them back one by one, running all your queries after each and noting which indexes add benefit and which do not. Then, once all are installed, remove them again in the same order (i.e. if you added A, then B, then C, remove A, then B, then C) and measure again. You will quickly find the useless indexes.
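That add-one-measure-again loop can be sketched in Python with the sqlite3 module. The table, index names, and query below are hypothetical stand-ins; substitute your own schema and workload:

```python
import sqlite3
import time

# Hypothetical schema and data -- replace with your own.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, road TEXT, severity INTEGER)")
conn.executemany("INSERT INTO events (road, severity) VALUES (?, ?)",
                 [(f"road{i % 50}", i % 5) for i in range(10000)])

# Candidate indexes to evaluate one at a time (names are made up).
candidate_indexes = {
    "idx_road":     "CREATE INDEX idx_road ON events(road)",
    "idx_severity": "CREATE INDEX idx_severity ON events(severity)",
}
query = "SELECT COUNT(*) FROM events WHERE road = 'road7'"

def time_query(q, repeats=200):
    """Run the query repeatedly and return total elapsed wall-clock time."""
    start = time.perf_counter()
    for _ in range(repeats):
        conn.execute(q).fetchone()
    return time.perf_counter() - start

baseline = time_query(query)          # no indexes at all
results = {}
for name, ddl in candidate_indexes.items():
    conn.execute(ddl)                 # add the next index...
    results[name] = time_query(query) # ...and re-measure the workload

print(f"no indexes: {baseline:.4f}s")
for name, elapsed in results.items():
    print(f"after adding {name}: {elapsed:.4f}s")
```

The removal pass works the same way in reverse: `DROP INDEX idx_road`, re-time, and compare. An index that never moves the numbers is a candidate for deletion.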

This is the very last step in design though; it's the kind of optimization everyone talks about when they warn against "premature optimization". A prior step would be to study the queries and see if you can find better indexes, or ones that cover (i.e. are helpful with) a wider range of queries, etc.
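You can check whether a query is actually served by an index with EXPLAIN QUERY PLAN. A minimal sketch, again with made-up table and index names: when an index contains every column a query touches, SQLite reports a "covering index" and never has to visit the table at all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (road TEXT, severity INTEGER, delay INTEGER)")

# This index holds every column the query below touches, so SQLite can
# answer from the index alone -- a covering index.
conn.execute("CREATE INDEX idx_cover ON events(road, severity, delay)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT severity, delay FROM events WHERE road = ?", ("A40",)
).fetchall()
for row in plan:
    print(row[3])  # the 'detail' column should mention COVERING INDEX idx_cover
```

One wide covering index like this can sometimes replace several narrow ones, which cuts the per-write maintenance cost.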

Apart from all that... did you add any triggers since the big DB? What is re-forming the previously "big" sets of data records into the new streamlined set? If that is a trigger or UDF, doesn't it eat time?
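The point about triggers is easy to demonstrate: a trigger body runs once per affected row, so its cost scales with insert volume and shows up in exactly the kind of bulk-load timing being discussed. A minimal sketch with a hypothetical `raw`/`summary` pair of tables:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("CREATE TABLE summary (payload TEXT, n INTEGER)")

def load(n=5000):
    """Bulk-insert n rows inside one transaction and return elapsed time."""
    start = time.perf_counter()
    with conn:
        conn.executemany("INSERT INTO raw (payload) VALUES (?)", [("x",)] * n)
    return time.perf_counter() - start

plain = load()

# Hypothetical trigger that re-forms each new raw row into the summary
# table; it fires once per inserted row.
conn.execute("""
    CREATE TRIGGER raw_ai AFTER INSERT ON raw BEGIN
        INSERT INTO summary (payload, n) VALUES (NEW.payload, 1);
    END
""")
with_trigger = load()
print(f"without trigger: {plain:.4f}s, with trigger: {with_trigger:.4f}s")
```

If the streamlining work moved from application code into a trigger or UDF, the time hasn't disappeared; it has just moved into the statements that fire it.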


Cheers,
Ryan

_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
