I ran some more tests with my main DB converted to sqlite, using weectl to run reports with the default Seasons skin.
The times remained basically constant at about 4.5 minutes of user+sys CPU as I kept deleting old records. Once I got below 500k records the times dropped in proportion to the number of archive records, extrapolating back to about 20 seconds for no records at all (which is still a crazy long time). The various gaps between the file creation times also dropped in proportion.

The bit I wrote in the previous post about memory use did not apply to this set of tests: the value just sat steadily at 120 MB. I think I have lost track of some of the tests I was doing.

On Tuesday 26 December 2023 at 6:04:05 pm UTC+10 Cameron D wrote:

> The wmr300 DB has 3.7 million records, and the ecowitt DB has only 530k.
> The system is a VM running on an i5 - the VM is allocated 8GB ram and 4
> cores. It was not stressed at all by weewx V4.
>
> The CPU usage is all in the python, not in the mariadb server.
>
> While the python code is thrashing around achieving nothing, the memory
> allocation for the report script is oscillating from under 200MB to 780MB -
> no swapping, the machine still has 4GB free.
>
> On Tuesday 26 December 2023 at 4:11:59 pm UTC+10 Vince Skahan wrote:
>
>> Cameron several of us have run v5 with very large db of over 10 years
>> data on pi4 or lesser boxes without such issue, so a bit more data from you
>> would be helpful. How big a size are you running? On what hardware?
>>
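For anyone wanting to reproduce the trend described at the top of this message, below is a minimal sketch of the kind of delete-and-rerun loop it implies: time one report run against a sqlite copy of the archive, trim the oldest records, and repeat. It assumes WeeWX v5's `weectl report run` subcommand; the DB path, config path and the 500k trim step are hypothetical placeholders rather than my actual setup, and it measures wall-clock time where the figures above are user+sys CPU.

# Rough sketch only: time one `weectl report run` per pass against a sqlite
# copy of the archive, then trim the oldest records and go again.
# DB_PATH, CONFIG_PATH and TRIM_STEP are made-up values, not a real setup.
import sqlite3
import subprocess
import time

DB_PATH = "/var/lib/weewx/weewx-test.sdb"    # hypothetical sqlite copy of the main DB
CONFIG_PATH = "/etc/weewx/weewx-test.conf"   # hypothetical config pointing at that copy
TRIM_STEP = 500_000                          # how many of the oldest records to drop each pass


def record_count() -> int:
    """Number of rows currently in the weewx archive table."""
    with sqlite3.connect(DB_PATH) as conn:
        return conn.execute("SELECT COUNT(*) FROM archive").fetchone()[0]


def trim_oldest(n: int) -> None:
    """Delete the n oldest archive records (the table is keyed by dateTime)."""
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "DELETE FROM archive WHERE dateTime IN "
            "(SELECT dateTime FROM archive ORDER BY dateTime LIMIT ?)",
            (n,),
        )


while (count := record_count()) > 0:
    start = time.monotonic()
    # Wall-clock time of the whole report run; the trend with record count
    # is what matters, even though the numbers above are user+sys CPU.
    subprocess.run(["weectl", "report", "run", "--config", CONFIG_PATH], check=True)
    print(f"{count} records: report run took {time.monotonic() - start:.1f} s")
    trim_oldest(TRIM_STEP)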
