Interesting. I don't know why the ScalarStats would, ... well, accumulate,
more than any other type. It doesn't hold any back references to anything,
so the garbage collector should have an easy time cleaning up after them.
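For background, in CPython an object that isn't part of a reference cycle is reclaimed by reference counting the moment its last reference goes away; the cyclic gc only matters for cycles. A small sketch (ScalarStatsLike is a hypothetical stand-in, not the real class):

```python
import weakref

class ScalarStatsLike(object):
    """Hypothetical stand-in for weewx.accum.ScalarStats: plain value
    attributes, no references back to other objects."""
    def __init__(self):
        self.min = None
        self.max = None
        self.sum = 0.0
        self.count = 0

obj = ScalarStatsLike()
ref = weakref.ref(obj)

# No reference cycle, so dropping the last strong reference frees the
# object immediately via reference counting -- no gc pass required.
del obj
assert ref() is None
```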

Your instrumentation is useful, but remember that, by default, the gc runs
at unpredictable times (it is triggered by allocation counts, not by a
timer). Weewx explicitly runs it every 3 hours. So it's possible that the
gc just hasn't gotten around to collecting the old ScalarStats.
How about running it directly before each snapshot?
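Something like this in your instrumentation service (a sketch; `take_snapshot` and the print are placeholders for whatever your heapy code already does):

```python
import gc

def take_snapshot(label):
    """Force a full collection first, so only genuinely live objects show
    up in the heap dump (replace the print with your hpy().heap() /
    setrelheap() calls)."""
    n = gc.collect()  # collect all generations; returns count of unreachable objects found
    print("%s: gc reclaimed %d unreachable objects" % (label, n))
    # ... dump the heap here, as your service already does ...

take_snapshot("archive interval")
```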

As an experiment, you can also try deleting the old accumulators in the
engine (patch below), but I don't see why that would make a difference.

diff --git a/bin/weewx/engine.py b/bin/weewx/engine.py
index 0e3777c..8ed66bb 100644
--- a/bin/weewx/engine.py
+++ b/bin/weewx/engine.py
@@ -600,6 +600,9 @@
         # Set the time of the next break loop:
         self.end_archive_delay_ts = self.end_archive_period_ts + self.archive_delay

+        # Delete the old accumulator
+        del self.old_accumulator
+
     def new_archive_record(self, event):
         """Called when a new archive record has arrived.
         Put it in the archive database."""
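One caveat with the patch above: `del self.old_accumulator` raises AttributeError if the attribute hasn't been set yet (e.g. on a pass before any archive record exists). A standalone sketch of the difference (Engine is a hypothetical stand-in):

```python
class Engine(object):
    """Hypothetical stand-in for the weewx engine service object."""
    pass

engine = Engine()
engine.old_accumulator = object()

del engine.old_accumulator          # removes the attribute entirely
assert not hasattr(engine, "old_accumulator")

try:
    del engine.old_accumulator      # deleting again raises AttributeError
except AttributeError:
    pass

# A safer variant just drops the reference instead of deleting the attribute:
engine.old_accumulator = None
```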

-tk


On Sun, Dec 18, 2016 at 6:26 PM, mwall <[email protected]> wrote:

> sorry in advance for the lack of analysis on this, but here are some raw
> data from the heap.
>
> this is the heap after initial startup, at the first archive interval.
> note the total size.
>
> 2016-12-18 20:52:44 heap:
> Partition of a set of 85482 objects. Total size = 10696752 bytes.
>  Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
>      0  42576  50  3643128  34   3643128  34 str
>      1  19582  23  1705192  16   5348320  50 tuple
>      2    686   1   800336   7   6148656  57 dict (no owner)
>      3   5294   6   677632   6   6826288  64 types.CodeType
>      4    212   0   651488   6   7477776  70 dict of module
>      5   5156   6   618720   6   8096496  76 function
>      6    669   1   601032   6   8697528  81 type
>      7    669   1   590520   6   9288048  87 dict of type
>      8    217   0   228184   2   9516232  89 dict of class
>      9    946   1   128976   1   9645208  90 list
> <233 more rows. Type e.g. '_.more' to view.>
>
> what follows are the *differences* in the heap at each subsequent archive
> interval.  these should be heap differences - the service does a
> setrelheap() after it prints the heap.  after the initial hiccup of almost
> 13M, it looks like there is about 124K of memory added to the heap on each
> archive interval.  most of that is in dicts of weewx.accum.ScalarStats.
>
> 2016-12-18 20:55:18 heap:
> Partition of a set of 885 objects. Total size = 278816 bytes.
>  Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
>      0    166  19   173968  62    173968  62 dict of weewx.accum.ScalarStats
>      1     11   1    59912  21    233880  84 dict (no owner)
>      2    166  19    10624   4    244504  88 weewx.accum.ScalarStats
>      3    377  43     9048   3    253552  91 float
>      4      2   0     6736   2    260288  93 weewx.accum.Accum
>      5      6   1     2840   1    263128  94 types.FrameType
>      6     16   2     2240   1    265368  95 list
>      7      2   0     2096   1    267464  96 dict of weewx.accum.VecStats
>      8     26   3     1944   1    269408  97 str
>      9     59   7     1416   1    270824  97 int
> <30 more rows. Type e.g. '_.more' to view.>
> 2016-12-18 21:00:18 heap:
> Partition of a set of 37635 objects. Total size = 12959656 bytes.
>  Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
>      0   5122  14  8857632  68   8857632  68 unicode
>      1  12042  32  1297376  10  10155008  78 str
>      2    489   1   512472   4  10667480  82 dict of Cheetah.Compiler.MethodCompiler
>      3   4419  12   455704   4  11123184  86 list
>      4   4145  11   398176   3  11521360  89 tuple
>      5    665   2   366680   3  11888040  92 dict (no owner)
>      6   6485  17   155640   1  12043680  93 int
>      7     41   0   112088   1  12155768  94 dict of module
>      8    862   2   110336   1  12266104  95 types.CodeType
>      9    872   2   104640   1  12370744  95 function
> <69 more rows. Type e.g. '_.more' to view.>
> 2016-12-18 21:05:18 heap:
> Partition of a set of 635 objects. Total size = 123864 bytes.
>  Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
>      0     83  13    86984  70     86984  70 dict of weewx.accum.ScalarStats
>      1    365  57     8760   7     95744  77 float
>      2      4   1     7264   6    103008  83 dict (no owner)
>      3     83  13     5312   4    108320  87 weewx.accum.ScalarStats
>      4      1   0     3368   3    111688  90 weewx.accum.Accum
>      5      6   1     3176   3    114864  93 types.FrameType
>      6      4   1     1120   1    115984  94 dict of threading._Condition
>      7     46   7     1104   1    117088  95 int
>      8      1   0     1048   1    118136  95 dict of user.forecast.ForecastThread
>      9      1   0     1048   1    119184  96 dict of weewx.accum.VecStats
> <17 more rows. Type e.g. '_.more' to view.>
> 2016-12-18 21:10:18 heap:
> Partition of a set of 633 objects. Total size = 123528 bytes.
>  Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
>      0     83  13    86984  70     86984  70 dict of weewx.accum.ScalarStats
>      1    363  57     8712   7     95696  77 float
>      2      4   1     7264   6    102960  83 dict (no owner)
>      3     83  13     5312   4    108272  88 weewx.accum.ScalarStats
>      4      1   0     3368   3    111640  90 weewx.accum.Accum
>      5      6   1     2888   2    114528  93 types.FrameType
>      6      4   1     1120   1    115648  94 dict of threading._Condition
>      7     46   7     1104   1    116752  95 int
>      8      1   0     1048   1    117800  95 dict of user.forecast.ForecastThread
>      9      1   0     1048   1    118848  96 dict of weewx.accum.VecStats
> <17 more rows. Type e.g. '_.more' to view.>
> 2016-12-18 21:15:18 heap:
> Partition of a set of 667 objects. Total size = 127416 bytes.
>  Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
>      0     83  12    86984  68     86984  68 dict of weewx.accum.ScalarStats
>      1    372  56     8928   7     95912  75 float
>      2      5   1     7544   6    103456  81 dict (no owner)
>      3     83  12     5312   4    108768  85 weewx.accum.ScalarStats
>      4      1   0     3368   3    112136  88 weewx.accum.Accum
>      5      6   1     3328   3    115464  91 types.FrameType
>      6      2   0     2096   2    117560  92 dict of user.forecast.ForecastThread
>      7      6   1     1680   1    119240  94 dict of threading._Condition
>      8     51   8     1224   1    120464  95 int
>      9      1   0     1048   1    121512  95 dict of weewx.accum.VecStats
> <17 more rows. Type e.g. '_.more' to view.>
> 2016-12-18 21:20:18 heap:
> Partition of a set of 648 objects. Total size = 124472 bytes.
>  Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
>      0     83  13    86984  70     86984  70 dict of weewx.accum.ScalarStats
>      1    369  57     8856   7     95840  77 float
>      2      4   1     7264   6    103104  83 dict (no owner)
>      3     83  13     5312   4    108416  87 weewx.accum.ScalarStats
>      4      1   0     3368   3    111784  90 weewx.accum.Accum
>      5      6   1     2920   2    114704  92 types.FrameType
>      6     48   7     1152   1    115856  93 int
>      7      4   1     1120   1    116976  94 dict of threading._Condition
>      8      1   0     1048   1    118024  95 dict of user.forecast.ForecastThread
>      9      1   0     1048   1    119072  96 dict of weewx.accum.VecStats
> <18 more rows. Type e.g. '_.more' to view.>
>
> --
> You received this message because you are subscribed to the Google Groups
> "weewx-user" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.
>
