Garry,
I ran your test extension for a bit in my Ubuntu VM.  Doesn't appear to be 
leaking.
 16:42:24 {'mem_size': 338.53515625, 'mem_rss': 32.953125, 'mem_share': 11.28125}
 16:47:38 {'mem_size': 423.80859375, 'mem_rss': 47.75, 'mem_share': 11.796875}
 16:52:53 {'mem_size': 424.4140625, 'mem_rss': 48.2265625, 'mem_share': 11.796875}
 16:57:24 {'mem_size': 426.46875, 'mem_rss': 50.19921875, 'mem_share': 11.796875}
 17:02:32 {'mem_size': 426.46875, 'mem_rss': 50.21875, 'mem_share': 11.796875}
 17:07:20 {'mem_size': 427.1796875, 'mem_rss': 50.9921875, 'mem_share': 11.796875}
 17:12:27 {'mem_size': 427.1796875, 'mem_rss': 50.9921875, 'mem_share': 11.796875}
 17:17:26 {'mem_size': 427.1796875, 'mem_rss': 50.9921875, 'mem_share': 11.796875}
 17:22:24 {'mem_size': 427.1796875, 'mem_rss': 50.9921875, 'mem_share': 11.796875}
 17:27:19 {'mem_size': 424.6640625, 'mem_rss': 48.71875, 'mem_share': 11.796875}
 17:32:27 {'mem_size': 424.6640625, 'mem_rss': 48.71875, 'mem_share': 11.796875}
 17:37:17 {'mem_size': 424.5859375, 'mem_rss': 48.7109375, 'mem_share': 11.796875}
 17:42:21 {'mem_size': 424.58203125, 'mem_rss': 48.70703125, 'mem_share': 11.796875}
 17:47:24 {'mem_size': 424.984375, 'mem_rss': 49.140625, 'mem_share': 11.796875}
 17:52:21 {'mem_size': 424.984375, 'mem_rss': 49.140625, 'mem_share': 11.796875}
 17:57:15 {'mem_size': 425.05078125, 'mem_rss': 49.26953125, 'mem_share': 11.796875}
 18:02:11 {'mem_size': 425.05078125, 'mem_rss': 49.26953125, 'mem_share': 11.796875}
 18:07:13 {'mem_size': 425.1171875, 'mem_rss': 49.3359375, 'mem_share': 11.796875}
 18:12:14 {'mem_size': 425.1171875, 'mem_rss': 49.3359375, 'mem_share': 11.796875}
 18:17:12 {'mem_size': 425.21875, 'mem_rss': 49.4375, 'mem_share': 11.796875}
 18:22:10 {'mem_size': 425.09765625, 'mem_rss': 49.31640625, 'mem_share': 11.796875}
 18:27:13 {'mem_size': 425.0390625, 'mem_rss': 49.2578125, 'mem_share': 11.796875}
 18:32:15 {'mem_size': 425.0390625, 'mem_rss': 49.2578125, 'mem_share': 11.796875}
 18:37:15 {'mem_size': 425.03125, 'mem_rss': 49.25, 'mem_share': 11.796875}
 18:42:24 {'mem_size': 425.01953125, 'mem_rss': 49.2578125, 'mem_share': 11.796875}
 18:47:14 {'mem_size': 425.0078125, 'mem_rss': 49.24609375, 'mem_share': 11.796875}
 18:52:13 {'mem_size': 425.0078125, 'mem_rss': 49.24609375, 'mem_share': 11.796875}
 18:57:15 {'mem_size': 425.61328125, 'mem_rss': 49.8515625, 'mem_share': 11.796875}
 19:02:19 {'mem_size': 425.0078125, 'mem_rss': 49.2734375, 'mem_share': 11.796875}
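
For reference, those figures are in the same format Vince's mem extension 
reports.  A minimal sketch of how numbers like mem_size / mem_rss / mem_share 
can be sampled on Linux (assuming /proc/self/statm; the real extension may do 
it differently):

import os, resource

PAGE_MB = resource.getpagesize() / (1024.0 * 1024.0)

def memory_mb(pid=None):
    # First three fields of /proc/<pid>/statm are total program size,
    # resident set size and shared pages; convert pages to megabytes.
    pid = pid or os.getpid()
    with open("/proc/%d/statm" % pid) as f:
        size, resident, shared = (int(x) for x in f.read().split()[:3])
    return {'mem_size': size * PAGE_MB,
            'mem_rss': resident * PAGE_MB,
            'mem_share': shared * PAGE_MB}

print(memory_mb())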

When I have some time, I'll install it on my test Pi along with the 
Belchertown skin and Vince's mem extension and see if anything shows up.
Yes, I enjoy a good memory 'puzzle'.
rich
On Saturday, 9 January 2021 at 16:52:08 UTC-5 garrya...@gmail.com wrote:

> Many, many thanks!
>
> Before I wrote the service to investigate the problem, I wrote a 
> standalone program to eliminate all WeeWX components.  It did NOT exhibit 
> the problem.
>
> The problem only seems to show up under WeeWX/Belchertown *and* if the 
> include file changes / is recreated.
>
> Anyway, I really appreciate you looking into my problem.  I’m actively 
> working on a skin-independent service to read many RSS/Atom feeds and will 
> utilize your suggestions above.
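>
> (A minimal sketch of the shape such a service could take, assuming the 
> standard WeeWX service API; the class name WxFeeds and the body of 
> new_archive_record are placeholders:)
>
> import weewx
> from weewx.engine import StdService
>
> class WxFeeds(StdService):
>     def __init__(self, engine, config_dict):
>         super(WxFeeds, self).__init__(engine, config_dict)
>         # Do the work once per archive record rather than per LOOP packet.
>         self.bind(weewx.NEW_ARCHIVE_RECORD, self.new_archive_record)
>
>     def new_archive_record(self, event):
>         # Placeholder: fetch the RSS/Atom feeds here and write whatever
>         # include file(s) the skin expects; event.record is the new record.
>         pass
>
> (The class would then be added to one of the service lists under [Engine] 
> [[Services]] in weewx.conf.)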
>
> Regards,
>
> Garry Lockyer
> C: +1.250.689.0686
> E: ga...@lockyer.ca
>
>
> On Jan 9, 2021, at 12:39, bell...@gmail.com <bell...@gmail.com> wrote:
>
> 
>
> Garry,
> I took your WxFeedsMemory.py and made some modifications.
> First, I changed the logging to simple print calls.  This was just a 
> convenience so I could easily see what was going on.
> Second, I stole some code from Vince’s mem extension, 
> https://github.com/vinceskahan/vds-weewx-v3-mem-extension. I did this 
> because I am used to the data it returns. (Note, I highly recommend this 
> when debugging memory.)
> Third, I added code so I could call your extension repeatedly in a loop.
>
> When I called new_archive_record 100 times there was a slight increase in 
> memory, but nothing alarming.  It was slow, though, at approximately 2 
> minutes per call.
>
> So, I commented out the flush and fsync calls and called 
> new_archive_record 10,000 times. The slight increase seemed to stop around 
> 6,000 calls.
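>
> (A simplified, self-contained sketch of that kind of driver loop; the stub 
> class below stands in for the real WxFeedsMemory service, and the /proc 
> reader is the same idea as the mem figures above:)
>
> import resource, time
>
> PAGE_MB = resource.getpagesize() / (1024.0 * 1024.0)
>
> def rss_mb():
>     # resident set size in MB, read from /proc/self/statm
>     with open("/proc/self/statm") as f:
>         return int(f.read().split()[1]) * PAGE_MB
>
> class StubService(object):
>     # stand-in for the real service under test
>     def new_archive_record(self, event):
>         pass
>
> svc = StubService()
> for i in range(10000):
>     svc.new_archive_record({'dateTime': int(time.time())})
>     if i % 500 == 0:
>         print(i, round(rss_mb(), 2))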
>
> I’m wondering if the amount of time it takes to run is causing problems 
> when run as a service. I plan to uncomment the code and run it as an 
> extension. I’ll let you know what I see.
> rich
>
> On Thursday, 7 January 2021 at 13:00:40 UTC-5 garrya...@gmail.com wrote:
>
>> After many hours of troubleshooting and testing, I think I have an idea 
>> what's happening.
>>
>> Background: I want to use weewx with the Belchertown skin and my 
>> extension that reads numerous RSS/Atom feeds  on a Pi Zero, with either a 
>> Davis Pro V2 or Weatherflow weather station.  I will eventually have four 
>> weather stations in my area.  I thought I had things working reasonably 
>> well, so I deployed one station.  It stopped working after about 3 days.  I 
>> can't access that system remotely, so I built a very similar system (the 
>> next one to deploy) and it also failed due to lack of memory.  Weewx memory 
>> usage on my development system also increases over time - it can grow to 
>> over 1 GB in a few hours!
>>
>> My extension was an extension to Belchertown based on the METAR 
>> extension.  The extension created Belchertown index hook include files each 
>> archive cycle.  While researching this problem I re-read the weewx 
>> customization guide and noted that extensions should NOT be dependent on 
>> other extensions, so I decided to re-write my extension as a service (which 
>> looks like a much cleaner solution) without any dependencies on Belchertown 
>> (other than include file names).  All I've done so far is create a service 
>> (WxFeedsMemoryTest.py) to test weewx/Belchertown memory consumption.
>>
>> The Problem: weewx/Belchertown memory usage increases over time.  It 
>> starts out at about 45 MB and grows at about 3 MB per archive period/cycle 
>> (using my test case).  A 512 MB Pi will exhaust memory within a few days.
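>>
>> (For scale: with WeeWX's default 5-minute archive interval that is 12 
>> cycles an hour, so roughly 36 MB of growth per hour, or over 500 MB within 
>> a day on top of the ~45 MB baseline; a longer archive interval would 
>> stretch that out proportionally.)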
>>
>> It appears that the problem is associated with the creation of the 
>> Belchertown include files while weewx/Belchertown is running:
>>
>> - if the include file is 'static' as in not (re)created while 
>> weewx/Belchertown is running, memory usage is static - it does not grow 
>> beyond about 50 MB.
>>
>> - if the include file is 'dynamic' as in (re)created while 
>> weewx/Belchertown is running, memory usage increases.
>>   - if the include file is created once, and becomes 'static', memory 
>> usage increases and then stabilizes.
>>   - if the include file is recreated continuously (such as on each 
>> archive cycle), memory usage increases each cycle.
>>
>> It does not appear to matter if the include file is created directly, or 
>> created as a temporary file and then copied or renamed.
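>>
>> (For anyone reproducing this, the 'temporary file then rename' variant is 
>> along the lines of the sketch below; the include path is only an example 
>> and will differ per install:)
>>
>> import os, tempfile
>>
>> INCLUDE = "/home/weewx/skins/Belchertown/index_hook_after_station_info.inc"
>>
>> def write_include(html):
>>     # Write to a temp file in the same directory, then rename it over
>>     # the real include file so readers never see a partial file.
>>     fd, tmp = tempfile.mkstemp(dir=os.path.dirname(INCLUDE))
>>     with os.fdopen(fd, "w") as f:
>>         f.write(html)
>>         f.flush()
>>         os.fsync(f.fileno())
>>     os.rename(tmp, INCLUDE)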
>>
>> The attached service (WxFeedsMemoryTest.py) can be used to demonstrate 
>> the problem.  Please see the installation and usage instructions within 
>> WxFeedsMemoryTest.py itself.
>>
>> I'm going to continue to work on moving my extension from "an extension 
>> to an extension" to a service in the hope that this memory problem can be 
>> resolved.
>>
>> With apologies in advance if I'm doing something to cause the problem, 
>> please review, advise, and let me know what I can do to avoid it.
>>
>> Regards,
>>
>> Garry
>> On Thursday, December 31, 2020 at 7:05:15 PM UTC-8 garrya...@gmail.com 
>> wrote:
>>
>>> Got a MemoryError about 9 hours after the restart.  I have removed cmon by 
>>> commenting out any mention of it in weewx.conf and restarted.
>>>
>>> Regards,
>>>
>>> Garry Lockyer
>>> Former DEC Product Support Engineer :^)
>>> Kepner-Tregoe Trained :^))
>>> C: +1.250.689.0686
>>> E: ga...@lockyer.ca
>>>
>>>
>>> On Dec 31, 2020, at 11:44, vince <vince...@gmail.com> wrote:
>>>
>>> On Thursday, December 31, 2020 at 11:39:39 AM UTC-8 garrya...@gmail.com 
>>> wrote:
>>>
>>>> Re: editing the Belchertown skin, nope, haven't touched it, *other than 
>>>> interfacing with it via the include files (as generated by my 
>>>> BelchertownWxFeeds extension)*.  When all the endpoints (for testing) 
>>>> are enabled, index.html is about 1.8 MB, so perhaps that’s causing the 
>>>> problem.  I can easily reduce / eliminate endpoints and prefer to do that 
>>>> before eliminating the Belchertown skin.
>>>>
>>>>
>>> There it is.  You touched it :-)
>>>
>>> Usual debugging rules apply.  Reset it to a baseline unmodified config. 
>>> Add in changes one-by-one.  If it goes sideways, revert to the last known 
>>> good and reverify that it stays good.