Jonas Borgström wrote:
> Christian Boos wrote:
>   
>> Jonas Borgström wrote:
>>     
>>> ... Four consecutive
>>> "/report/1" requests gives the following output:
>>>
>>> before render(): 51888 kB
>>> after render(): 209456 kB
>>> before render(): 208936 kB
>>> after render(): 221792 kB
>>> before render(): 222040 kB
>>> after render(): 222128 kB
>>> before render(): 222128 kB
>>> after render(): 222128 kB
>>>
>>>       
>>> And since Python 2.5.2 never seems to return "free" and
>>> garbage-collected memory back to the operating system, this memory is
>>> bound to this particular process until it is restarted.
>>>
>>>       
>> No, Python 2.5.2 is able to do that. If instead of looking only at
>> before/after memory usage you monitor the memory used while processing
>> the request, you'll see that there's a peak usage (which I estimate at
>> around 350MB, given your numbers above are close to mine). So that's
>> 130MB returned to the system. That's not enough, I agree, but it's
>> probably not because Python can't give memory back to the system, rather
>> because there are some leaks which prevent the memory from being freed.
>>     
>
> Are you sure? If I monitor the process after it has been idle for
> several minutes, the command "top" shows the resident memory usage as
> 220+MB. So as far as I can tell, nothing is returned to the OS.
>
> Which OS and Python version did you test this on?
>   

Yeah, I didn't say the memory was released after some time; I said that
the memory usage you witnessed was probably more like:

...

before render(): 209MB
*while rendering:* 350MB
after render(): 209MB

(For that, I use a very advanced memory monitoring tool :-)

monitor() { while true; do clear; grep Vm /proc/$1/status; sleep 1; done; }

I tested on 64-bit Linux (SLES9) with Python 2.5.1.
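The peak-vs-resident distinction can also be observed from inside Python itself. This is just an illustrative modern sketch (it uses the stdlib `resource` module, which is Unix-only and wasn't how either of us measured above; `ru_maxrss` is in kilobytes on Linux, bytes on macOS):

```python
import resource

def peak_rss():
    # Peak resident set size so far; this counter only ever grows,
    # even after the memory has been freed and returned to the OS.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

baseline = peak_rss()
blob = ["x" * 1024 for _ in range(50000)]   # allocate roughly 50 MB
peak = peak_rss()
del blob                                    # free it again
after = peak_rss()

# after >= peak >= baseline: the peak stays visible even though
# "top" may show the resident usage dropping back down.
print(baseline, peak, after)
```

This is why before/after snapshots around render() understate what the process actually needed while serializing the template.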


>>> This is bad news for people running multiple long-running processes,
>>> since after a while all processes will have allocated enough RAM for
>>> the most expensive page.
>>>
>>> So far I've been unable to find a way to minimize the memory usage of
>>> Genshi's serialization step. But unless we find a way to do that, I'm
>>> afraid we might have to consider paging or some other technique to
>>> avoid these expensive pages.
>>>
>>>       
>> There is a good patch for report paging (#6127 / #216), so I'll move
>> that up in the queue.
>> But on the other hand, there might well be a report specific issue (or a
>> Genshi issue uncovered by the report_view.html template), as I also have
>> very big numbers (memory and time) for the reports in my tests.
>>     
>
> For what it's worth, /query and /timeline also have large memory
> footprints for larger result sets.
>
>   

Point taken. There are also patches for timeline paging, but I'm not 
sure about them yet.
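The point of paging, as discussed above, is to bound per-request memory regardless of how large the full result set is. A minimal sketch of the idea (the `paged` helper and its parameters are illustrative only; the real patches push the equivalent LIMIT/OFFSET down into the SQL query so the full result set is never materialized):

```python
def paged(rows, page, page_size=100):
    # Serve one bounded slice per request, so the template only
    # ever serializes page_size rows at a time.
    start = (page - 1) * page_size
    return rows[start:start + page_size]

rows = list(range(1050))    # stand-in for a large report result
first = paged(rows, 1)      # rows 0..99
last = paged(rows, 11)      # the final, partial page: 50 rows
```

With something like this in place, the serialization peak depends on page_size, not on the size of the most expensive report.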

-- Christian

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "Trac 
Development" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/trac-dev?hl=en
-~----------~----~----~----~------~----~------~--~---