On Jul 22, 2009, at 1:48 PM, Shane Caraveo wrote:

>
> On 7/22/09 6:08 AM, Christian Boos wrote:
>> Hello Shane,
>>
>> First, great job at looking into the bowels of Trac ;-)
>> Then, as a general comment, I see that some of your suggestions actually
>> go against some of the changes I did in #6614, so not surprisingly we
>> have a trade-off of memory vs. speed. In some environments where memory
>> is strictly restricted, we have no choice but to optimize for memory, to
>> the detriment of speed. But in most environments, the extra memory
>> needed to achieve decent speed might be affordable. So I see here the
>> opportunity for a configuration setting, something like [trac]
>> favor_speed_over_memory, defaulting to true, which people with limited
>> resources could turn off.
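
(If we go that route, declaring the option itself would be trivial. A rough
sketch, assuming the usual trac.config BoolOption pattern on the request
dispatcher; the option name is just Christian's suggestion, nothing like
this exists yet:)

    # Hypothetical sketch only, not actual Trac code.
    from trac.core import Component
    from trac.config import BoolOption

    class RequestDispatcher(Component):
        # Only the new option is shown; the real dispatcher has much more.
        favor_speed_over_memory = BoolOption('trac', 'favor_speed_over_memory',
            'true',
            """Skip the per-request gc.collect() to trade memory for speed.
            Turn this off on memory-constrained hosts.""")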
>
> For the gc.collect item, I think it should be a configurable background
> thread rather than happening in the request loop. I've been meaning to
> explore the memory use of the source browsing and timeline so I can
> understand what is happening there, but haven't got around to it. For
> the encode loop, I was hoping that sending the output straight through
> would be a gain; I think there is still some opportunity around that
> idea.
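
A background collector would be easy to prototype. Just a sketch, assuming
a daemon thread started once per process; the 60-second interval and the
function name are made up:

    import gc
    import threading
    import time

    def start_gc_collector(interval=60.0):
        """Run gc.collect() periodically instead of once per request."""
        def _loop():
            while True:
                time.sleep(interval)
                gc.collect()
        t = threading.Thread(target=_loop, name='trac-gc')
        t.setDaemon(True)  # don't keep the process alive at shutdown
        t.start()
        return t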
>
> Another thought, again from a background thread: monitor memory usage
> and call gc.collect once some threshold is crossed. Low-memory
> environments will just end up doing this more often.
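
Roughly like this, I suppose (again only a sketch: the threshold, the
Linux-only /proc parsing and the polling interval are all assumptions):

    import gc
    import os
    import threading
    import time

    def _current_rss_kb():
        # Linux-specific: parse VmRSS out of /proc/<pid>/status.
        for line in open('/proc/%d/status' % os.getpid()):
            if line.startswith('VmRSS:'):
                return int(line.split()[1])
        return 0

    def start_memory_watchdog(max_rss_kb=200 * 1024, interval=10.0):
        """Only call gc.collect() once resident memory crosses a threshold."""
        def _watch():
            while True:
                time.sleep(interval)
                if _current_rss_kb() > max_rss_kb:
                    gc.collect()
        t = threading.Thread(target=_watch, name='trac-memory-watchdog')
        t.setDaemon(True)
        t.start()
        return t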
>
[snip]
>>> == General ==
>>>
>>> In general there are only a couple of big wins. For me it was removing
>>> gc.collect (see trac.main) and the timing and estimation plugin.
>>> Everything else was small potatoes in comparison (10ms here, 5ms
>>> there), but together they added up to a good 40-50ms per request.
>>> Think of it this way: at 100% CPU, 50ms/request limits you to a
>>> maximum of 20 requests/second/cpu. Every ms counts if we want decent
>>> throughput. I'd like to get under 30ms.
>>>
>>
>> The gc on every request is the typical memory vs. speed trade-off. If
>> it can be shown that, despite not doing gc after every request, the
>> memory usage stays within bounds, then I think we can make that
>> optional. As you said elsewhere, it's quite possible that this explicit
>> gc simply hides a real memory leak that can be avoided by other means
>> (like fixing the db pool issue with PostgreSQL).
>
> out of sight, out of mind ;)

Why should Trac even call the gc manually? The only answer would be to
free db pool connections, and that is better fixed on its own (though I
don't know if that is possible). Leaking memory is not going to be fixed
by calling gc.collect. The garbage collector already runs collections
automatically, and its parameters can be set in a trac.wsgi script if
some user really needs to tune them.
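
For example, a deployment that really does need more aggressive collection
can tune it at the WSGI entry point instead of inside Trac. A sketch, with
arbitrary thresholds:

    # In trac.wsgi, before the application object is created:
    import gc

    # The defaults are (700, 10, 10); lowering them makes the collector
    # run more often.  Arbitrary values, tune to the actual workload.
    gc.set_threshold(400, 5, 5)

    import trac.web.main
    application = trac.web.main.dispatch_request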

I would really like to understand why Trac calls it on every request.

--
Leonardo Santagada
santagada at gmail.com

