I ran a test: I disabled the SpamAssassin integration and the heap still
grew steadily, so I do not believe it's SA related:

[EMAIL PROTECTED]:mailman-2.1.9 10:51pm 68 # pmap 22804 | egrep heap
08175000   14060K rwx--    [ heap ]
[EMAIL PROTECTED]:mailman-2.1.9 10:51pm 69 # pmap 22804 | egrep heap
08175000   16620K rwx--    [ heap ]
[EMAIL PROTECTED]:mailman-2.1.9 10:52pm 70 # pmap 22804 | egrep heap
08175000   16620K rwx--    [ heap ]
[EMAIL PROTECTED]:mailman-2.1.9 10:53pm 75 # pmap 22804 | egrep heap
08175000   18924K rwx--    [ heap ]
[EMAIL PROTECTED]:mailman-2.1.9 10:54pm 81 # pmap 22804 | egrep heap
08175000   19692K rwx--    [ heap ]
[EMAIL PROTECTED]:mailman-2.1.9 10:55pm 82 # pmap 22804 | egrep heap
08175000   19692K rwx--    [ heap ]
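
Rather than running pmap by hand, a small sampler could log the heap size
over time (untested sketch; assumes Solaris pmap output in the form shown
above, and the Python 2.x that Mailman 2.1 runs under):

  # heap_sample.py - log a process's heap segment size once a minute
  # so growth can be graphed.  Usage: python heap_sample.py <pid>
  import os, sys, time

  pid = sys.argv[1]
  while True:
      for line in os.popen('pmap %s' % pid):
          if '[ heap ]' in line:
              # second column is the segment size, e.g. "14060K"
              print time.strftime('%H:%M:%S'), line.split()[1]
      time.sleep(60)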

I'm trying to find a way to look at the contents of the heap, or at least
to limit its growth.
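
Since the qrunners are Python, one way to see what is actually on the heap
is a gc-based object census run inside the process, e.g. from a runner's
_doperiodic() hook.  Untested sketch; diffing successive snapshots should
show which object types are accumulating:

  # object_census.py - count live objects by type, largest counts first
  import gc

  def census(top=20):
      counts = {}
      for obj in gc.get_objects():
          name = type(obj).__name__
          counts[name] = counts.get(name, 0) + 1
      pairs = sorted(counts.items(), key=lambda p: p[1], reverse=True)
      for name, n in pairs[:top]:
          print '%8d  %s' % (n, name)
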
Or, failing that, is there a way to expire & restart the mailman qrunner
processes, analogous to Apache httpd's child-process recycling
(MaxRequestsPerChild), which is designed to mitigate this kind of resource
growth over time?
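
Failing a built-in equivalent, the blunt approach would be a cron job that
runs "bin/mailmanctl restart" every night; a slightly smarter watchdog
could restart only once the heap passes a threshold (untested sketch; the
mailmanctl path and the limit are guesses for this install):

  # qrunner_watchdog.py - restart the qrunners when a watched process's
  # heap segment exceeds LIMIT_KB.  Usage: python qrunner_watchdog.py <pid>
  import os, sys

  MAILMANCTL = '/usr/local/mailman/bin/mailmanctl'   # assumed path
  LIMIT_KB = 262144                                  # 256MB, arbitrary

  def heap_kb(pid):
      for line in os.popen('pmap %s' % pid):
          if '[ heap ]' in line:
              return int(line.split()[1].rstrip('K'))
      return 0

  if heap_kb(sys.argv[1]) > LIMIT_KB:
      os.system('%s restart' % MAILMANCTL)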

thanks

On 7/1/08 9:58 PM, "Brad Knowles" <[EMAIL PROTECTED]> wrote:

> On 7/1/08, Mark Sapiro wrote:
> 
>>  In this snapshot
>> 
>>    PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
>>  10123 mailman    1  59    0  314M  311M sleep    1:57  0.02% python
>>  10131 mailman    1  59    0  310M  307M sleep    1:35  0.01% python
>>  10124 mailman    1  59    0  309M   78M sleep    0:45  0.10% python
>>  10134 mailman    1  59    0  307M   81M sleep    1:27  0.01% python
>>  10125 mailman    1  59    0  307M   79M sleep    0:42  0.01% python
>>  10133 mailman    1  59    0   44M   41M sleep    0:14  0.01% python
>>  10122 mailman    1  59    0   34M   30M sleep    0:43  0.39% python
>>  10127 mailman    1  59    0   31M   27M sleep    0:40  0.26% python
>>  10130 mailman    1  59    0   30M   26M sleep    0:15  0.03% python
>>  10129 mailman    1  59    0   28M   24M sleep    0:19  0.10% python
>>  10126 mailman    1  59    0   28M   25M sleep    1:07  0.59% python
>>  10132 mailman    1  59    0   27M   24M sleep    1:00  0.46% python
>>  10128 mailman    1  59    0   27M   24M sleep    0:16  0.01% python
>>  10151 mailman    1  59    0 9516K 3852K sleep    0:05  0.01% python
>>  10150 mailman    1  59    0 9500K 3764K sleep    0:00  0.00% python
>> 
>>  Which processes correspond to which runners? And why are the two
>>  processes that have apparently done the least the ones that have grown
>>  the most?
> 
> In contrast, the mail server for python.org shows the following:
> 
> top - 06:54:48 up 29 days,  9:09,  4 users,  load average: 1.05, 1.08, 0.95
> Tasks: 151 total,   1 running, 149 sleeping,   0 stopped,   1 zombie
> Cpu(s):   0.2% user,   1.1% system,   0.0% nice,  98.7% idle
> 
>    PID USER      PR  VIRT  NI  RES  SHR S %CPU    TIME+  %MEM COMMAND
>   1040 mailman    9 42960   0  41m  12m S    0 693:59.44  2.1 ArchRunner:0:1 -s
>   1041 mailman    9 22876   0  20m 7488 S    0 478:18.62  1.0 BounceRunner:0:1
>   1045 mailman    9 20412   0  19m  10m S    0   3031:12  0.9 OutgoingRunner:0:
>   1043 mailman    9 20476   0  18m 4968 S    0 127:02.62  0.9 IncomingRunner:0:
>   1042 mailman    9 18564   0  17m 7316 S    0  11:34.14  0.9 CommandRunner:0:1
>   1046 mailman   11 17276   0  15m  10m S    1  66:32.16  0.8 VirginRunner:0:1
>   1044 mailman    9 11568   0 9964 5184 S    0  12:34.04  0.5 NewsRunner:0:1 -s
> 
> And those are the only Python-related processes that show up in the
> first twenty lines.
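
(Re Mark's question above: my Solaris top output only shows "python" in
COMMAND, but pargs exposes the full argument list, so something like this
untested sketch could print the PID-to-runner mapping - it assumes the
runners were started with qrunner's --runner= switch, as mailmanctl does:)

  # runner_map.py - map mailman qrunner PIDs to runner names on Solaris
  import os

  def runner_name(pid):
      # pargs prints lines like "argv[2]: --runner=ArchRunner:0:1"
      for line in os.popen('pargs %s' % pid):
          if '--runner=' in line:
              return line.split('--runner=')[1].strip()
      return '?'

  for pid in os.popen('ps -u mailman -o pid='):
      pid = pid.strip()
      print pid, runner_name(pid)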

-- 
Fletcher Cocquyt
Senior Systems Administrator
Information Resources and Technology (IRT)
Stanford University School of Medicine

Email: [EMAIL PROTECTED]
Phone: (650) 724-7485

