* Matthew Dillon <[EMAIL PROTECTED]> [020223 12:51] wrote:
> :Here is the most up-to-date version of pgrp/session lock (at Change 6700):
> :
> :http://people.FreeBSD.org/~tanimura/patches/pgrp10.diff.gz
> :
> :I would like to commit this next Sunday. Otherwise, my patch
> :will conflict with other patches, especially the tty work.
> :
> :-- 
>     Do you have any plans to get pgdelete() out from under Giant?  That
>     would allow leavepgrp(), doenterpgrp(), enterpgrp(), enterthispgrp(),
>     setsid() (mostly) to be taken out from under Giant, and perhaps a few
>     others.
>     I was thinking of simply having a free list of sessions and process
>     groups, locked by PGRPSESS_XLOCK().  pgdelete() would then not have
>     to call FREE(), and setsid() would almost always be able to pull a
>     new structure off the appropriate free list and thus not have to
>     obtain Giant for the MALLOC.
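
A minimal userland sketch of the free-list idea quoted above (the pgrp
layout, field names, and pgrp_cache()/pgrp_alloc() helpers are all
illustrative, not the real kernel code; the caller is assumed to hold
PGRPSESS_XLOCK() around both operations):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of the per-subsystem free list: pgdelete() pushes a retired
 * struct pgrp onto the list instead of calling FREE(), and setsid()
 * pops from it, avoiding Giant for the common-case MALLOC.  The lock
 * discipline is assumed, not shown. */

struct pgrp {
    struct pgrp *pg_freelink;   /* free-list linkage, valid when cached */
    int          pg_id;
};

static struct pgrp *pgrp_freelist = NULL;

/* pgdelete() path: cache the structure instead of freeing it. */
static void
pgrp_cache(struct pgrp *pg)
{
    pg->pg_freelink = pgrp_freelist;
    pgrp_freelist = pg;
}

/* setsid() path: reuse a cached structure; only the fallback malloc()
 * (in the kernel, the MALLOC) would still need Giant. */
static struct pgrp *
pgrp_alloc(void)
{
    struct pgrp *pg = pgrp_freelist;

    if (pg != NULL) {
        pgrp_freelist = pg->pg_freelink;
        return (pg);
    }
    return (malloc(sizeof(*pg)));
}
```

This is exactly the shape Alfred objects to below: each subsystem grows
its own private cache and the code is duplicated everywhere.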

All these per-subsystem free lists make me nervous, both in added
complexity and in duplicated code...

Ok, instead of keeping all these per-subsystem free lists, here's what
we do:

In kern_malloc.c's free(), right at the point of the
  if (size > MAXALLOCSAVE) check, we test whether we hold Giant:
    if we do not, we simply queue the memory;
    if we do, we call into kmem_free() with all the queued memory as well.

This ought to solve the issue without making us keep all these
per-cpu caches.

By the way, since MAXALLOCSAVE is a multiple of PAGE_SIZE, we really
don't have to worry about this path when freeing small structures,
although it does push some ugliness onto malloc(9) consumers.
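
A toy userland model of that deferred-free scheme (have_giant,
kmem_free_mock(), big_free(), and the MAXALLOCSAVE value are all
stand-ins for the kernel pieces, not real APIs).  Since any block
reaching this path is larger than MAXALLOCSAVE, the queue linkage can
live inside the freed block itself, so the free path allocates nothing:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

#define PAGE_SIZE    4096
#define MAXALLOCSAVE (4 * PAGE_SIZE)    /* illustrative value only */

/* Queue header stored inside the freed block itself: the block is
 * larger than MAXALLOCSAVE, so the header always fits. */
struct deferred {
    struct deferred *next;
    size_t           size;
};

static struct deferred *free_queue = NULL;
static bool have_giant = false;     /* stand-in for Giant ownership */
static int  kmem_frees = 0;         /* counts actual releases */

/* Userland stand-in for kmem_free(). */
static void
kmem_free_mock(void *addr, size_t size)
{
    (void)size;
    free(addr);
    kmem_frees++;
}

static void
drain_free_queue(void)
{
    while (free_queue != NULL) {
        struct deferred *d = free_queue;

        free_queue = d->next;
        kmem_free_mock(d, d->size);
    }
}

/* The size > MAXALLOCSAVE branch of free(), per the proposal above. */
static void
big_free(void *addr, size_t size)
{
    assert(size > MAXALLOCSAVE);
    if (!have_giant) {
        /* No Giant: just queue the memory for later. */
        struct deferred *d = addr;

        d->size = size;
        d->next = free_queue;
        free_queue = d;
    } else {
        /* Giant held: flush the backlog, then release this block. */
        drain_free_queue();
        kmem_free_mock(addr, size);
    }
}
```

One global queue (with whatever lock protects it) replaces all the
per-subsystem caches, which is the whole point of the suggestion.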

Can you please consider that instead of continuing down this path
of per-subsystem caches?

-Alfred Perlstein [[EMAIL PROTECTED]]
'Instead of asking why a piece of software is using "1970s technology,"
 start asking why software is ignoring 30 years of accumulated wisdom.'
Tax deductible donations for FreeBSD: http://www.freebsdfoundation.org/
