Alfred Perlstein wrote:
> * Matthew Dillon <[EMAIL PROTECTED]> [020223 14:43] wrote:
> >     This is approximately what I am thinking.  Note that this gives us the
> >     flexibility to create a larger infrastructure around the bucket cache,
> >     such as implement per-cpu caches and so on and so forth.  What I have
> >     here is the minimal implementation.
> 
> I strongly object to this implementation right now, please do not
> commit it.  I already explained to you how to make the problem
> go away but instead you insist on some complex api that pins
> memory down like the damn zone allocator.  It's not needed, so
> please don't do it.

Actually, the zone allocator is not far off, I think.

Imagine if the entire possible KVA space (real RAM + swap) had
its PTEs preallocated.  Allocations could then treat it as
anonymous memory, with no separate mapping step required, and
all allocations would be interrupt safe by default, without
having to special case the code one way or the other.

This seems, to me, to be the right idea.
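
(A toy, userland sketch of that idea follows -- made-up names,
not the FreeBSD pmap interface.  The point it models: once the
table covering all of KVA exists up front, the allocator only
writes entries that are already there, so nothing in the
allocation path ever has to grow the map, which is what would
make it interrupt safe.)

    /*
     * Toy model, not kernel code: the "page table" for all of
     * KVA is allocated once, so kva_alloc_page() only writes an
     * existing entry -- it never allocates a page-table page.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u
    #define KVA_PAGES 1024u           /* pretend KVA is 4 MB */

    static uint32_t pte[KVA_PAGES];   /* preallocated, covers all KVA */

    static long
    kva_alloc_page(uint32_t pa)
    {
        for (uint32_t i = 0; i < KVA_PAGES; i++) {
            if (pte[i] == 0) {
                pte[i] = pa | 1u;             /* mark present */
                return ((long)i * PAGE_SIZE); /* virtual offset */
            }
        }
        return (-1);                          /* KVA exhausted */
    }

    int
    main(void)
    {
        printf("va offset = %ld\n", kva_alloc_page(0x100000u));
        return (0);
    }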

The only issue left is that the maps take real memory that is
wired down.  This raises the case of adding swap where
swap + RAM << KVA && swap + RAM + new_swap <= KVA, which would
mean new mappings being required when the swap is added (via
swapon).
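
(A sketch of that swapon-time arithmetic, again with hypothetical
names, assuming PTEs are kept wired only for the current
RAM + swap: adding swap that still fits under KVA means wiring
more page-table pages at that point, and failing if the new
total would exceed KVA.)

    /*
     * Hypothetical sketch: new mappings are only needed when the
     * added swap pushes total backing store past what the wired
     * PTEs already cover; exceeding KVA itself is an error.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define PTES_PER_PAGE 1024u  /* 4-byte PTEs in a 4 KB page */

    /* Extra page-table pages to wire, or -1 if it exceeds KVA. */
    static long
    swapon_pt_pages(uint64_t covered_pages, uint64_t new_total_pages,
                    uint64_t kva_pages)
    {
        if (new_total_pages > kva_pages)
            return (-1);
        if (new_total_pages <= covered_pages)
            return (0);
        return ((long)((new_total_pages - covered_pages +
            PTES_PER_PAGE - 1) / PTES_PER_PAGE));
    }

    int
    main(void)
    {
        /* e.g. 256 MB covered, growing to 512 MB, 1 GB of KVA */
        printf("%ld pt pages to wire\n",
            swapon_pt_pages(65536, 131072, 262144));
        return (0);
    }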

Not that painful, but the page mapping overhead does imply
roughly a 1:1000 ratio of wired real RAM to mapped virtual
space.  4M pages would cover some of that problem... but making
allocations swappable is often desirable, even in the kernel, so
you would need to special case those mappings... and 4M and 4K
pages play badly together, unless you know what you are doing
and you know the undocumented bugs (cf. the recent AMD AGP
thing).
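
(Where the ~1:1000 figure comes from, assuming classic i386
paging: one 4-byte PTE maps one 4 KB page, so the wired
page-table overhead is 4/4096 = 1/1024 of the space mapped,
while a 4 MB superpage is mapped by a single page-directory
entry and needs no PTEs at all.)

    /* The arithmetic behind the ~1:1000 overhead figure. */
    #include <stdio.h>

    int
    main(void)
    {
        unsigned pte_size = 4, page_size = 4096;
        unsigned ptes_per_pt_page = page_size / pte_size; /* 1024 */
        unsigned pages_in_4g = 1u << 20;        /* 4 GB / 4 KB */

        printf("overhead: 1/%u of mapped space\n", ptes_per_pt_page);
        printf("page tables for 4 GB: %u pages (%u KB)\n",
            pages_in_4g / ptes_per_pt_page,
            pages_in_4g / ptes_per_pt_page * page_size / 1024);
        return (0);
    }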

-- Terry
