Steven McElwee writes:
> I'd just like to add that a sparc5's memory management unit only supports
> a mere 256 context switches which is a far cry short of the sparc20's 
> 65,536. Performance of a loaded sparc5 in a busy AFS cell running the 
> database server process suite is going to suffer a great deal more than that 
> of a similarly loaded sparc20.

A sparc5 is indeed a slower machine, and won't be nearly as nice a
database server, but the number of MMU contexts is not likely to be
the problem.

In most implementations of AFS, even though an individual
server process might have many LWP threads, the existence
of these threads is invisible to the OS, which sees only
one Unix process, with one Unix address space, needing
one Unix MMU context.  On the umich.edu database servers, there
are usually only 32 processes in existence, and hence even
256 MMU contexts would be more than sufficient.
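To make that concrete, here is a small illustrative sketch, not AFS
code: it uses POSIX threads rather than AFS's own user-level LWP
package (which the kernel doesn't see at all), and the thread count
of 4 is arbitrary.  Every thread reports the same PID, i.e. one
process, one address space, one MMU context, no matter how many
threads are running.

    /* Illustrative sketch only -- not AFS code. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg)
    {
        (void)arg;
        /* Every thread sees the same PID: one process, one address
         * space, one MMU context. */
        printf("worker running in pid %ld\n", (long)getpid());
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[4];
        int i;

        for (i = 0; i < 4; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (i = 0; i < 4; i++)
            pthread_join(tid[i], NULL);
        printf("main also in pid %ld\n", (long)getpid());
        return 0;
    }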

Under Solaris, the picture is not necessarily as clean.  The latest
versions of AFS include support for Solaris pthreads, at least
at the source level.  Depending on how Solaris is implemented,
there *could* be a difference.  There certainly are differences
inside the kernel, although they aren't likely to be very
significant.  DVMA may use up one or more MMU contexts,
and in certain cases, interrupt servicing can require
a thread context inside the kernel.  There aren't *that*
many things that do interrupts and DMA inside the kernel,
however, so it's not likely to come even *close* to using
up 256 contexts.  My *suspicion* is that all the
threads in one Solaris process share the same MMU context,
and hence there wouldn't be any additional need for them;
*but* if individual Solaris threads do in fact use different
MMU contexts, there could be an increased demand.  The increase
is not likely to be large, however.  Most of the AFS database
applications actually have a rather small thread pool for
servicing requests; for instance, the ptserver is only willing
to service about 3 outstanding requests at once, and hence even
in the worst case, 256 MMU contexts is still likely to be plenty.
In certain very odd cases, it's even *possible* that having a
very large number of MMU contexts could be a *disadvantage*: for
instance, a system where many, many processes share a large
common shared memory object which is being demand paged.
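For the curious, here is a rough sketch of the shape of that last
workload; the file name, child count, and mapping size are all made
up.  Each fork()ed child is a separate process, and so needs its own
MMU context, yet they all demand-page the same large shared mapping.

    /* Illustrative sketch only; /tmp/shared.dat, NCHILD and MAPLEN
     * are made-up values. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/wait.h>

    #define NCHILD 64                    /* many processes...        */
    #define MAPLEN (16UL * 1024 * 1024)  /* ...one big shared object */

    int main(void)
    {
        int fd, i;

        fd = open("/tmp/shared.dat", O_RDWR | O_CREAT, 0600);
        if (fd < 0 || ftruncate(fd, MAPLEN) < 0) {
            perror("setup");
            exit(1);
        }
        for (i = 0; i < NCHILD; i++) {
            if (fork() == 0) {
                /* Each child is its own process, hence its own MMU
                 * context, all mapping the same demand-paged object. */
                char *p = mmap(NULL, MAPLEN, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
                if (p != MAP_FAILED)
                    p[(long)i * 4096] = 1;   /* touch a page */
                _exit(0);
            }
        }
        while (wait(NULL) > 0)
            ;
        return 0;
    }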

More likely bottlenecks in the sparc5/sparc20 comparison include the
MMU context switching rate, network throughput, and disk I/O
throughput.  Some of these are strongly dependent on CPU speed, and
so indeed the sparc20 is likely to enjoy a measurable advantage.
Nearly all of that advantage should come from the sparc20 simply
being faster, not from it having more MMU contexts.

Older Suns in fact have fewer MMU contexts.  The sun-1 board had
16 contexts, and the sun-2 and sun-3 had 8 contexts.  On those
machines, it was indeed possible to measure a large drop-off in
performance as soon as you exceeded the number of MMU contexts and
the machine had to start reloading them on each context switch.  On
such a machine, having more MMU contexts would indeed be an
advantage for a database server.  I don't know of many people using
sun-3s as database servers today, though.

                                -Marcus Watts
                                UM ITD PD&D Umich Systems Group
