the hash algorithm that controls bucket placement for the stat
   structures in the kernel degenerates once you have more than about
   8000 stat entries.  all the entries land in three or four buckets
   (instead of being spread across hundreds or thousands), so those
   buckets get really big.

   each bucket has its own lock, so once searching and modifying the
   stat structures on those long chains starts taking a long time,
   everything backs up behind the hash bucket locks.
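
   to make this concrete, here's a small user-space sketch of the
   failure mode (the struct names and the hash function are made up
   for illustration, not lifted from the AFS source): a table with
   one lock per bucket, fed keys that a weak hash collapses onto a
   single chain.

    /* sketch of the degenerate-hash failure mode described above;
     * names and the hash are illustrative, not AFS kernel code */
    #include <stdio.h>

    #define NBUCKETS 512

    struct bucket {
        int lock;      /* per-bucket lock; the contention point */
        int nentries;  /* chain length; long chains walk slowly */
    };

    /* a weak hash: it keys off high-order bits that barely vary
     * across real entries */
    static unsigned weak_hash(unsigned key)
    {
        return (key >> 16) % NBUCKETS;
    }

    int main(void)
    {
        static struct bucket table[NBUCKETS];
        unsigned i, used = 0;

        /* ~27000 entries whose identifiers differ only in their
         * low bits, e.g. sequential vnode numbers */
        for (i = 0; i < 27306; i++)
            table[weak_hash(0x40000000u + i)].nentries++;

        for (i = 0; i < NBUCKETS; i++)
            if (table[i].nentries)
                used++;

        printf("%u of %d buckets used\n", used, NBUCKETS);
        return 0;
    }

   run it and you get "1 of 512 buckets used": one 27306-entry chain
   sitting behind one lock, which is exactly the kind of backup
   described above.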

   my 2 pfennigs' worth.

--
        Chuck Lever - [EMAIL PROTECTED]
          U-M ITD Login service team

buhrow says:
<       Hello.  I've been experimenting with cache tuning on our time-sharing
<  systems with the aid of a document from Transarc describing how much
<  memory each of the various buffers takes and what the various switches do
<  to modify those buffer spaces.  I believe I've worked out a reasonable
<  theory of where we're taking it in the shorts.  There are a lot of
<  processes that run stat(2) against a large number of files on our systems.
<  In order to reduce the overhead of going off to the file
<  server to retrieve a bunch of stat information, which causes delays for
<  users, I've cranked up the -stat option to about 10MB worth of data, or
<  27306 stat entries on our heavily loaded machines.  This has had a dramatic
<  effect on our performance and things are definitely improved.  However, we
<  began to see periodic load spikes, quick jumps in the load, followed by a
<  reasonably fast wind down to normal operating temperature.  Thinking that
<  this might be due to periodically running out of stat entries for our 87500
<  cache entries, I tried cranking the -stat option all the way up to 87500.
<  Hey, these machines have hundreds of MB of memory, so let's use it.  Then,
<  the machine I was using for testing wouldn't boot multi-user.  Figuring I'd
<  run it out of kernel virtual memory, I brought the -stat number down to
<  54612, twice my original estimate of 10MB of data for stat space.  The
<  machine came up multi-user, but after an hour of hard work, it displayed
<  the same symptoms we saw in November when we experienced a rather serious
<  memory leak in the Sun4M Kernel module.  That is, the load would spiral out
<  of control and all useful work would come to an end.
<       Now, the machine is back where it was before I began
<  fine-tuning, and I'm wondering if anyone knows how much kernel memory one
<  can consume on a SunOS 4.X system?  Are there ways to expand the amount of
<  memory available to AFS?
<  
<  Thanks for your time and any suggestions you might have.
<  -Brian
<  
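
for reference, the -stat sizing in brian's message works out to about
384 bytes per stat entry (10MB / 27306 entries).  here's a quick
back-of-the-envelope check (the per-entry figure is inferred from the
numbers above, not taken from the Transarc document):

    /* back-of-the-envelope -stat sizing; the ~384-byte per-entry
     * cost is inferred from 10MB / 27306, not from Transarc docs */
    #include <stdio.h>

    int main(void)
    {
        const double entry_bytes = 10.0 * 1024 * 1024 / 27306;
        const int settings[] = { 27306, 54612, 87500 };
        int i;

        for (i = 0; i < 3; i++)
            printf("-stat %5d  ->  ~%4.1f MB of kernel memory\n",
                   settings[i],
                   settings[i] * entry_bytes / (1024 * 1024));
        return 0;
    }

by that estimate the 87500 setting asks for roughly 32MB of wired
kernel memory, which makes the kernel-virtual-memory-exhaustion
theory above look plausible on a SunOS 4.x machine.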
