First, a lot of this stuff is slowly sinking in ... after repeatedly
reading it and waiting for the headache to dissipate :)

But, one thing that I'm still not clear on ...

If I have 4Gig of RAM in a server, does it make any sense to have swap
space on that server also?  Again, from what I'm reading, I have a total
of 4Gig *aggregate* to work with, between RAM and swap, but it's right
here that I'm confused ... basically, the closer to 4Gig of RAM you get,
the closer to 0 of swap you can have?

On Mon, 22 Apr 2002, Terry Lambert wrote:

> "Marc G. Fournier" wrote:
> > > No, there's no stats collected on this stuff, because it's a
> > > pretty obvious and straight-forward thing: you have to have a
> > > KVA space large enough that, once you subtract out 4K for each
> > > 4M of physical memory and swap (max 4G total for both), you
> > > end up with memory left over for the kernel to use, and your
> > > limits are such that you don't run out of PTEs before you
> > > run out of mbufs (or whatever you plan on allocating).
> >
> > ... and translated to English, this means? :)
> >
> > Okay, I'm going to assume that I'm allowed 4Gig of RAM + 4Gig of Swap, for
> > a total of 8Gig ... so, if I subtract out 4K for each 4M, that is 8M for
> > ... what?
> >
> > So, I've theoretically got 8184M of VM available for the kernel to use
> > right now?  What are PTEs, and how do I know how many I have right now?  As
> > for mbufs, I've currently got:
> No.
>
> Each 4M of physical memory takes 4K of statically allocated KVA.
> Each 4M of backing store takes 4K of statically allocated KVA.
>
> The definition of "backing store" includes:
> o     All dirty data pages in swap
> o     All dirty code pages in swap
> o     All clean data pages in files mapped into process or kernel
>       address space
> o     All clean code pages for executables mapped into process or
>       kernel address space
> o     Reserved mappings for copy-on-write pages that haven't yet
>       been written
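
(If I'm following this, it's a fixed 4K-per-4M ratio, so the overhead
in KB equals the managed size in MB.  A quick sketch in C, using the
4Gig RAM + 4Gig swap numbers from this thread; the two sizes are the
only real inputs, everything else is just the arithmetic:)

    #include <stdio.h>

    int main(void)
    {
        /* Example sizes from this thread: 4Gig of physical memory
         * plus 4Gig of backing store.  The rule above: each 4M of
         * either costs 4K of statically allocated KVA. */
        unsigned long ram_mb  = 4096;
        unsigned long swap_mb = 4096;

        /* 4K per 4M: number of 4M chunks, times 4K each. */
        unsigned long kva_kb = (ram_mb + swap_mb) / 4 * 4;

        printf("%lu KB of KVA statically allocated for page tables\n",
               kva_kb);
        /* Prints 8192 KB, i.e. the 8M figure computed above. */
        return 0;
    }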
>
> A PTE is a "page table entry".  It's the 32-bit value in the page
> table for each address space (one for the kernel, one per process).
>
> See the books I posted the titles of for more details, or read the
> Intel processor PDFs from their developer web site.
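
(For reference, the bit layout of one of those 32-bit entries on i386,
sketched as C masks.  This is the generic format from the Intel manuals,
not FreeBSD's actual pmap headers:)

    #include <stdio.h>

    /* i386 (non-PAE) page table entry fields, per the Intel manuals. */
    #define PTE_P     0x001         /* present in physical memory */
    #define PTE_RW    0x002         /* writable */
    #define PTE_US    0x004         /* accessible from user mode */
    #define PTE_A     0x020         /* accessed since last cleared */
    #define PTE_D     0x040         /* dirty: page has been written */
    #define PTE_FRAME 0xfffff000u   /* bits 12-31: physical page frame */

    int main(void)
    {
        unsigned int pte = 0x00123067;  /* a made-up example entry */

        printf("frame 0x%08x%s%s%s\n",
               pte & PTE_FRAME,
               (pte & PTE_P)  ? ", present"  : "",
               (pte & PTE_RW) ? ", writable" : "",
               (pte & PTE_D)  ? ", dirty"    : "");
        return 0;
    }
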
> > jupiter> netstat -m
> > 173/1664/61440 mbufs in use (current/peak/max):
> >         77 mbufs allocated to data
> >         96 mbufs allocated to packet headers
> > 71/932/15360 mbuf clusters in use (current/peak/max)
> > 2280 Kbytes allocated to network (4% of mb_map in use)
> > 0 requests for memory denied
> > 0 requests for memory delayed
> > 0 calls to protocol drain routines
> >
> >         So how do I find out where my PTEs are sitting at?
>
> The mbufs are only important because most people allocate a
> large number of mbufs up front for networking applications, or
> for large numbers of users with network applications that will
> need resources in order to be able to actually run.  There's
> also protocol control blocks and other allocations that occur
> up front, based on the maximum number of system open files
> and sockets you intend to permit.
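
(To put numbers on that: here's the worst case implied by the limits in
the "netstat -m" output above, assuming the usual 4.x object sizes of
256-byte mbufs and 2K clusters; those two sizes are my assumption, the
max counts come straight from netstat:)

    #include <stdio.h>

    int main(void)
    {
        /* Max counts from the "netstat -m" output quoted above. */
        unsigned long max_mbufs    = 61440;
        unsigned long max_clusters = 15360;

        /* Assumed sizes: MSIZE=256 and MCLBYTES=2048, the usual
         * 4.x-era values; check your own kernel config to be sure. */
        unsigned long msize    = 256;
        unsigned long mclbytes = 2048;

        unsigned long kb = (max_mbufs * msize +
                            max_clusters * mclbytes) / 1024;

        printf("up to %lu KB (~%lu MB) of KVA wired for mbufs\n",
               kb, kb / 1024);   /* 46080 KB, about 45 MB */
        return 0;
    }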
>
> The user space stuff is generally a lot easier to calculate:
> do a "ps -gaxl", round each entry in the "VSZ" column up to
> 4M, divide by 4K, and that tells you how many 4K units you
> have allocated for user space.  For kernel space, the answer
> is that there are some allocated at boot time (120M worth),
> and then the kernel map is grown, as necessary, until it hits
> the KVA space limit.  If you plan on using up every byte, then
> divide your total KVA space by 4K to get the number of 4K pages
> allocated there.
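
(The same recipe in C, for concreteness.  The VSZ values here are
made-up stand-ins for a real "ps -gaxl" column, which ps reports in
KB:)

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical VSZ column from "ps -gaxl", in KB. */
        unsigned long vsz_kb[] = { 1234, 6020, 45678 };
        int nprocs = sizeof(vsz_kb) / sizeof(vsz_kb[0]);
        unsigned long pages = 0;
        int i;

        for (i = 0; i < nprocs; i++) {
            /* Round up to the next 4M (4096 KB) boundary... */
            unsigned long rounded = ((vsz_kb[i] + 4095) / 4096) * 4096;

            /* ...then count the 4K pages: KB divided by 4. */
            pages += rounded / 4;
        }
        printf("%lu 4K pages allocated for user space\n", pages);
        return 0;
    }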
>
> For the kernel stuff... you basically need to know where the
> kernel puts how much memory, based on the tuning parameters
> you use on it.
>
> -- Terry
