On 23/02/07, Martin Visser <[EMAIL PROTECTED]> wrote:

I think you'll find the formula dates back to the time when most people
said "I really need my total memory address space to be n megabytes,
but I can only possibly afford n/3 megabytes of RAM, so I have to just
make do with the other 2n/3 being on a relatively slow hard disk."

This certainly applied when I maxed out my first PC, a 486/33 with
8MB RAM back in 1993 [1]. Just being able to run 16MB of RAM+swap [...]


I second that theory, only my experience was with BSD 4.2 on VAX machines.
There it was exactly that way: you wanted lots of memory so that the
multiple users running physics simulations for weeks and months wouldn't
max it out, but you were limited in how much RAM you could afford or the
system could handle, so you allocated swap on your disks.

These days, just having too many pages in the swap space can itself slow
the system down (remember: every swap page requires the system to keep
some metadata about it in memory, and maybe on disk as well, and to search
for it through linked lists and similar structures).

Also, if your system is so heavy on memory usage that it needs that much
swap, it is going to be dog slow anyway, and you'd do better to find another
solution (e.g. add RAM, add another server, optimise the programs running
on it, etc.).
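On Linux you can sanity-check this yourself from /proc/meminfo. Here's a
rough sketch (the field names MemTotal/SwapTotal/SwapFree are the real
ones from that file; the 50% "heavy usage" threshold is just an arbitrary
number I picked for illustration):

```python
# Hedged sketch: parse /proc/meminfo-style text and report how much of
# the configured swap is actually occupied. The 50% threshold below is
# an arbitrary assumption, not any kernel-recommended value.

def parse_meminfo(text):
    """Return a dict of field name -> value in kilobytes."""
    info = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            info[name.strip()] = int(parts[0])
    return info

def swap_pressure(info):
    """Fraction of swap currently in use (0.0 if no swap configured)."""
    total = info.get("SwapTotal", 0)
    if total == 0:
        return 0.0
    return (total - info.get("SwapFree", 0)) / total

if __name__ == "__main__":
    with open("/proc/meminfo") as f:
        info = parse_meminfo(f.read())
    pct = swap_pressure(info) * 100
    print(f"swap in use: {pct:.1f}%")
    if pct > 50:  # arbitrary threshold, see note above
        print("heavy swap usage - consider adding RAM instead")
```

If that number sits high for long stretches, the box is paging constantly
and no amount of extra swap will make it pleasant to use.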

--Amos
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
