<quote who="James Newburrie">
> Is this a bang-for-bucks rule of thumb which holds true or is there
> something under linux that I am still to run into?
This holds true for just about every operating system these days, but even more
so for UNIX(-like) operating systems such as Linux, because they use RAM far
more aggressively than Windows:
Any memory not directly in use by applications and their data is put to work
as file buffers and page cache, which is why you have to account for it when
calculating memory use on a Linux box - have a look through the SLUG
archives for more detailed explanations of this [in short, use free(1)].
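As a rough sketch of what free(1) is doing for you, here's the same sum done
by hand against /proc/meminfo (field names as on a typical Linux kernel; the
exact set varies between kernel versions):

```shell
#!/bin/sh
# "Free" memory on Linux understates what's really available, because
# the kernel borrows spare RAM for buffers and the page cache - memory
# it hands back the moment an application asks for it.
awk '
  /^MemFree:/ { f = $2 }   # truly idle RAM, in kB
  /^Buffers:/ { b = $2 }   # block-device buffers
  /^Cached:/  { c = $2 }   # page cache (file data)
  END { printf "free=%d kB, free+buffers+cache=%d kB\n", f, f + b + c }
' /proc/meminfo
```

On a busy file server the second figure is usually far larger than the
first - that's the caching at work, not a memory leak.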
A great example of this in action: the Samba vs. Win2k performance tests
mentioned on the list yesterday. The more memory you throw into a Linux box,
the faster it will respond and send data. [1] That's because of this
heavy caching, which is el-neat-o on a file server.
Lots of memory is good, and Linux will use whatever is available to perform
its dastardly deeds. :)
- Jeff
[1] Could/does Samba take advantage of the new zero copy infrastructure in
recent 2.4 kernels?
--
No clue is good clue.
--
SLUG - Sydney Linux User Group Mailing List - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug