> The point is that if you can spare RAM, ideally, you should be using all of
it. In a perfect world, the programs you're running would use every byte of
RAM available and then release it to new programs as they launch. We of
course don't live in a perfect world, so some inefficiency (i.e. leaving RAM
unused) is inevitable. Thankfully, Linux makes use of that RAM for disk
caching while it waits to allocate it to a program, so it's not a total loss.
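That caching is easy to see for yourself. A minimal check on any Linux box (figures below will of course differ per machine): `free` reports cached file data under "buff/cache", and "available" estimates what new programs can still claim, since the cache shrinks on demand.

```shell
# Show how Linux puts otherwise-idle RAM to work as disk cache.
# "buff/cache" is memory currently holding cached file data;
# "available" is what applications could still allocate, because the
# kernel gives cache pages back under memory pressure.
free -h

# The same figures straight from the kernel:
grep -E '^(MemTotal|MemFree|MemAvailable|Cached)' /proc/meminfo
```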
I don't know what your programming experience is, but your expectations of efficiency run contrary to a basic programming principle: a program should use only as much memory as it actually needs to complete its task, and that memory usage should be optimized.
Occupying as much RAM as possible just because it is free is pointless. You seem to assume that because RAM is faster than an HDD we should fill every last byte of it. That is as incorrect as saying you must always use all CPU registers, even when a simple assembly instruction (like MOV AX,BX) involves only two of them. Of course it is much faster to use CPU registers than RAM, but efficiency in programming is the art of optimizing resource usage, not of wasting resources.
Further: RAM's bandwidth is not infinite, and every access still costs CPU cycles. A program that has to manipulate huge memory blocks will surely be slower than one with a compact working set, not least because it keeps missing the CPU caches. The Linux kernel can be tuned to work in the direction of keeping more memory free (swapping more aggressively) or of keeping cache in RAM for longer.
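A minimal sketch of that tuning, assuming a stock Linux system: the `vm.swappiness` sysctl biases memory reclaim toward swapping out program memory (higher values, keeping more RAM for cache) or toward dropping cache first (lower values).

```shell
# Current swap-vs-cache bias; the default is 60 on most distributions.
# Higher values make the kernel swap anonymous (program) memory out
# sooner; lower values make it cling to program memory and reclaim
# page cache instead.
cat /proc/sys/vm/swappiness

# To change it you would run (as root), e.g.:
#   sysctl -w vm.swappiness=100

# vm.vfs_cache_pressure similarly tunes how eagerly the kernel reclaims
# the directory/inode caches:
cat /proc/sys/vm/vfs_cache_pressure
```

Settings written with `sysctl -w` last until reboot; persistent values go in `/etc/sysctl.conf` or `/etc/sysctl.d/`.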
Obviously, on a system with less RAM (e.g. 2 GB) you would want more free memory at all times, for the reasons explained earlier. On a system with more memory (e.g. 16 GB) you can keep more data cached in RAM, but that doesn't mean programs should simply occupy most or all of it just because there is plenty and/or because RAM is faster than an HDD.
> RAM doesn't "fragment" in any meaningful way. It's random-access.
RAM does fragment. Being random access has nothing to do with fragmentation: it is more time-consuming to manage scattered memory blocks and thousands of pointers than to read one contiguous block at once. Every read from and write to memory costs CPU cycles, so the less memory your program touches, the faster it will tend to run; your whole 16 GB of RAM is not available within a single CPU cycle. Computers are discrete systems, and although some resources can be parallelized, each discrete unit works sequentially and its frequency matters (that's why there are people who do overclocking etc).
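Physical-memory fragmentation is even visible from userspace on Linux: `/proc/buddyinfo` lists, per memory zone, how many free contiguous blocks of each size the kernel's buddy allocator currently has. Many small blocks but few large ones means fragmented RAM.

```shell
# Each row is a memory zone; column N counts free contiguous blocks of
# 2^N pages, from order 0 up to order 10 (so with 4 KiB pages, the last
# column counts free 4 MiB chunks).
cat /proc/buddyinfo
```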
So, back to what "lightweight" means: usually it implies low resource usage, not exhausting every last bit of the system (which is precisely what makes a program a heavy weight on the system).