And also because DRAM is accessed page-wise: switching pages is much more
expensive than accessing data on the same (already selected) page.
Yes, there is that too. And accessing recent pages is fast thanks to yet
another cache, the translation lookaside buffer:
https://en.wikipedia.org/wiki/Translation_lookaside_buffer
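The effect can be sketched from user space, though only roughly: Python adds a lot of interpreter overhead, and the actual timings depend entirely on the hardware, so treat this as an illustration of the access patterns, not a benchmark. The array size and stride below are arbitrary choices for the sketch.

```python
import array
import time

N = 1 << 20      # ~1M 8-byte integers (~8 MB), larger than typical caches
STRIDE = 4096    # jump far between accesses to defeat page/TLB locality
data = array.array('q', range(N))

def sum_sequential(a):
    # Walks memory in order: long runs stay on the same DRAM page
    # and reuse the same TLB entry.
    total = 0
    for x in a:
        total += x
    return total

def sum_strided(a, stride):
    # Touches the same elements, but jumps across pages on every
    # access, forcing frequent page switches and TLB misses.
    total = 0
    n = len(a)
    for start in range(stride):
        for i in range(start, n, stride):
            total += a[i]
    return total

t0 = time.perf_counter()
s1 = sum_sequential(data)
t1 = time.perf_counter()
s2 = sum_strided(data, STRIDE)
t2 = time.perf_counter()

assert s1 == s2  # same work, same result; only the access order differs
print(f"sequential: {t1 - t0:.3f}s  strided: {t2 - t1:.3f}s")
```

On most machines the strided version is measurably slower even through the interpreter, for exactly the page-locality and TLB reasons mentioned above.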
This, along with onpon4's similar views, overlooks a basic fact: the
kernel is already using free memory for data caching. A user-space
program attempting to do its own generic data caching is a grave error (a
bug, in essence) because it tries, rather selfishly, to take the kernel's
job upon itself.
The kernel cannot know that a costly function will be called frequently with
the same arguments, and that it will always return the same value given the
same arguments (i.e., that it depends on nothing but its arguments). An
application-level cache does not reimplement the system-level caches.
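That kind of application-level cache is just memoization, and it is cheap to express. A minimal sketch using Python's standard library; the function `costly` is a hypothetical stand-in for any expensive pure computation:

```python
from functools import lru_cache

calls = 0  # counts how many times the body actually runs

@lru_cache(maxsize=None)
def costly(n: int) -> int:
    """Stand-in for an expensive pure function: the result depends
    only on the argument, so it is safe to cache."""
    global calls
    calls += 1
    return sum(i * i for i in range(n))

costly(10_000)
costly(10_000)  # second call is served from the cache; the body does not run
print(calls)    # prints 1
```

No system-level cache could do this, because only the application knows the function is pure and worth caching.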
And how would you really check that from within a user-space program, and
take the necessary steps?
I am not suggesting that the program should do that. I am only saying that
there is no benefit in choosing "lightweight applications" and always having
a lot of free RAM; that is a waste of RAM. If you always have a lot of free
RAM, you would do better to choose applications that use more memory in
order to be faster.
The obvious thing to do is to allocate no more RAM than you really need, and
to leave the rest (the decision of what to do with free RAM) to the kernel.
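Streaming file processing is the textbook case of this division of labor. A sketch, assuming the file path and chunk size are arbitrary: the process allocates only one small buffer, while the kernel's page cache keeps the hot file blocks in otherwise-free RAM, so repeated passes are typically served from memory without the program doing any caching of its own.

```python
import os
import tempfile

# Create a sample file (hypothetical data, just for the sketch).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    for i in range(100_000):
        f.write(f"line {i}\n")

def count_lines(path):
    # Read in modest fixed-size chunks: the process never holds more
    # than 64 KiB at a time, regardless of the file's size.
    count = 0
    with open(path, "rb") as f:
        while chunk := f.read(64 * 1024):
            count += chunk.count(b"\n")
    return count

first = count_lines(path)   # may hit the disk
second = count_lines(path)  # usually served from the kernel's page cache
assert first == second == 100_000
os.remove(path)
```

The naive alternative, slurping the whole file into a buffer, duplicates data the kernel is already caching on the program's behalf.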
An implementation strategy that minimizes the space requirements ("no more
RAM than you really need") will usually be slower than alternatives that
require more space. As with the one-million-line examples I gave to heyjoe.
Or with the examples on
https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff
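One of the classic examples from that page is precomputing a lookup table: trading a small, fixed amount of extra RAM for fewer operations per call. A sketch using bit counting (the table size and byte-wise chunking are the usual choices, not anything mandated):

```python
def popcount_slow(x: int) -> int:
    # Minimal space: no table, but one loop iteration per bit.
    n = 0
    while x:
        n += x & 1
        x >>= 1
    return n

# Spend 256 bytes of RAM on a table of answers for every byte value.
TABLE = bytes(popcount_slow(i) for i in range(256))

def popcount_fast(x: int) -> int:
    # One table lookup per byte instead of one iteration per bit:
    # roughly 8x fewer loop iterations, at the cost of the table.
    n = 0
    while x:
        n += TABLE[x & 0xFF]
        x >>= 8
    return n

# Both strategies compute the same function.
assert all(popcount_slow(v) == popcount_fast(v)
           for v in (0, 1, 255, 4096, 2**31 - 1))
```

The minimal-RAM version is the slower one, which is exactly the point: "no more RAM than you really need" and "as fast as possible" pull in opposite directions.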