I don't know what your programming experience is, but your expectations of efficiency are contrary to a basic programming principle: a program should use only as much memory as it actually needs to complete its task, and memory usage should be optimized.

You can often get a faster program by using more memory. See https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff for examples. As long as the system does not start swapping, trading memory for speed is the way to go.
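A minimal sketch of that tradeoff (the 256-entry table and the popcount example are my own illustration, not from the linked article): spend a little RAM on a precomputed lookup table so that repeated queries become cheap lookups instead of recomputation.

```python
# Space-time tradeoff sketch: precompute answers once, spend RAM on a
# table, and turn every later query into a handful of list lookups.

# 256-entry table: number of set bits for every possible byte value.
POPCOUNT_TABLE = [bin(i).count("1") for i in range(256)]

def popcount32(x):
    """Count set bits in a 32-bit integer using the byte table."""
    return (POPCOUNT_TABLE[x & 0xFF]
            + POPCOUNT_TABLE[(x >> 8) & 0xFF]
            + POPCOUNT_TABLE[(x >> 16) & 0xFF]
            + POPCOUNT_TABLE[(x >> 24) & 0xFF])
```

The table costs a few kilobytes but replaces per-call bit twiddling with four lookups; the same pattern scales up to gigabyte-sized in-memory caches when RAM allows.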


Occupying as much RAM as possible just because there is free RAM is meaningless.

Storing in memory data that will never be needed again is, of course, stupid. We are not talking about that.

RAM access is sequential.

You know that RAM means "random-access memory", don't you? The access is not sequential. Manipulating data that is stored sequentially in RAM is faster because of the CPU cache and sequential prefetching: https://en.wikipedia.org/wiki/Cache_prefetching

The idea of a CPU cache is, well, that of caching: keeping a copy of recently used data and code closer to the CPU because it may have to be accessed again soon. The same idea applies at another level: a program can make itself faster by keeping data in main memory instead of recomputing it.
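A one-line way to apply that program-level caching in Python (the Fibonacci function is my own example, chosen only for brevity) is `functools.lru_cache`, which keeps recent results in main memory instead of recomputing them:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # keep every result in RAM, never recompute
def fib(n):
    """Naive recursion made fast by caching: each fib(k) is computed once."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Without the cache this recursion takes exponential time; with it, linear. The program uses more memory and is faster for it.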

Are you also arguing that having free space in the CPU cache has benefits?

On a system with more memory (e.g. 16GB) you can keep more data cached in RAM, but that doesn't mean that programs should simply occupy lots of it, or all of it, because there is plenty and/or because RAM is faster than an HDD.

It kind of means that. You want fast programs, don't you? If that can be achieved by taking more memory, the program will indeed be faster, unless the memory requirements become so huge that the system swaps. So, I ask you again: "How often does your system run out of RAM?". If the answer is "rarely", then choosing programs with higher memory requirements may be a good idea: they can be faster than lightweight alternatives.

It is more time-consuming to manage scattered memory blocks and thousands of pointers than to read a whole block at once.

It is not because of fragmentation (which only becomes a problem when the system swaps: free RAM cannot be allocated because too little is available in contiguous blocks). It is because of sequential prefetching into the CPU cache. That is not an argument against caching. Quite the opposite.
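The access-pattern point can be sketched as follows (a rough illustration; in CPython the interpreter overhead mutes the effect, so no timings are claimed here). Walking the same data sequentially versus in a shuffled, pointer-chasing-like order gives the same result; only the access pattern differs, and the sequential pattern is the one hardware prefetchers can help.

```python
import random

N = 100_000
data = list(range(N))

sequential_order = list(range(N))   # 0, 1, 2, ... : prefetch-friendly
shuffled_order = sequential_order[:]
random.shuffle(shuffled_order)      # scattered, cache-hostile pattern

def traverse(order):
    """Sum `data`, visiting indices in the given order."""
    total = 0
    for i in order:
        total += data[i]
    return total

# Both orders produce the same sum; the sequential walk is the one
# that sequential prefetching rewards on real hardware.
```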
