> your expectations of efficiency are contrary to the basic programming principle: that a program should use only as much memory as it actually needs for completing the task and that memory usage should be optimized.

That is only a "basic programming principle" if RAM is scarce, and RAM is not scarce in modern computers. Since we're talking about Web browsers, let's use those as the example: I looked up browser benchmarks, and Google Chrome on GNU/Linux seems to use the most RAM of all the major browsers, at around 1.5 GB. That's no problem at all if you have 4, 8, or 16 GB of RAM, and modern computers do have that much or more. It's not 1999 anymore.


> efficiency in programming is the art of optimizing resource usage, not of wasting resources.

Using RAM that isn't being used for anything else is not "wasting resources". What is wasting resources is spending CPU time (which uses an actual resource, electricity) redundantly to save RAM that you don't need to save.

> RAM's speed is not infinite and RAM access is sequential.

RAM is random-access, not "sequential". It's kind of in its name. As for speed, yeah, of course it takes time, but not that much. Recalculating redundantly almost always takes longer.

Here, I'll prove it:

https://pastebin.com/3tZ59K6m
https://pastebin.com/qZsu0651

The first one stores intermediate results in variables. The second one recalculates everything from the original three variables every time, i.e. it avoids the "unnecessary" RAM use. I get about 13.5 seconds with the one that uses RAM freely, and about 19 seconds (much slower) with the one that recalculates everything redundantly.
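
The gist of it, if you don't want to click through, is something like this (a rough sketch of the same idea in C, not the exact pastebin code; the numbers and variable names are made up):

/* Rough sketch (NOT the exact pastebin code): one loop stores the derived
 * values in variables, the other rederives them from the three original
 * inputs on every iteration.  The volatiles are there so the compiler
 * can't quietly do the caching for you. */
#include <stdio.h>
#include <time.h>

#define ITERATIONS 200000000L

int main(void)
{
    /* the "original three variables" */
    volatile double a = 1.1, b = 2.2, c = 3.3;
    volatile double sink = 0.0;
    clock_t t0, t1;

    /* version 1: spend a few bytes of RAM on the derived values */
    t0 = clock();
    double ab = a * b, bc = b * c, abc = a * b * c;
    for (long i = 0; i < ITERATIONS; i++)
        sink = sink + ab + bc + abc;
    t1 = clock();
    printf("uses RAM freely: %.1f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    /* version 2: "save" that RAM and recalculate on every iteration */
    t0 = clock();
    for (long i = 0; i < ITERATIONS; i++)
        sink = sink + (a * b) + (b * c) + (a * b * c);
    t1 = clock();
    printf("recalculates:    %.1f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    return 0;
}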

> The Linux kernel can be tuned to work in direction of keeping more memory free (swapping more aggressively) or to keep cache in RAM for longer.

Controls for swapping don't "keep more memory free". Swapping only happens once your RAM is past full, at which point the disk has to be used instead, and the disk is always much slower than RAM. That's exactly why, if you're swapping, you need to cut down your RAM use.
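
If anyone wants to see what that knob actually is and what the kernel does with "free" RAM on their own machine, it's all sitting in procfs; something like this (Linux-specific, obviously, and only printing the standard files):

/* Prints the swappiness knob and the kernel's own memory accounting.
 * Note how much of the "used" RAM in /proc/meminfo is really just cache. */
#include <stdio.h>

static void dump_file(const char *path)
{
    char line[256];
    FILE *f = fopen(path, "r");

    if (f == NULL)
        return;
    printf("--- %s\n", path);
    while (fgets(line, sizeof line, f) != NULL)
        fputs(line, stdout);
    fclose(f);
}

int main(void)
{
    dump_file("/proc/sys/vm/swappiness");  /* the tuning knob in question */
    dump_file("/proc/meminfo");            /* MemTotal, MemFree, Cached, SwapFree, ... */
    return 0;
}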

> but that doesn't mean that programs should simply occupy lots or all because there is plenty of it and/or because RAM is faster than HDD.

They should use all the RAM they have a use for. I never said that programs should throw meaningless data into RAM just for laughs.

> Being random access has nothing to do with fragmentation.

But it does have to do with the consequences of fragmentation. Fragmented RAM is not going to make a meaningful difference to access speed in real terms. A fragmented disk is going to cause problems because the drive head can only be in one physical place at a time, so it has to seek back and forth between the scattered pieces.

> It is more time consuming to manage scattered memory blocks and thousand of pointers than reading a whole block at once.

I think "thousands" is a bit of a stretch, to say the least. Most of the time you're allocating RAM, it's such a tiny, tiny fraction of how much RAM is available.

Let's say you malloc an array of 10,000 64-bit integers. That's 640,000 bits = 80,000 bytes = 80 KB. That's a tiny, tiny fraction of the multiple gigabytes of RAM typically available, and most of the time your arrays aren't even that big.
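
Here's that allocation spelled out, if anyone wants to check the arithmetic:

/* The allocation from the paragraph above: 10,000 64-bit integers. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t count = 10000;
    int64_t *arr = malloc(count * sizeof *arr);

    if (arr == NULL)
        return 1;
    printf("%zu elements * %zu bytes = %zu bytes (80 KB)\n",
           count, sizeof *arr, count * sizeof *arr);
    free(arr);
    return 0;
}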

But how about you prove that running a program on a system using most of its RAM (but without swapping) is slower than on a system using only half its RAM? It would be a lot more convincing than a bunch of ifs, buts, and maybes.
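
The test itself is trivial to set up, by the way; something along these lines (a rough sketch, with placeholder sizes you'd have to adjust to the actual machine):

/* Rough sketch of the experiment: time a fixed workload with and without
 * a large "ballast" allocation held resident (most of RAM in use, but no
 * swapping).  The sizes below are placeholders; adjust them to the machine. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BALLAST_BYTES (4UL * 1024 * 1024 * 1024)  /* "most of RAM", minus headroom */
#define WORK_BYTES    (64UL * 1024 * 1024)        /* the workload's working set */
#define PASSES        20

static double run_workload(void)
{
    unsigned char *buf = calloc(1, WORK_BYTES);
    clock_t t0, t1;

    if (buf == NULL)
        return -1.0;
    t0 = clock();
    for (int pass = 0; pass < PASSES; pass++)
        for (size_t i = 0; i < WORK_BYTES; i++)
            buf[i] = (unsigned char)(buf[i] + 1);
    t1 = clock();
    free(buf);
    return (double)(t1 - t0) / CLOCKS_PER_SEC;
}

int main(int argc, char **argv)
{
    unsigned char *ballast = NULL;

    (void)argv;
    if (argc > 1) {                         /* any argument: hold the ballast */
        ballast = malloc(BALLAST_BYTES);
        if (ballast == NULL) {
            fputs("ballast allocation failed\n", stderr);
            return 1;
        }
        memset(ballast, 1, BALLAST_BYTES);  /* touch it so it's actually resident */
    }

    printf("workload took %.2f s (%s)\n", run_workload(),
           ballast != NULL ? "most of RAM in use" : "RAM mostly free");
    free(ballast);
    return 0;
}

Run it once with no argument and once with any argument, then compare the two numbers; as long as nothing actually swaps, I'd expect them to match to within noise.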

> So back to what lightweight means: Usually that implies low resource usage, not exhausting every single bit of the system (which creates a heavy weight for the system).

That's not a clarification. It's too vague to have any meaning.
