Resources are always scarce (limited) and should be used responsibly.

You need free RAM for handling new processes and peak loads.

RAM is not sequential in the sense of a rewinding tape, but you still can't pass the whole of RAM through the CPU in a single clock cycle. There is an intrinsic sequential nature to all discrete systems.

The approach to speed must be evaluated against the specific algorithm and the specific hardware. Different programs have different requirements, and some CPUs allow more parallel execution, which favors different algorithms.

Python is an interpreted language, and you don't know how the interpreter handles the data internally. A valid test would be near the hardware level, perhaps in assembler; then you could measure exactly how many cycles a program needs.

> Controls for swapping don't "keep more memory free".

What does min_free_kbytes do?
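On Linux, min_free_kbytes is a vm sysctl that sets the floor of free memory the kernel keeps in reserve (e.g. for atomic allocations); raising it makes the kernel start reclaiming earlier, which is exactly a control that keeps more memory free. A quick way to inspect it:

```shell
# min_free_kbytes is exposed under /proc/sys/vm; the value is in KiB.
# Raising it forces the kernel to reclaim (and possibly swap) sooner.
cat /proc/sys/vm/min_free_kbytes
```

Writing a new value there (as root) takes effect immediately, which is easy to verify by watching MemFree in /proc/meminfo.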

> Swapping only occurs if your RAM is past full, therefore requiring the use of the disk.

Swapping occurs when the kernel decides to swap, and that can happen even while there is free RAM.
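For instance, vm.swappiness biases the kernel toward swapping out anonymous pages even while free RAM and reclaimable page cache remain; at the common default of 60 the kernel may swap well before RAM is "past full". You can see both knobs and the current state without any special tools:

```shell
# swappiness (0-200 on recent kernels) biases reclaim toward swap;
# it does not wait for RAM to be exhausted.
cat /proc/sys/vm/swappiness

# MemFree and SwapCached can both be non-zero at the same time,
# i.e. swap activity alongside free RAM.
grep -E '^(MemFree|SwapTotal|SwapCached)' /proc/meminfo
```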

> Fragmented RAM is not going to make a meaningful difference to access speed in real terms.

I was talking about memory fragmentation at the algorithmic level, which costs extra CPU cycles.

> .. the multiple gigabytes of RAM typically available ..

Until recently I had only 512MB on my old laptop (it now has the maximum possible 2GB). Running a browser like Firefox in 512MB resulted in swapping. If you assume that software should be bloated and incompatible with older hardware, you will never create a lightweight program.

> But how about you prove that running a program on a system using most of its RAM (but without swapping) is slower than on a system using only half its RAM?

[~]: time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=1k count=1000 oflag=nocache"
1000+0 records in
1000+0 records out
1024000 bytes (1.0 MB, 1000 KiB) copied, 0.00300541 s, 341 MB/s
real    0m0.007s
user    0m0.000s
sys     0m0.007s
[~]: time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=1k count=1000000 oflag=nocache"
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB, 977 MiB) copied, 1.31329 s, 780 MB/s
real    0m1.317s
user    0m0.244s
sys     0m1.068s
