Hi,
My system is Red Hat 5.2 running Linux 2.2.6 + raid0145-19990421.
I tested whether the system stays stable while swapping heavily,
using both a regular swap area and a soft-RAID1 (2 disks) swap area.
So I wrote a little program which basically does the following:
allocate as many blocks of about 4MB as possible,
then write data linearly to each block,
displaying the write performance in MB/sec.
The program prints a dot on the screen after every 1024 bytes
written.
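For reference, the core of it looks essentially like this (a
simplified sketch, not the exact program; the real one prints a dot
after every 1024 bytes rather than one per block):

/*
 * Simplified sketch of the swapstress idea (assumptions: 4MB blocks,
 * one progress dot per block instead of per KB).
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BLOCKSIZE (4 * 1024 * 1024)   /* ~4MB per block */
#define MAXBLOCKS 4096

int main(void)
{
    static char *blocks[MAXBLOCKS];
    int nblocks = 0, i;
    time_t start, end;

    /* Phase 1: grab as many 4MB blocks as malloc() will hand out. */
    while (nblocks < MAXBLOCKS &&
           (blocks[nblocks] = malloc(BLOCKSIZE)) != NULL)
        nblocks++;
    printf("allocated %d blocks = %d MB\n", nblocks, nblocks * 4);

    /* Phase 2: write each block linearly; only now does the kernel
     * have to back the pages with real RAM/swap. */
    start = time(NULL);
    for (i = 0; i < nblocks; i++) {
        memset(blocks[i], 0x55, BLOCKSIZE);
        putchar('.');
        fflush(stdout);
    }
    end = time(NULL);
    if (end > start)
        printf("\n~%d MB/sec\n", nblocks * 4 / (int)(end - start));
    return 0;
}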
NOW TO THE VERY STRANGE RESULTS:
1)
My first BIG QUESTION is whether there is a design flaw in malloc()
or not: when I compute (number of successfully allocated blocks) * 4MB,
I get 2GB of *SUCCESSFULLY* malloc()ed memory, but my system has only
about 100MB of virtual memory (64MB RAM + 40MB swap).
How does the kernel hope to squeeze the allocated 2GB into 100MB of
virtual memory? :-)
Does anyone know why the kernel does not limit the maximum malloc()ed
memory to the amount of RAM + swap?
Will this be changed in the future?
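For anyone who wants to reproduce just the malloc() part of point 1,
a bare loop like this is enough (sketch only; it never touches the
allocated memory, so no pages actually get written):

/*
 * Count how much memory malloc() hands out before it first returns
 * NULL, without ever writing to it.  Compare the total to RAM + swap.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long mb = 0;

    while (malloc(4 * 1024 * 1024) != NULL)
        mb += 4;
    printf("malloc() succeeded for %lu MB before returning NULL\n", mb);
    return 0;
}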
2)
At the beginning the program runs fine; when the RAM is used up,
swapping activity begins, and the memory-write performance drops to
about the write performance of the disk or the RAID1 array.
Now the problem:
When all RAM + swap are used up, the system begins to freeze, and
every 10-20 seconds messages appear on the console like:
  out of memory of syslog
  out of memory of klogd
  ...
and after a while my swapstress program exits with a Bus Error.
Sometimes even "update" gets killed, or "init", which writes:
PANIC SEGMENT VIOLATION !
After my swapstress program exits with "Bus Error" the system
continues to work, but since init got killed, you cannot reboot or
shut down the machine anymore.
Note that I ran swapstress as a normal user.
This means it's easy to crash Linux, or render it unstable, with
heavy malloc()ing / swapping.
You can find my swapstress program at
http://www.gardena.net/benno/linux/swapstress.tgz
Please let me know your results (crash / lockup ... ?).
Comments please, especially from the kernel gurus!
regards,
Benno.