On Tue, 27 Apr 1999, Benno Senoner wrote:

> Hi,
> 
> My system is Red Hat 5.2 running on
> Linux 2.2.6 + raid0145-19990421.
> 
> I tested whether the system is stable while swapping heavily.
> I tested a regular swap area and a soft-RAID1 (2 disks) swap area.
> 
> 
> So I wrote a little program which basically does the following:
> 
> allocate as many blocks of about 4 MB as possible,
> 
> then begin to write data linearly to each block,
> displaying the write performance in MB/sec.
> The program prints a dot on the screen after writing
> each 1024 bytes.
> 
> NOW TO THE VERY STRANGE RESULTS :
> 
> 1)
> my first BIG QUESTION is whether there is a design flaw in malloc() or not:
> 
> when I do (number of successfully allocated blocks)* 4MB
> 
> then I get 2GB of *SUCCESSFULLY* malloc()ed memory,
> but my system has only about 100MB of virtual mem
> (64MB RAM + 40MB swap).

I took a look at your program. It looks as if you are not using the memory
that you malloc(). If I remember correctly, Linux will not allocate the
memory that you have requested until you use it. This is a really handy
feature when you have sparse arrays. What you need to do is have your
program touch every page in the virtual memory allocated. If the page size
is 4096 bytes, you will have to write to every 4096th byte to ensure that
the page is brought into existence.
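
Something like this (untested, just a minimal sketch assuming a 4096-byte
page size, which sysconf() can confirm) is what "touching every page" of
one of your 4 MB blocks would look like:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE (4 * 1024 * 1024)   /* one 4 MB block, as in swapstress */

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);  /* usually 4096 on i386 */
    size_t i;
    char *block;

    block = malloc(BLOCK_SIZE);
    if (block == NULL)
        return 1;

    /* malloc() only reserved address space; writing one byte per
     * page makes the kernel actually back it with RAM (or swap). */
    for (i = 0; i < BLOCK_SIZE; i += page)
        block[i] = 1;

    printf("block of %d bytes is now fully resident\n", BLOCK_SIZE);
    free(block);
    return 0;
}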

> 
> Does anyone know why the kernel does not limit the maximum malloc()ed
> memory to the amount of RAM+SWAP?

This is a design choice. If you do this, you will limit programs that use
virtual memory in a sparsely populated way.
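
For example (just a sketch of the sparse case), a program can reserve a
large table and touch only a few entries; with the current behaviour only
the touched pages ever consume RAM or swap:

#include <stdlib.h>

#define TABLE_SIZE (256 * 1024 * 1024)   /* 256 MB of address space */

int main(void)
{
    char *table = malloc(TABLE_SIZE);

    if (table == NULL)
        return 1;

    /* Only these three scattered pages are ever backed by real
     * memory; the other ~65000 pages stay untouched and cost nothing. */
    table[0] = 1;
    table[TABLE_SIZE / 2] = 2;
    table[TABLE_SIZE - 1] = 3;

    free(table);
    return 0;
}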

> Will this be changed in the future ?

Other systems pre-commit memory, and this can at times cause your system to
stop allowing new processes to run even though you are not using anywhere
near the total of RAM+swap available. The choice of one allocation method
or the other may be a possible place for a kernel tuning feature. But for
my part, I like the current choice.

> 
> 2)
> 
> At the beginning the program runs fine, and when the RAM is used up,
> swapping activity begins and the mem-write performance drops
> to about the write performance of the disk or RAID1 array.
> 
> Now the problem:
> When all RAM + SWAP are used up, the system begins to freeze,
> and every 10-20 secs, messages appear on the console
> like:
> 
> out of memory of syslog
> out of memory of klogd
> .
> .
> 
> and after a while my swapstress program exits with a Bus Error.
> 
> Sometimes even "update" gets killed, or "init",
> which writes: PANIC SEGMENT VIOLATION!
> 
> after my swapstress program exits with "Bus Error", the system
> continues to work, but since init got killed, you cannot reboot or
> shut down the machine anymore.
> 
> Note that I ran swapstress as normal user.
> This means it's easy to crash Linux, or render it unstable, with heavy
> malloc()ing / swapping.

Resource exhaustion can cause a number of problems, one of which is system
crashes. One solution is to limit the per-user memory maximum so that a
single user cannot burn up all the system memory, but that still will not
stop the problem.
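
As a sketch of the per-process half of that (not a real per-user limit, and
assuming RLIMIT_AS is available on your kernel), setrlimit() can cap how
much address space a single process may take, so a runaway malloc() loop
fails instead of eating all of RAM+swap. The shell's ulimit builtin sets
the same limits for everything started from that shell.

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    rl.rlim_cur = 32 * 1024 * 1024;   /* 32 MB soft limit */
    rl.rlim_max = 32 * 1024 * 1024;   /* 32 MB hard limit */
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* This request exceeds the limit, so malloc() fails cleanly
     * instead of dragging the whole machine into swap. */
    if (malloc(64 * 1024 * 1024) == NULL)
        printf("malloc refused: address-space limit reached\n");

    return 0;
}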

One possible answer is for the kernel to always spare some swap space for
tasks running as root and to suspend any user tasks that request memory
when the swap limit is reached. The creation of new user processes should
also be suspended when this limit is reached. At that point an
administrator would be able to log in to the system and kill the offending
processes or take some other remedial action.



Alvin Starr                   ||   voice: (416)585-9971
Interlink Connectivity        ||   fax:   (416)585-9974
[EMAIL PROTECTED]              ||
