Dear Steven,

"Steven J. Yellin" wrote:
>     It is my understanding that the memory 'in use' doesn't just include
> the memory being used by all running processes combined.  It also includes
> memory used to store information just in case it is needed.  This extra
> memory can be given up to a process that needs it and has a right to use
> more memory than it already uses.
I read about some 'caching mechanism' somewhere. Is this what you are
talking about? It would be interesting to get a pointer to some
documentation on this on the Web.
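
As far as I understand it, the kernel uses otherwise free RAM for the
page cache and buffer cache and gives it back to processes on demand,
so 'in use' figures look high even when nothing is wrong. A quick way
to see this (a sketch, assuming a Linux box with /proc mounted):

```shell
# Show total, free, and cache/buffer memory as the kernel reports it.
grep -E '^(MemTotal|MemFree|Buffers|Cached)' /proc/meminfo

# 'free' summarizes the same numbers; the buffers/cache amounts are
# reclaimable, not permanently consumed.
free
```

If the "Cached" figure shrinks when a big job starts, the memory was
only being borrowed by the cache.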

>     In tcsh the "limit" command and in bash the "ulimit -a" command will,
> for the user running it, tell if there is any limit on memory usage for
> them.  The user should see "unlimited"; otherwise that could account
> for the problem.
The output I get when being logged in as that user running the jobs on
his machines is:

michael :/home/michael/petrie %limit
cputime         unlimited
filesize        unlimited
datasize        unlimited
stacksize       unlimited
coredumpsize    0 kbytes
memoryuse       unlimited
descriptors     1024 
memorylocked    unlimited
maxproc         14845 
openfiles       1024 

I suppose these settings are alright.

>     If your kernel was compiled with "CONFIG_NOHIGHMEM" ("locate
> Configure.help" to find documentation on this), processes will
> not be able to use more than 1 GB, regardless of the limit/ulimit
> setting.
The kernel is compiled with HIGHMEM support (4GB, not 64GB). If the
machine is freshly rebooted, running a job which requires 3GB is no
problem and there is no strange memory usage. The job can also be
run more than once without any problems. We are currently not able
to reproduce the situation on demand (which would maybe give us a
hint why this is happening, or at least a means to avoid it). It
just happens from time to time...
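
To double-check the HIGHMEM setting of the kernel that is actually
running, one can grep the installed build config (a sketch; the config
file path varies by distribution, and /proc/config.gz only exists if
the kernel was built with config access enabled):

```shell
# Look for the HIGHMEM option in the running kernel's build config.
# CONFIG_HIGHMEM4G=y means the 4GB variant is enabled.
grep HIGHMEM "/boot/config-$(uname -r)" 2>/dev/null \
  || zgrep HIGHMEM /proc/config.gz 2>/dev/null \
  || echo "kernel config not found in the usual places"
```

That at least rules out a mismatch between the kernel you compiled and
the one the machine booted.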

Urte
--



_______________________________________________
Seawolf-list mailing list
[EMAIL PROTECTED]
https://listman.redhat.com/mailman/listinfo/seawolf-list
