memory usage (longish)

2002-07-08 Thread Simon Juden

Got a funny problem with memory usage.  Initially assumed likely to be a
linux issue.  Had the problem on RedHat 6.2, upgraded to RedHat 7.3, still
have the problem.  Not sure now if it's a linux or tomcat issue.

Set-up/environment: Red Hat (now 7.3, with ext3), 1xi686 processor, 1.5Gb
memory, Apache 1.3.26, Tomcat 3.2.4, JVM 1.3.1_01 (for the curious there are
plans to upgrade these latter three elements but not yet).  There are four
of these in a webserver farm (hardware-accelerated SSL and loadbalancing
happens before traffic gets near the farm).  The problem happens on two of
the four boxes occasionally, on one fairly often and on the fourth almost
every other day.

There are two ways the problem manifests itself.  The most usual is as
follows.

Run the webapp in Tomcat (max heap size set to 500Mb).  After some time Tomcat
falls over with an OutOfMemoryError.  When it does this it is only using
about 150Mb total (so in particular the heap is well under the 500Mb max) as
reported by ps, top etc.
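
For reference, that 500Mb cap is just the JVM's -Xmx flag passed in through
the Tomcat startup environment - something along the lines of the sketch
below, though the exact variable name depends on the tomcat.sh/wrapper in
use, so treat the names as assumptions rather than a literal copy of what's
on these boxes:

   # assumed startup-script variable; adjust for whatever your tomcat.sh reads
   TOMCAT_OPTS="-Xmx500m"
   export TOMCAT_OPTS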

However the box reports all but a few megs of the 1.5Gb in use.  It reports
all this as still in use when every single java process (and every process
my clients wrote) is shut down.  There's about 60 Mb-worth of usage from the
CPU processes and that's all I can account for.  There is virtually no swap
space usage though there's plenty available.  The output of free (after
tomcat and all non-canonical processes shut down) is:

[root@prodfarm3 logs]# free
             total       used       free     shared    buffers     cached
Mem:       1548084    1351560     196524          0     106900    1082072
-/+ buffers/cache:     162588    1385496
Swap:       538136          0     538136

Load averages are not high throughout any of this (1.6-1.7 or so).

I know the DMA can take memory (but surely not this much and not for so
long).  I can't for the life of me think what could be using all the rest.  

The other way the problem manifests itself is that Tomcat falls over with an
OutOfMemoryError but there are still several hundred megs available on the
linux box.  The webapp does not allocate huge objects - I can fully load it
on the test box with a bunch of (robot) users and it uses about 200 megs or
so.  So how it finds itself out of memory in this instance I don't know.
I'm not sure if this is the same problem or a different one.
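
One diagnostic that ought to narrow this down: the Sun 1.3 JVM supports
-verbose:gc, which prints heap occupancy at each collection, so you can see
whether the heap is genuinely full at the moment the OutOfMemoryError is
thrown.  A rough sketch (same assumption about the variable name as above):

   # -verbose:gc is a standard Sun 1.3 flag; variable name again assumed
   TOMCAT_OPTS="$TOMCAT_OPTS -verbose:gc"
   export TOMCAT_OPTS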

I've written a few robots and bombarded a test box (identical configuration,
except less physical memory, only 250Mb) with a bunch of concurrent user
sessions.  A totally different phenomenon: the load average soars (to about
8-9) but java handles its memory correctly and the thing doesn't die.
However, after an extended run, turning tomcat off again leaves 248 of 250Mb
reported in use (but this doesn't seem to be a problem when rerunning stuff).
My point is that this isn't happening because the app's getting overloaded
(the threadpool for tomcat has a max size of 200 but it's never used more
than 120-130 threads).
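
For the thread counts: with LinuxThreads (the threading library on these Red
Hat releases) every Java thread shows up as its own entry in ps, so a rough
count during a run is something like:

   # each Java thread appears as a separate ps entry under LinuxThreads;
   # the [j] trick stops grep counting itself
   ps ax | grep -c '[j]ava'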

Not to put too fine a point on it: help ;-)  Are there any FMs I should be
R'ing but haven't found (I've tried my favourite metamanual (Google) and the
red hat and tomcat list archives)?  Are there configuration settings that
could affect/cause this kind of behaviour?  I'm kind of running out of
ideas...


Re: memory usage (longish)

2002-07-08 Thread Jens Stutte


I cannot help you with the real problem, but I think both kinds of error are
the same.  The missing memory in your free output seems to have gone into
Linux's file system buffers and cache (cached: 1082072, nearly 1GB), so
almost every file on your hard drive has most likely been cached ;-).  If
any other real process needs memory, Linux automatically shrinks this cache.
So don't worry too much about apparently low free memory as long as the
cache usage is high in the output of free.
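
To make that concrete, the row to watch is "-/+ buffers/cache"; a rough
sketch (plain awk, and the arithmetic simply re-derives the figures from the
output you posted):

   # "really used" = used - buffers - cached = 1351560 - 106900 - 1082072 = 162588 kB
   # "really free" = free + buffers + cached =  196524 + 106900 + 1082072 = 1385496 kB
   free | awk '/buffers\/cache/ { print "used:", $3, "kB   free:", $4, "kB" }'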

I don't think this is a Tomcat issue, but a JVM problem.  On our servers we
usually use the IBM JDK 1.3, which in my experience is more stable in a
server environment (while for client applications with Swing I always use a
Sun JDK) and does not produce these annoying server stalls during garbage
collection if your application uses many little objects.  It uses more
memory on startup of the server, but then usually grows more slowly.
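
Switching JVMs under Tomcat 3.x is just a matter of pointing the startup
script at the other JDK via JAVA_HOME - roughly as below; the install path
here is only an assumption:

   # assumed IBM JDK install path; tomcat.sh finds the java binary through JAVA_HOME
   JAVA_HOME=/opt/IBMJava2-13
   export JAVA_HOME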

Jens Stutte



   
  
[Simon Juden's original message of 08/07/2002 12:04 to the Tomcat Users List
was quoted here in full; see the start of the thread.]

RE: memory usage (longish)

2002-07-08 Thread Simon Juden

Thanks Jens - yes what you say about the memory makes sense (there's not
that much in the cache when the thing's running btw ;-).  

Has anyone else had this kind of issue using tomcat 3.2.4 with a Sun JVM
(specifically 1.3.1_01 on Red Hat Linux 7.3 (Valhalla)) - and if so what
fixed it?

Thanks

S
