RE: Java process growth under Linux...leak?

2004-08-31 Thread Mark Maigatter
> Wade Chandler wrote:
>
>> Mark Maigatter wrote:
>> 
>>> We have a Tomcat 5.0.25-based web site for uploading images and assorted
>>> files and managing them.
>>>  
>>> We have found that the Java process that Tomcat is running under is
>>> gradually growing when repetitively processing files uploaded and 
>>> stripped
>>> out of the form submissions by the Apache FileUpload component.  All
>>> signs seem to point to a memory leak.
>>>  
>>> Upon the submission of about 500 files we had a 31MB growth in the
>>> size of the Java process, as reported by "top".
>>> However, the Sun Java jvmstat shows that the Java heap is staying 
>>> relatively
>>> constant.  The "-gc" numbers fluctuate in a manner that shows reasonable
>>> garbage collection activity, and the total used across the s0/s1/eden/old
>>> spaces stays within the range of the initial numbers.
>>>  
>>> My question is what would you recommend to isolate the process growth?
>>>  
>>> Is there a way within Java to see the underlying process growth to help
>>> isolate it in the processing cycle?
>>>  
>>> --mark
>>>
>> 
>> How large were the files you were uploading?  You just said you
>> uploaded 500 files.  You should expect to see process memory growth.  The
>> JVM has its own object heap where it manages its "internal" memory.
>> Then there is the process and its memory, which is a C heap.

I took the file content itself out of the equation and just reference a path
to a series of files that I have on a locally mounted CD-ROM.  So the form
parsing that FileUpload is doing is just taking apart the form fields that
direct processing.

>> I see that you uploaded 500 files.  Were these one right after the 
>> other, or were they close to simultaneous?  

These were one after another.  Once one form was processed, the next was sent.

>> Also, how are you uploading the files?  

This is done by submitting a form from within the Java environment to a URL.

>> Are you using some type of parser?  Are you using the
>> commons FileUpload component?  If the JVM reports good memory collection,
>> then there is no memory leak in Tomcat.  31MB of growth for a process
>> uploading 500 files shouldn't be that bad, depending on how they were
>> uploaded and on the file sizes.

The problem is that it keeps growing over time.  On a production server, the
process size hits 1.6GB.  Again, the Java heap is smaller, on the order of
600MB.

>> 
>> Think about it: if the files were between 60KB and 100KB, then you have a
>> total of 30MB to 50MB of memory in that data alone, not including
>> your application and any other buffers you may or may not be creating
>> while uploading.
>> 
>> For performance reasons the VM isn't going to resize the heap as soon
>> as it frees a group of Java objects, because as far as it knows you may
>> come along and upload 50MB worth of data immediately after the first
>> batch.  Resizing the heap takes time and CPU resources and hurts
>> performance, so the VM will reuse this memory over and over again.
>> 
>> I would look at any loops you have reading from the stream.  Do you
>> create a bunch of small byte arrays while uploading the files?  Maybe
>> you could increase the buffer size, null the arrays out after you
>> finish reading to tell the VM you are done with them (for when the VM
>> collects), and then see if that affects the memory growth.  This should
>> help speed the file upload a bit and free the buffers in loops a little
>> quicker if you aren't already nulling the arrays.  If, however, you are
>> uploading much of this data simultaneously, then a 31MB spike in memory
>> usage shouldn't be a surprise no matter what.
>> 
>> You can impose limits on the VM with command-line switches.  You can
>> also devote more memory to eden or the survivor spaces so that the VM
>> can make better use of the most commonly used memory.  You can find
>> more info on this topic and others at this URL:
>> http://java.sun.com/docs/performance/
>> Many docs.  One for you might be:
>> http://java.sun.com/docs/hotspot/VMOptions.html
>> Scroll down to the bottom and check out these options:
>> -XX:NewRatio
>> -XX:NewSize
>> -XX:SurvivorRatio
>> 
>> The defaults for the -server VM are meant to give the best performance
>> for a multi-user, multi-threaded application such as Tomcat.  So, unless
>> you are running out of memory, or you need to limit the app server's
>> growth because you have a bunch of other applications running on the
>> same server, I suggest sticking with the defaults.
>> 
>> Wade
>> 
>> 
>> -
>> To unsubscribe, e-mail: [EMAIL PROTECTED]
>> For additional commands, e-mail: [EMAIL PROTECTED]
>> 
>> 
>> 
>
> Just noticed you wrote about the upload component.  I haven't looked at
> that code's parser to see how it is handling reading bytes from the
> stream, but I'm sure there have been many eyes on it.

RE: Java process growth under Linux...leak?

2004-08-31 Thread Nandish Rudra
Hi,

Search for "Java HotSpot" on Google and look at the following Java options:
-XX:+UseParallelGC and -XX:MaxHeapFreeRatio.  Set JAVA_OPTS with these and
see whether that solves the problem.
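Assuming "maxHeapRation" means -XX:MaxHeapFreeRatio, and assuming Tomcat is started via catalina.sh (which picks up JAVA_OPTS), the suggestion might be wired up like the sketch below.  The specific values are illustrative, not tuned recommendations:

```shell
# Sketch only: export JAVA_OPTS before running catalina.sh.
# -XX:+UseParallelGC      : parallel young-generation collector
# -XX:MaxHeapFreeRatio=40 : shrink the heap sooner when it is mostly free
# -XX:MinHeapFreeRatio=20 : allow less slack free space before expanding
JAVA_OPTS="-server -XX:+UseParallelGC -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=20"
export JAVA_OPTS
echo "$JAVA_OPTS"
```

Note that shrinking the heap returns memory to the OS only for the Java heap; native (C heap) growth is unaffected.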

Regards,
NR


-Original Message-
From: Mark Maigatter [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 31, 2004 12:21 PM
To: '[EMAIL PROTECTED]'
Subject: Java process growth under Linux...leak?






Re: Java process growth under Linux...leak?

2004-08-31 Thread Wade Chandler
Just noticed you wrote about the upload component.  I haven't looked at
that code's parser to see how it is handling reading bytes from the
stream, but I'm sure there have been many eyes on it.  I would say this
is probably just a memory-usage-over-time issue (over a short period of
time).  Use those links I gave you and play with the memory switches a
bit to see if you can find a configuration that works best for you.
The switches
-XX:MaxHeapFreeRatio
and
-XX:MinHeapFreeRatio
can make the VM resize the heap differently, but you will want to be
careful: you may make things very slow by messing with the defaults for
those switches.

Wade


Re: Java process growth under Linux...leak?

2004-08-31 Thread Wade Chandler

How large were the files you were uploading?  You just said you uploaded
500 files.  You should expect to see process memory growth.  The JVM has
its own object heap where it manages its "internal" memory.  Then there
is the process and its memory, which is a C heap.

I see that you uploaded 500 files.  Were these one right after the
other, or were they close to simultaneous?  Also, how are you uploading
the files?  Are you using some type of parser?  Are you using the
commons FileUpload component?  If the JVM reports good memory collection,
then there is no memory leak in Tomcat.  31MB of growth for a process
uploading 500 files shouldn't be that bad, depending on how they were
uploaded and on the file sizes.

Think about it: if the files were between 60KB and 100KB, then you have a
total of 30MB to 50MB of memory in that data alone, not including your
application and any other buffers you may or may not be creating while
uploading.

For performance reasons the VM isn't going to resize the heap as soon as
it frees a group of Java objects, because as far as it knows you may come
along and upload 50MB worth of data immediately after the first batch.
Resizing the heap takes time and CPU resources and hurts performance, so
the VM will reuse this memory over and over again.

I would look at any loops you have reading from the stream.  Do you
create a bunch of small byte arrays while uploading the files?  Maybe
you could increase the buffer size, null the arrays out after you finish
reading to tell the VM you are done with them (for when the VM collects),
and then see if that affects the memory growth.  This should help speed
the file upload a bit and free the buffers in loops a little quicker if
you aren't already nulling the arrays.  If, however, you are uploading
much of this data simultaneously, then a 31MB spike in memory usage
shouldn't be a surprise no matter what.
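A minimal sketch of the buffer-reuse idea above, assuming a plain stream-copy loop; the class name, method, and 8KB buffer size are illustrative, not taken from Mark's actual code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical helper: copy a stream with ONE reused buffer instead of
// allocating a small byte[] on every loop iteration.
public class StreamCopy {

    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192]; // one larger buffer, reused for every read
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total; // buf becomes unreachable when this frame exits
    }

    public static void main(String[] args) throws IOException {
        // Simulate copying an uploaded 100000-byte body.
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(new byte[100000]), sink);
        System.out.println("copied " + copied + " bytes");
    }
}
```

The point is only that the buffer is allocated once per request rather than once per read; nulling a reference matters mainly when the variable would otherwise outlive its last use.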

You can impose limits on the VM with command-line switches.  You can
also devote more memory to eden or the survivor spaces so that the VM
can make better use of the most commonly used memory.  You can find
more info on this topic and others at this URL:
http://java.sun.com/docs/performance/
Many docs.  One for you might be:
http://java.sun.com/docs/hotspot/VMOptions.html
Scroll down to the bottom and check out these options:
-XX:NewRatio
-XX:NewSize
-XX:SurvivorRatio

The defaults for the -server VM are meant to give the best performance
for a multi-user, multi-threaded application such as Tomcat.  So, unless
you are running out of memory, or you need to limit the app server's
growth because you have a bunch of other applications running on the
same server, I suggest sticking with the defaults.

Wade


Java process growth under Linux...leak?

2004-08-31 Thread Mark Maigatter
We have a Tomcat 5.0.25-based web site for uploading images and assorted
files and managing them.
 
We have found that the Java process that Tomcat is running under is
gradually growing when repetitively processing files uploaded and stripped
out of the form submissions by the Apache FileUpload component.  All signs
seem to point to a memory leak.
 
Upon the submission of about 500 files we had a 31MB growth in the size of
the Java process, as reported by "top".
 
However, the Sun Java jvmstat shows that the Java heap is staying relatively
constant.  The "-gc" numbers fluctuate in a manner that shows reasonable
garbage collection activity, and the total used across the s0/s1/eden/old
spaces stays within the range of the initial numbers.
 
My question is what would you recommend to isolate the process growth?
 
Is there a way within Java to see the underlying process growth to help
isolate it in the processing cycle?
 
--mark
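On Mark's last question, seeing the OS-level process size from inside Java: there is no standard API for this in J2SE 1.4, but on Linux one option is to parse /proc/self/status.  A minimal sketch; the class name is illustrative, and it assumes the usual "VmRSS:  12345 kB" line format:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Hypothetical helper: report this JVM's resident set size (what "top"
// shows as RSS) by parsing the VmRSS line of /proc/self/status on Linux.
public class ProcSize {

    /** Returns the resident set size in kB, or -1 if the line is missing. */
    public static long residentKb() throws IOException {
        BufferedReader r = new BufferedReader(new FileReader("/proc/self/status"));
        try {
            String line;
            while ((line = r.readLine()) != null) {
                // The line looks like: "VmRSS:     12345 kB"
                if (line.startsWith("VmRSS:")) {
                    String[] parts = line.trim().split("\\s+");
                    return Long.parseLong(parts[1]);
                }
            }
        } finally {
            r.close();
        }
        return -1;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("VmRSS kB: " + residentKb());
    }
}
```

Logging this value at points in the processing cycle, alongside Runtime.getRuntime().totalMemory(), would help show whether the growth is in the Java heap or in native (C heap) allocations.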