We have a Tomcat 5.0.25-based web site for uploading and managing images
and assorted files.
We have found that the Java process Tomcat runs under grows gradually
when it repeatedly processes files uploaded and stripped out of form
submissions by the Apache Commons FileUpload component. All signs seem to
point to a memory leak.
After the submission of about 500 files we saw 31MB of growth in the size of
the Java process as reported by "top". However, Sun's jvmstat shows that the Java heap is staying relatively
constant: the "-gc" numbers fluctuate in a manner that shows reasonable
garbage collection activity, and the total used across s0/s1/eden/old
stays within the range of the initial numbers.
My question is: what would you recommend to isolate the cause of the process growth?
Is there a way, within Java, to observe the underlying process growth and
pin down where in the processing cycle it happens?
--mark
How large were the files you were uploading? You just said you uploaded 500 files, so you should expect to see some process memory growth. The JVM has its own object heap where it manages its "internal" memory; then there is the process and its memory, which is a C heap.
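If you want to watch the object heap from inside the webapp, something like this will print it (a minimal sketch; note that java.lang.Runtime only shows the Java heap, not the C-level process size that top reports):

    Runtime rt = Runtime.getRuntime();
    long used = rt.totalMemory() - rt.freeMemory();
    System.out.println("heap used=" + used
        + " total=" + rt.totalMemory()
        + " max=" + rt.maxMemory());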
I see that you uploaded 500 files. Were these one right after the other, or close to simultaneous? Also, how are you uploading the files? Are you using some type of parser, such as the commons file upload component? If the JVM reports healthy garbage collection, then there is no memory leak in Tomcat itself. 31MB of growth for a process uploading 500 files isn't bad at all, depending on how they were uploaded and how big they were.
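If it is the commons upload component, check the size threshold: it decides whether each uploaded item is buffered in memory or spooled to disk. A rough sketch, assuming the 1.0 DiskFileUpload API (the paths and servlet name here are made up):

    import java.io.IOException;
    import java.util.List;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.apache.commons.fileupload.DiskFileUpload;
    import org.apache.commons.fileupload.FileUploadException;

    public class UploadServlet extends HttpServlet {
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            try {
                DiskFileUpload upload = new DiskFileUpload();
                upload.setSizeThreshold(4096);     // items over 4KB spool to disk, not memory
                upload.setRepositoryPath("/tmp");  // made-up spool directory
                List items = upload.parseRequest(request);  // List of FileItem
                // ... iterate the items and stream each one to its destination ...
            } catch (FileUploadException e) {
                throw new ServletException(e);
            }
        }
    }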
Think about it: if the files were between 60KB and 100KB each, then 500 of them add up to 30MB to 50MB of memory in that data alone, not counting your application and whatever other buffers you may or may not be creating while uploading.
For performance reasons the VM isn't going to resize the heap down the moment it frees a group of Java objects, because as far as it knows you may come along and upload another 50MB of data immediately after the first batch. Resizing the heap takes time and CPU resources and hurts performance, so the VM will reuse that memory over and over again instead.
I would look at any loops you have reading from the stream. Do you create a bunch of small byte arrays while uploading the files? You could increase the buffer size, and null the arrays out after you finish reading to tell the VM you are done with them (so it can collect), then see whether that affects the memory growth. That should speed the upload a bit and let the VM reclaim loop buffers a little sooner if you aren't nulling them. If, however, you are uploading much of this data simultaneously, then a 31MB spike in memory usage shouldn't be a surprise no matter what.
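Something along these lines is what I mean; this is just a sketch, and "item" and "destFile" stand in for whatever your code actually uses:

    java.io.InputStream in = item.getInputStream();
    java.io.OutputStream out = new java.io.FileOutputStream(destFile);
    byte[] buf = new byte[8192];   // one decent-sized buffer, reused for the whole copy
    int n;
    while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);
    }
    out.close();
    in.close();
    buf = null;                    // drop the reference so the VM can collect it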
Basically, you can impose limits on the VM with command-line switches. You can also devote more memory to the eden or survivor spaces so that the VM can make better use of the most commonly used memory. You can find more info on this topic and others at this URL:
http://java.sun.com/docs/performance/
Many docs. One for you might be:
http://java.sun.com/docs/hotspot/VMOptions.html
Scroll down to the bottom and check out these options (an example follows the list):
-XX:NewRatio
-XX:NewSize
-XX:SurvivorRatio
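For example, you could put something like this in CATALINA_OPTS (the sizes here are made up; tune them for your load):

    CATALINA_OPTS="-server -Xms128m -Xmx128m -XX:NewRatio=3 -XX:SurvivorRatio=8"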
Basically, the defaults for the -server VM are tuned to give the best performance for a multi-user, multi-threaded application such as Tomcat. So unless you are running out of memory, or you need to rein in the app server's growth because a bunch of other applications share the same machine, I suggest sticking with the defaults.
Wade
