[ https://issues.apache.org/jira/browse/VELOCITY-223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12489432 ]

Christopher Schultz commented on VELOCITY-223:
----------------------------------------------

After looking at the patched VelocityCharStream.java code (attached above), I'm
not sure where all the memory savings are coming from.

I'm guessing that the patched version just adds a trip through the
StringImagePool to 'canonicalize' the strings. The only overhead being saved,
then, is that of the additional String objects (keep reading).
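If the pool works the way such canonicalizers usually do, it is essentially a
caller-controlled version of String.intern(): a map from each distinct token
image to one shared instance. A minimal sketch of the idea (the class name and
method here are hypothetical, not necessarily the interface of the attached
StringImagePool):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a canonicalizing pool: repeated token images
// ("else", "end", ...) all resolve to a single shared String instance,
// so only one object per distinct image stays live.
public final class TokenImagePool {
    private final Map<String, String> pool = new HashMap<>();

    // Returns the pooled instance if one exists, otherwise stores the
    // argument and returns it. Similar in spirit to String.intern(),
    // but the pool's lifetime is controlled by the caller.
    public synchronized String canonicalize(String image) {
        String pooled = pool.get(image);
        if (pooled == null) {
            pool.put(image, image);
            pooled = image;
        }
        return pooled;
    }
}
```

Two String objects with equal contents come back as one and the same reference
after canonicalization, which is exactly the saving discussed below.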

The bulk of a String object is usually the character array holding the
contents of the String. Since Strings are immutable, the Java folks figured
they'd use that to their advantage and share character arrays between String
objects created from each other. That means that if you start out with a big
String and create lots of little substrings from that first one, you get lots
of objects, but only a single copy of the actual content (one big char array
and lots of indexes into it).
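For illustration, that shared-backing-array design (which is how java.lang.String
of that era behaved; this is a simplified stand-in, not the real class) can be
sketched as:

```java
// Simplified sketch of a String whose substrings share the parent's char
// array: each substring stores only a reference to the same array plus an
// offset and length, so no characters are copied.
public final class SharedString {
    private final char[] value;  // shared by the parent and all substrings
    private final int offset;
    private final int count;

    public SharedString(String s) {
        this.value = s.toCharArray();
        this.offset = 0;
        this.count = value.length;
    }

    private SharedString(char[] value, int offset, int count) {
        this.value = value;
        this.offset = offset;
        this.count = count;
    }

    // O(1) in time and memory: the new object points into the same array.
    public SharedString substring(int begin, int end) {
        return new SharedString(value, offset + begin, end - begin);
    }

    @Override
    public String toString() {
        return new String(value, offset, count);
    }
}
```

So a thousand substrings of one big template string cost a thousand small
header-plus-fields objects, but only one copy of the characters.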

Has anyone actually observed any significant memory savings from this?

I seem to recall that the minimum byte overhead for any object is about 8 bytes
(that includes the superclass pointer, etc.). The String class contains 3
4-byte ints and a reference to the char array (the number of bytes depends on
the architecture, VM, etc.). Assuming a vanilla 32-bit VM with no tricks, we're
talking about adding 16 bytes to the existing 8-byte overhead for a grand total
of 24 bytes per String object (remember, the character array should be shared).

Perhaps there are so many of these little objects lying around that simply 
removing all the extra copies of "else" really helps.
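The arithmetic above can be checked with a quick back-of-the-envelope estimate
(the duplicate count in the example is made up for illustration):

```java
// Back-of-the-envelope check of the figures above, under the stated
// 32-bit-VM assumption: 8-byte object header + 3 ints (12 bytes) +
// 1 char[] reference (4 bytes) = 24 bytes per String object, ignoring
// the (shared) char array and any alignment padding.
public class StringOverheadEstimate {
    static final int HEADER = 8;          // minimum per-object overhead
    static final int FIELDS = 3 * 4 + 4;  // offset, count, hash + array ref

    // Estimated bytes of pure object overhead for n duplicate Strings.
    static long estimate(long n) {
        return n * (HEADER + FIELDS);
    }

    public static void main(String[] args) {
        long n = 100_000;  // hypothetical number of duplicate "else" images
        System.out.println(estimate(n) / 1024 + " KB for " + n + " duplicates");
    }
}
```

At 24 bytes apiece, 100,000 duplicate images is only about 2.3 MB, which is
why I wonder whether the savings really come from the String objects alone.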

Are there other places where large amounts of memory get used? The memory map
attached to this bug suggests that the Template objects are responsible for a
lot of it. I imagine that an AST is built from the original text of the
template. Do we continue to keep the text of the template in memory after
parsing? It seems to me that the template texts themselves could add up if
they're being kept around in memory.

Anyone care to comment?


> VMs that use a large number of directives and macros use excessive amounts of 
> memory - over 4-6MB RAM per form
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: VELOCITY-223
>                 URL: https://issues.apache.org/jira/browse/VELOCITY-223
>             Project: Velocity
>          Issue Type: Bug
>          Components: Engine
>    Affects Versions: 1.3.1
>         Environment: Operating System: All
> Platform: All
>            Reporter: Christian Nichols
>             Fix For: 1.6
>
>         Attachments: 223-patch.txt, AllVelocityMemoryByClass.html, 
> StringImagePool.java, VelocityCharStream.java, VelocityMemory.JPG
>
>
> Our application FinanceCenter is based on Velocity as the template engine. We
> have a library of about 200 macros and about 400 VM files. Because the
> Velocity parser copies the macro body into the VM during parsing, macros that
> are frequently used (even though identical and using local contexts) use up
> large amounts of memory. On our Linux server (running Redhat 7.2 with Sun
> JDK 1.4.1_04) we can easily use up over 1GB of RAM simply by opening many
> forms (about 150) - the server starts out using 60MB after startup. This
> memory times out after 5 minutes and is returned, which tells me that it is
> screen memory. Our problem is that the NT JVM and Linux JVM (32 bit) are
> currently limited to about 1.6 - 2.0 GB of RAM for heap space. Thus, using a
> fair number of forms in the application leaves little space for user session
> data.
> We have implemented a caching mechanism for compiled templates and integrated
> it into Velocity so that cached objects are timed out of the cache, but the
> server is still using large amounts of memory. We finally had to rewrite many
> of our macros into Java so that memory usage would be reduced (note that
> these macros were doing complex screen formatting, not business logic). Doing
> this has reduced our memory by about 30%. This is currently our biggest issue
> with Velocity and is causing us to review our decision to stay with Velocity
> going forward. This is because we will likely end up with close to 1,000
> forms by the end of next year and need to know that Velocity can deal with
> this. Is there any work underway to share compiled macro ASTs? This would
> greatly reduce the amount of memory used. I have reviewed the parser code
> that is doing this but it seems that this is an embedded part of the design
> and not easily changed.

-- 
This message is automatically generated by JIRA.

