[ http://issues.apache.org/jira/browse/VELOCITY-193?page=comments#action_12331019 ]

Henning Schmiedehausen commented on VELOCITY-193:
-------------------------------------------------

I tried the test program with a lot of memory (-Xmx1024m) and found the 
following interesting fact (I added a Thread.sleep(longtime) so that the 
program kept running):

When looking at the open files of the program:

java    13535 henning    3w   REG  253,0     3092  1622746 /tmp/velocity.log
java    13535 henning    4w   REG  253,0     3092  1622746 /tmp/velocity.log
java    13535 henning    5w   REG  253,0     3092  1622746 /tmp/velocity.log
java    13535 henning    6w   REG  253,0     3092  1622746 /tmp/velocity.log
java    13535 henning    7w   REG  253,0     3092  1622746 /tmp/velocity.log
java    13535 henning    8w   REG  253,0     3092  1622746 /tmp/velocity.log
java    13535 henning   11w   REG  253,0     3092  1622746 /tmp/velocity.log
java    13535 henning   12w   REG  253,0     3092  1622746 /tmp/velocity.log
java    13535 henning   13w   REG  253,0     3092  1622746 /tmp/velocity.log

You can see that a number of engines are still alive (because they keep the 
log file open). If you reduce the memory (-Xmx7m -Xms1m), there will be 
fewer. So the engines are only reclaimed by garbage collection.

IMHO we should add an optional shutdown() method to the engine, allowing 
people to explicitly shut the engine down. If we use this in the Anakia and 
Texen tasks, we can at least help the people using the Ant tasks.
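
To make this concrete, here is a rough caller-side sketch. The shutdown() 
call is the proposed addition and does not exist yet (so it is commented 
out); everything else is the current API:

    import org.apache.velocity.app.VelocityEngine;

    public class EngineLifecycle {
        public static void main( String[] args ) throws Exception {
            VelocityEngine engine = new VelocityEngine();
            engine.init();
            try {
                // ... merge templates as usual ...
            } finally {
                // Proposed addition: explicitly release the logger (and the
                // open velocity.log file handle) instead of waiting for the
                // engine to be garbage collected.
                // engine.shutdown();
            }
        }
    }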

> Memory/logger leak with multiple VelocityEngine instances
> ---------------------------------------------------------
>
>          Key: VELOCITY-193
>          URL: http://issues.apache.org/jira/browse/VELOCITY-193
>      Project: Velocity
>         Type: Bug
>   Components: Source
>     Versions: 1.3.1
>  Environment: Operating System: All
> Platform: Macintosh
>     Reporter: Ian Beatty
>     Priority: Critical
>      Fix For: 1.5
>  Attachments: VelocityBugIsolator.tgz
>
> When creating and then releasing to garbage collection multiple
> VelocityEngine instances, the instances are apparently not closing out or
> otherwise letting go of their logger instances. As a result, code that
> needs to create and destroy several VelocityEngine instances will
> eventually choke and die. This happens with either Avalon Logkit or Log4j,
> although the exact nature of the choking differs. This test program
> isolates the problem:
>
> import org.apache.velocity.app.VelocityEngine;
>
> public class IsolateVelocityBug {
>     static public void main( String[] args ) {
>         int repCount = Integer.parseInt( args[0] );
>         for( int i = 0; i < repCount; i++ ) {
>             System.out.println( "Test repetition " + i + "..." );
>             try {
>                 final VelocityEngine velocityEngine = new VelocityEngine();
>                 velocityEngine.init();
>             } catch( Exception e ) {
>                 throw new Error( e );
>             }
>         }
>     }
> }
>
> Run the program with an integer command-line argument specifying the
> number of times to cycle through the loop, and make sure
> velocity-1.3.1.jar, commons-collections.jar, and either an Avalon Logkit
> or Log4j JAR are on your classpath. (I tested with logkit-1.0.1.jar and
> log4j-1.1.3.jar.)
>
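> Assuming all of the JARs sit in the working directory, the compile/run
> commands look something like this (300 is just an iteration count large
> enough to trigger the problem):
>
>     javac -classpath velocity-1.3.1.jar IsolateVelocityBug.java
>     java -classpath .:velocity-1.3.1.jar:commons-collections.jar:log4j-1.1.3.jar IsolateVelocityBug 300
>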
> What *should* happen is that the program completes its specified number of
> loops, doing nothing but writing "Test repetition" over and over with an
> incrementing number. What *does* happen, at least on my machine, depends
> on which logging package is provided for Velocity.
>
> Using Avalon Logkit 1.0.1, the program runs fine for 252 iterations; on
> the 253rd, it aborts with the following message:
>
>     "PANIC : Error configuring AvalonLogSystem : java.io.FileNotFoundException: /Users/ibeatty/Development/javaDev/VelocityBugIsolator/velocity.log (Too many open files)"
>
> Using Log4j 1.1.3, the program runs fine for only one iteration; on the
> second and any subsequent iterations, it continues but prints out a whole
> mess of
>
>     "log4j:ERROR Attempted to append to closed appender named [null].
>      log4j:WARN Not allowed to write to a closed appender."
>
> That happens for as long as I care to let it run (95 iterations, with
> something over 800 lines of such errors per iteration by the end).
>
> To me, it sure looks like Velocity is leaving dangling loggers behind as
> VelocityEngine instances are created and discarded, and that the two
> logging systems respond differently to this but both have problems.
>
> Why, might you ask, should anyone care about making many VelocityEngine
> instances? I ran into it when developing a major web app using JUnit to
> build comprehensive test suites. To run independently, every test has to
> start from scratch, which means getting its own VelocityEngine. Many tests
> mean many instances, and the logging problem kicks in. Running JUnit test
> suites within IntelliJ IDEA and using Log4j, the ERROR/WARN messages were
> more than a nuisance; eventually, I'd start getting out-of-memory errors,
> too. These went away when I changed the tests to use a shared
> VelocityEngine instance (which caused its own set of problems).
>
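> For what it's worth, the shared-engine workaround is nothing more than a
> single static instance used by every test, roughly like this (sketch only,
> the names here are made up):
>
>     import junit.framework.TestCase;
>     import org.apache.velocity.app.VelocityEngine;
>
>     public abstract class SharedEngineTestCase extends TestCase {
>         // One engine for the whole suite, so only one logger (and one
>         // velocity.log handle) is ever created.
>         protected static final VelocityEngine ENGINE = new VelocityEngine();
>         static {
>             try {
>                 ENGINE.init();
>             } catch( Exception e ) {
>                 throw new Error( e );
>             }
>         }
>     }
>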
> Using the binary download of Velocity 1.3.1, which claims to have been
> created on 2003-04-01.
>
> I find it hard to believe nobody else has tripped over this before, so
> maybe it's sensitive to the OS or something. It happened whether I
> compiled the test code with Javac or Jikes. Using Java 1.4.1_01.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira

