170MB of macros sounds like a leak.

I spent some time trying to figure out how to write the invitation
app more defensively and started looking for the cause of
http://jira.xwiki.org/jira/browse/XWIKI-4934
I wrote a test which dumps the names of all of the working Velocity
engines, the namespaces in each engine, the macros in each namespace,
and the identity hash of each macro's name (System.identityHashCode,
which in most Java implementations is derived from the object's
memory address). The macro names appear to be interned, so counting
references to that address in the heap analysis tool should reveal
any duplicated macros.
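For context, here is a minimal stand-alone Java sketch (not part of the script below, hypothetical names) of the property being relied on: equal but distinct String objects have different identities, while interned copies collapse to a single object, so System.identityHashCode tells copies apart where equals() cannot:

```java
public class InternDemo {
    public static void main(String[] args) {
        String a = "rollover";              // a literal: automatically interned
        String b = new String("rollover");  // equal to a, but a distinct object
        String c = b.intern();              // the canonical copy: same object as a

        System.out.println(a.equals(b));    // true: same characters
        System.out.println(a == c);         // true: intern() returned the literal's object
        // Identity hashes (typically derived from the object's address)
        // match only when it is the same object:
        System.out.println(System.identityHashCode(a) == System.identityHashCode(c)); // true
    }
}
```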

{{html clean=false}}<style>p {font-family:monospace;}</style>{{/html}}

{{groovy}}
// Look up the VelocityFactory component, which caches one engine per key.
vf = com.xpn.xwiki.web.Utils.getComponent(org.xwiki.velocity.VelocityFactory.class);
println(vf.velocityEngines.size());
for (String engineName : vf.velocityEngines.keySet()) {
  println("Velocity engine: " + engineName);
  // Reach into Velocity's internals:
  // RuntimeInstance -> VelocimacroFactory -> VelocimacroManager.
  vmm = vf.velocityEngines.get(engineName).engine.ri.vmFactory.vmManager;
  println("    Velocity Macro Namespaces:");
  for (String x : vmm.namespaceHash.keySet()) {
    if (vmm.globalNamespace.equals(vmm.namespaceHash.get(x))) {
      println("        Global Macro Namespace: " + x + " ("
          + vmm.namespaceHash.get(x).size() + " entries)");
    } else {
      println("        Namespace: " + x + " ("
          + vmm.namespaceHash.get(x).size() + " entries)");
    }
    // Print each macro name padded to 30 characters, followed by the
    // identity hash of the name String (its likely heap address) in hex.
    for (String y : vmm.namespaceHash.get(x).keySet()) {
      print("            #" + y);
      for (int i = y.length(); i < 30; i++) {
        print(" ");
      }
      println("java.lang.String@" + Integer.toString(System.identityHashCode(y), 16));
    }
    println("\n\n");
  }
}
{{/groovy}}


Not much to see on my local installation, except that all syntax 2.0
Velocity macros are in the same namespace. It might be more
interesting to run on a large system.
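As a rough sketch of the duplicate check the script's output enables (stand-alone Java, with hypothetical sample lines rather than a real dump): group the printed identity hashes and flag any that occur more than once.

```java
import java.util.HashMap;
import java.util.Map;

public class DuplicateMacroCheck {
    public static void main(String[] args) {
        // Sample lines in the format the script prints (hypothetical data).
        String[] lines = {
            "#mymacro                      java.lang.String@1a2b3c",
            "#othermacro                   java.lang.String@4d5e6f",
            "#mymacro                      java.lang.String@1a2b3c", // same interned name seen twice
        };
        // Count how many times each identity hash appears.
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (String line : lines) {
            String addr = line.substring(line.indexOf("java.lang.String@"));
            Integer n = counts.get(addr);
            counts.put(addr, n == null ? 1 : n + 1);
        }
        // A hash seen more than once means one macro name is registered
        // in more than one namespace.
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (e.getValue() > 1) {
                System.out.println("duplicate: " + e.getKey() + " x" + e.getValue());
            }
        }
    }
}
```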


Caleb


Ludovic Dubost wrote:
> 
> Hi developers,
> 
> It would be great to see some developers be interested in this thread.
> We need to better understand memory usage by XWiki in order to achieve
> higher throughput with controlled memory usage.
> 
> I've found an additional interesting tool to use, the Eclipse Memory
> Analyzer, which works with a dump retrieved using the command "jmap
> -heap:format=b <processid>"
> (This is practical because we can get such a dump on any running VM, and
> we can even configure the VM to give such a dump when hitting OutOfMemory)
> 
> It gives some interesting results. I retrieved a dump from myxwiki.org
> and analyzed it a bit
> 
> http://www.zapnews.tv/xwiki/bin/download/Admin/MemoryUsage/myxwikiorgmem.png
> 
> 
> As the following image shows, we have a significant amount of memory
> in the velocity package in a structure meant to store all velocity macros.
> It's 170Mb which represents 37% of the heap and which is more than the
> document size.
> 
> I suspect that if we can reach this amount, we can grow further and
> reach OutOfMemory from this module alone.
> There is a chance that it is linked to MultiWiki usage where macros are
> kept in a different context for each wiki, but it could be also
> something growing regularly every time a macro is found in a page.
> Even if it only grows with the number of wikis, it is still
> potentially a scalability issue. I already analyzed memory a long time ago and did not
> see Velocity as storing a lot of information. This could be linked to
> the new implementation in component mode.
> 
> Velocity+JBoss cache seem to hold at least 70% of the consumed heap.
> This is clearly the area to focus on and verify that we can keep it in
> control.
> 
> Ludovic
> 
> On 07/05/10 16:50, Ludovic Dubost wrote:
>> Hi developers,
>>
>> A while ago I was looking for some ways to track how much memory is
>> used by our internal cache and was not able to find anything.
>> I've tried it again and this time I found the following code:
>>
>> http://www.javamex.com/classmexer/
>>
>> This requires a simple instrumentation to work, but I was able to get
>> some results out of it to measure the size of our documents in cache.
>>
>> You can see the result on a personal server:
>>
>> Measuring one page:
>> http://www.zapnews.tv/xwiki/bin/view/Admin/MemoryUsage
>>
>> Measuring all pages in cache:
>> http://www.zapnews.tv/xwiki/bin/view/Admin/MemoryUsage?page=all
>>
>> The first result I can see is that, unsurprisingly, the items
>> taking the most memory are:
>>
>> - attachment content
>> - attachment archive
>> - archive
>>
>> What I was able to see is that, as expected, these fields don't
>> consume memory until we ask for the data.
>> And after a while, the memory is indeed discarded for these fields, so
>> the use of SoftReferences for them seems to work.
>>
>> Now what I can see is that the attachment archive can be very, very
>> costly in memory.
>> Also it does not seem clear how the memory from these fields is
>> garbage collected (a GC did not recover it).
>>
>> With some experience of massive attachment loading that led to
>> OutOfMemory errors on the server, I do suspect that the SoftReferences
>> are not necessarily discarded fast enough to avoid the OutOfMemory. I
>> also believe that a search engine walking all our pages, including
>> our archive pages, can generate significant memory usage that could
>> lead to problems. But this is only an intuition that needs to be
>> proven.
>>
>> I believe we need to run some stress testing to see if the cache
>> and memory usage behave properly and if the cache(s) can ever push
>> memory usage over the limit.
>>
>> We also should try the classmexer on servers that are heavily used
>> and be able to look at the memory usage and see if we are
>> "controlling" it.
>> I'm not 100% sure how intrusive the "instrumentation" module is but I
>> believe it's quite light.
>>
>> We could try it on xwiki.org or on myxwiki.org.
>>
>> WDYT ?
>>
>> Ludovic
>>
>>
>> _______________________________________________
>> devs mailing list
>> [email protected]
>> http://lists.xwiki.org/mailman/listinfo/devs
>>    
> 
