On Dec 1, 2011, at 12:42 PM, Roman Muntyanu wrote:

> Hello Vincent, 
> 
>  I hope that I understand the issue correctly: some elements that we put into 
> the cache are up to 27MB in size, and if, e.g., the cache size is configured to 
> 500 elements, this might eat up to 13GB of RAM, causing an OOM. The issue will 
> be resolved by reducing the size of elements from 27MB to a lot less (e.g. 500KB). 

Our document cache is controlled by # of docs not by size.

So even in XE 3.1, if you put a doc with, let's say, 2GB of content and you have 
less memory than that, you'll get an OOM… ;)
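To illustrate why a count-only limit can't prevent this, here is a minimal sketch of a count-bounded LRU cache (an assumption for illustration, not XWiki's actual cache code): it evicts by entry count, so a single huge entry still fits and can exhaust the heap.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A cache bounded by number of entries, not by memory: one oversized
// value (e.g. a 27MB XDOM) counts the same as a tiny one.
class CountBoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    CountBoundedCache(int maxEntries) {
        super(16, 0.75f, true); // access-order = true gives LRU behavior
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once we exceed the count cap,
        // regardless of how much memory each entry actually occupies.
        return size() > maxEntries;
    }
}
```

With a cap of 500 entries, 500 values of 27MB each are all "within limit" even though they need ~13GB of heap, which is exactly the failure mode described above.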

To get an XDOM (that's the object cached) of 27MB you need a document content 
of about 300K in text.

>  Even though the issue will be resolved by reducing the element sizes for now, 
> it might always come back because you cannot always control the size of 
> elements put in the cache. 
>  So the issue should also be addressed from the other side: that is, limit 
> the cache size not by the number of elements but by the amount of memory 
> allocated to it (which seems even more logical). Just like in EHCache since 
> one of the recent versions ( 
> http://ehcache.org/documentation/configuration/cache-size#list-of-cache-configuration-attributes
>  ).

Yes, we need to investigate this, but I really have no clue how the caches can 
do this in any performant manner. Guava also has some caches that do this. For 
example, from the ConcurrentLinkedHashMap project:

http://code.google.com/p/concurrentlinkedhashmap/wiki/ExampleUsage

import java.util.concurrent.ConcurrentMap;
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
import com.googlecode.concurrentlinkedhashmap.Weigher;
import org.github.jamm.MemoryMeter; // Jamm: needs -javaagent:jamm.jar on the JVM

// Weigh each entry by its measured memory footprint instead of counting it as 1.
Weigher<V> memoryUsageWeigher = new Weigher<V>() {
  final MemoryMeter meter = new MemoryMeter();

  @Override public int weightOf(V value) {
    long bytes = meter.measure(value);
    // Weights are ints, so clamp very large objects.
    return (int) Math.min(bytes, Integer.MAX_VALUE);
  }
};
// Bound the cache by total weighed bytes rather than by entry count.
ConcurrentMap<K, V> cache = new ConcurrentLinkedHashMap.Builder<K, V>()
    .maximumWeightedCapacity(1024 * 1024) // 1 MB
    .weigher(memoryUsageWeigher)
    .build();

We need to check the performance.

Thanks
-Vincent

>  Regards,
> Roman
> 
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of 
> Vincent Massol
> Sent: Thursday, December 01, 2011 1:01 PM
> To: XWiki Users; XWiki Developers
> Subject: [xwiki-devs] [ANN] Memory issue with XWiki Enterprise 3.2 and 
> 3.3M1/3.3M2
> 
> Hi everyone,
> 
> We've just found a memory issue in XWiki Enterprise 3.2 and 3.3M1/3.3M2.
> 
> The technical reason is because we're caching the rendering of pages in the 
> document cache and those objects are very big (up to 27MB for a very large 
> page). In XWiki Enterprise 3.2.1 and XWiki Enterprise 3.3 RC1+ (not released 
> yet) we've optimized the space taken in memory by those objects. 
> 
> As a consequence we recommend using a reduced document cache size with XWiki 
> Enterprise 3.2 and 3.3M1/3.3M2 and/or increasing the memory allocated to the 
> JVM.
> 
> If you haven't already upgraded to XWiki Enterprise 3.2 we recommend waiting 
> for 3.2.1 which should be released soon (or wait for XWiki Enterprise 3.3 
> which is planned for mid-December).
> 
> We apologize for this,
> -Vincent on behalf of the XWiki Dev Team
_______________________________________________
devs mailing list
[email protected]
http://lists.xwiki.org/mailman/listinfo/devs
