[ 
http://jira.magnolia-cms.com/browse/MAGNOLIA-2677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philipp Bärfuss updated MAGNOLIA-2677:
--------------------------------------

    Fix Version/s: 4.4.x

Going to use a threshold caching stream that will serve bigger files directly 
from the repository
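For illustration, a minimal sketch of what such a threshold caching stream could look like (class and method names here are hypothetical, not Magnolia's actual API): response bytes are buffered for the cache only up to a fixed threshold while always being forwarded to the client; once the threshold is exceeded, the buffer is dropped and the entry is flagged so the file would later be streamed straight from the repository.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch, not Magnolia's actual class: buffers the response
// for caching only up to a fixed threshold, while always forwarding the
// bytes to the wrapped (client) stream.
class ThresholdCachingOutputStream extends OutputStream {

    private final OutputStream delegate;  // the real response stream
    private final int threshold;          // max bytes worth caching in memory
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private boolean thresholdExceeded = false;

    ThresholdCachingOutputStream(OutputStream delegate, int threshold) {
        this.delegate = delegate;
        this.threshold = threshold;
    }

    @Override
    public void write(int b) throws IOException {
        delegate.write(b); // the client always receives the data
        if (thresholdExceeded) {
            return;
        }
        if (buffer.size() >= threshold) {
            // Too big to cache: release the partial buffer and remember
            // that this entry must be served from the repository instead.
            thresholdExceeded = true;
            buffer.reset();
        } else {
            buffer.write(b);
        }
    }

    boolean isThresholdExceeded() {
        return thresholdExceeded;
    }

    byte[] getBufferedContent() {
        return buffer.toByteArray();
    }
}
```

In real code such a stream would wrap the servlet response; on completion the cache filter would either store the buffered content or, if the threshold was exceeded, record that the resource has to be streamed from the repository on later hits.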

> Caching big content may cause OutOfMemoryError
> ----------------------------------------------
>
>                 Key: MAGNOLIA-2677
>                 URL: http://jira.magnolia-cms.com/browse/MAGNOLIA-2677
>             Project: Magnolia
>          Issue Type: Improvement
>          Components: cache
>    Affects Versions: 4.0.1
>         Environment: Magnolia 3.6.3 CE
> Environment1: Sun JDK 1.6.0_11, 32bit, RHEL 5.3Beta
> Environment2: Sun JDK 1.6.0_03,32bit, Windows 2000
>            Reporter: Henryk Paluch
>            Assignee: Philipp Bärfuss
>             Fix For: 4.4.x
>
>
> Serving big binary node data (>100MB) may cause an OutOfMemoryError when the 
> server doesn't have enough memory assigned. This is due to the nature of 
> caching in EhCache: cached objects are required to be Java objects, which 
> are then serialized and kept in memory or on the file system by the cache 
> itself. 
> One possible solution would be to simply store such documents in the file 
> system and keep a File object as part of the cachedPage; however, this 
> would introduce the need for an extra file store (probably in the same 
> location as the cache itself). 
> Since the overhead of accessing the repository and serving big data streams 
> directly is small compared to the total time it takes to stream the document 
> to the client, the simplest solution is to alter the cache policy so that 
> such documents are not cached. This has been dealt with for DMS documents in 
> MGNLDMS-159. For content in the website workspace, it is not recommended to 
> store big binary data directly; use the DMS instead.
> NOTE: this error occurs only on public instances that have caching enabled 
> and a small heap size (relative to the size of the data served)!
> How to reproduce:
> 1) Start a PUBLIC Magnolia instance with a heap smaller than the DMS 
> repository size, for example -Xmx256m
> 2) Upload a few large files to the PUBLIC instance (for example, three 
> 100MB PDF files)
> 3) Launch a new anonymous browser session (to ensure that the cache is used)
> 4) Download (without interrupting) the three large 100MB files from the 
> public instance
> 5) Usually the 2nd download will cause
> java.lang.OutOfMemoryError: Java heap space
>       at java.util.Arrays.copyOf(Arrays.java:2786)
>       at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:71)
>       at 
> info.magnolia.module.cache.filter.SimpleServletOutputStream.write(SimpleServletOutputStream.java:53)
> (Full stack trace to be attached later)
> Workaround: add a voter to the cache policy to avoid caching big data, or 
> disable caching
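The workaround's idea can be sketched as a simple size check (a standalone illustration with assumed names; Magnolia's real voters implement its voting API and are configured in the cache policy, which is not reproduced here):

```java
// Hypothetical sketch of the workaround: a "voter" that vetoes caching
// of responses larger than a configured limit, so oversized content is
// streamed to the client without being held in the cache.
class BigFileCacheVoter {

    private final long maxCacheableBytes; // size limit above which caching is refused

    BigFileCacheVoter(long maxCacheableBytes) {
        this.maxCacheableBytes = maxCacheableBytes;
    }

    // Returns true if the response may be cached, false to bypass the cache.
    boolean allowCaching(long contentLengthBytes) {
        return contentLengthBytes <= maxCacheableBytes;
    }
}
```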

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://jira.magnolia-cms.com/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




----------------------------------------------------------------
For list details see
http://www.magnolia-cms.com/home/community/mailing-lists.html
To unsubscribe, E-mail to: <[email protected]>
----------------------------------------------------------------
