[ https://issues.apache.org/jira/browse/COMPRESS-343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15197992#comment-15197992 ]

ASF GitHub Bot commented on COMPRESS-343:
-----------------------------------------

Github user rpreissel commented on the pull request:

    https://github.com/apache/commons-compress/pull/11#issuecomment-197507339
  
    This is a memory issue and, from my point of view, there is no simple way
    to test it with a unit test.

    I tried to write a unit test and used
    ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage() to check the
    memory usage before and after creating an archive with many files, but the
    measurement showed no difference between the patched and unpatched
    versions.

    However, if I watch the process with system utilities, I can see that the
    version without the patch needs many GB of memory while the version with
    the patch needs only about 100MB.

    Any ideas?
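
As a rough illustration of the measurement attempt described above, the sketch
below reads the non-heap usage before and after creating the archive. The
helper createArchiveWithManyFiles() is a hypothetical placeholder for the
archive-writing code under test. One plausible explanation for the missing
difference is that a Deflater's working memory is allocated natively by zlib,
outside the memory pools that MemoryMXBean reports on, so it never shows up in
this reading even while the process footprint grows.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    public class NonHeapMemoryCheck {

        public static void main(String[] args) throws Exception {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

            long before = memory.getNonHeapMemoryUsage().getUsed();

            // Hypothetical placeholder for the code that writes an archive
            // containing a very large number of entries.
            createArchiveWithManyFiles();

            long after = memory.getNonHeapMemoryUsage().getUsed();

            // This delta can stay near zero even when the process footprint grows
            // by gigabytes: the Deflater's buffers are malloc'ed by zlib and are
            // not part of any JVM-tracked memory pool.
            System.out.println("non-heap delta: " + (after - before) + " bytes");
        }

        private static void createArchiveWithManyFiles() throws Exception {
            // archive-creation code under test goes here
        }
    }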



> Native Memory Leak in Sevenz-DeflateDecoder
> -------------------------------------------
>
>                 Key: COMPRESS-343
>                 URL: https://issues.apache.org/jira/browse/COMPRESS-343
>             Project: Commons Compress
>          Issue Type: Bug
>          Components: Archivers
>    Affects Versions: 1.10
>            Reporter: Rene Preissel
>         Attachments: COMPRESS-343.patch
>
>
> The class ...sevenz.Coders.DeflateDecoder does not close (i.e. call end() on)
> the Deflater and Inflater. This can lead to native memory issues: see
> https://bugs.openjdk.java.net/browse/JDK-8074108.
> In our case we create an archive with >100000 files. The Java heap stays at
> around 300MB (with a 2GB maximum), while the native memory grows to 8GB and
> beyond. Because there is no pressure on the Java heap, no GC is triggered;
> therefore the Deflaters are never collected and the native memory is never
> freed.
>  
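
For context, the general pattern behind such a fix is to release the codec's
native resources explicitly when the stream is closed, rather than waiting for
garbage collection. The attached COMPRESS-343.patch is not reproduced here; the
sketch below is only an illustration of that pattern, using a hypothetical
wrapper class, and the nowrap flag passed to the Inflater is an assumption.

    import java.io.FilterInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.zip.Inflater;
    import java.util.zip.InflaterInputStream;

    // Hypothetical wrapper illustrating the pattern: free the Inflater's native
    // (zlib) memory deterministically on close() instead of relying on GC.
    class InflaterClosingInputStream extends FilterInputStream {

        private final Inflater inflater;

        InflaterClosingInputStream(InputStream compressed) {
            this(compressed, new Inflater(true)); // raw deflate is an assumption
        }

        private InflaterClosingInputStream(InputStream compressed, Inflater inflater) {
            super(new InflaterInputStream(compressed, inflater));
            this.inflater = inflater;
        }

        @Override
        public void close() throws IOException {
            try {
                super.close();
            } finally {
                // end() releases the native buffers immediately; without this call
                // they are only freed once the Inflater object is collected.
                inflater.end();
            }
        }
    }

On the compressing side the same idea applies: call Deflater.end() in a finally
block once the output stream has been closed.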



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
