[ 
https://issues.apache.org/jira/browse/COMPRESS-542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17337996#comment-17337996
 ] 

Stefan Bodewig commented on COMPRESS-542:
-----------------------------------------

As [~akelday] points out in the GitHub PR, one problem is that {{SevenZFile}} 
tries to recover from a corrupted archive and thereby gives up an important 
CRC check. Of course an OOM could also happen on a legitimate archive that is 
simply very big. For the latter case we could reuse the memory limit of 
{{SevenZFileOptions}}, which we currently only apply when decoding streams.
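
For illustration, a minimal sketch of how a caller can already commit to a 
memory budget via {{SevenZFileOptions}} (assuming the existing 
{{withMaxMemoryLimitInKb}} builder method) - today that limit only guards the 
stream decoders, and the idea above would be to honour it for the metadata as 
well:

{code:java}
import java.io.File;
import java.io.IOException;

import org.apache.commons.compress.archivers.sevenz.SevenZArchiveEntry;
import org.apache.commons.compress.archivers.sevenz.SevenZFile;
import org.apache.commons.compress.archivers.sevenz.SevenZFileOptions;

public class MemoryLimitedListing {
    public static void main(String[] args) throws IOException {
        // Commit to roughly 64 MiB for decoding.
        final SevenZFileOptions options = SevenZFileOptions.builder()
                .withMaxMemoryLimitInKb(64 * 1024)
                .build();
        try (SevenZFile archive = new SevenZFile(new File(args[0]), options)) {
            SevenZArchiveEntry entry;
            while ((entry = archive.getNextEntry()) != null) {
                System.out.println(entry.getName());
            }
        }
    }
}
{code}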

One idea I'm currently pondering: if either a limit has been set or we know 
the archive is corrupt, we parse the metadata in two passes. The first pass 
would only collect enough information to estimate the amount of memory we'd 
need to allocate and at the same time sanity-check values - array sizes must 
be positive, positions of other "places" must lie inside the bounds of the 
archive, and so on.
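
A rough, hypothetical sketch of the kind of first-pass checks meant here - the 
class and the guessed per-entry cost are illustrative only, not existing 
Commons Compress code:

{code:java}
import java.io.IOException;

// Illustrative first-pass validation: read counts and offsets from the
// header, validate them against hard bounds, and only then allocate the
// real data structures.
final class HeaderSanityCheck {

    private final long archiveSize; // total size of the underlying file
    private final int maxMemoryKb;  // limit taken from SevenZFileOptions

    HeaderSanityCheck(final long archiveSize, final int maxMemoryKb) {
        this.archiveSize = archiveSize;
        this.maxMemoryKb = maxMemoryKb;
    }

    /** Array sizes read from the header must be positive and plausible. */
    void checkEntryCount(final long numberOfEntries) throws IOException {
        if (numberOfEntries < 0) {
            throw new IOException("Broken archive, negative entry count");
        }
        // Rough guess: each entry costs at least ~100 bytes once materialized.
        final long estimatedKb = numberOfEntries * 100L / 1024;
        if (estimatedKb > maxMemoryKb) {
            throw new IOException("Archive claims " + numberOfEntries
                + " entries, which would exceed the configured memory limit");
        }
    }

    /** Positions of other header "places" must lie inside the archive. */
    void checkPosition(final long offset) throws IOException {
        if (offset < 0 || offset > archiveSize) {
            throw new IOException("Broken archive, position " + offset
                + " is outside the file");
        }
    }
}
{code}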

This would probably slow down opening an archive in certain cases, but it 
would allow us to reject corrupted archives without running out of memory - 
and it would allow users to commit to a certain amount of memory up front. Of 
course we'd only be estimating the amount of memory we require, but that 
sounds better than providing no control at all.

> Corrupt 7z allocates huge amount of SevenZEntries
> -------------------------------------------------
>
>                 Key: COMPRESS-542
>                 URL: https://issues.apache.org/jira/browse/COMPRESS-542
>             Project: Commons Compress
>          Issue Type: Bug
>    Affects Versions: 1.20
>            Reporter: Robin Schimpf
>            Priority: Major
>         Attachments: 
> Reduced_memory_allocation_for_corrupted_7z_archives.patch, 
> endheadercorrupted.7z, endheadercorrupted2.7z
>
>          Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> We ran into a problem where a 1.43GB corrupt 7z file tried to allocate about 
> 138 million SevenZArchiveEntries, which would use about 12GB of memory. Sadly 
> I'm unable to share the file. If you have enough memory available, the 
> following exception is thrown.
> {code:java}
> java.io.IOException: Start header corrupt and unable to guess end Header
>       at 
> org.apache.commons.compress.archivers.sevenz.SevenZFile.tryToLocateEndHeader(SevenZFile.java:511)
>       at 
> org.apache.commons.compress.archivers.sevenz.SevenZFile.readHeaders(SevenZFile.java:470)
>       at 
> org.apache.commons.compress.archivers.sevenz.SevenZFile.<init>(SevenZFile.java:336)
>       at 
> org.apache.commons.compress.archivers.sevenz.SevenZFile.<init>(SevenZFile.java:128)
>       at 
> org.apache.commons.compress.archivers.sevenz.SevenZFile.<init>(SevenZFile.java:369)
> {code}
> 7z itself aborts really quickly when I try to list the contents of the file.
> {code:java}
> 7z l "corrupt.7z"
> 7-Zip 18.01 (x64) : Copyright (c) 1999-2018 Igor Pavlov : 2018-01-28
> Scanning the drive for archives:
> 1 file, 1537752212 bytes (1467 MiB)
> Listing archive: corrupt.7z
> ERROR: corrupt.7z : corrupt.7z
> Open ERROR: Can not open the file as [7z] archive
> ERRORS:
> Is not archive
> Errors: 1
> {code}
> I hacked together the attached patch, which reduces the memory allocation to 
> about 1GB, so lazy instantiation of the entries could be a good solution to 
> the problem. Optimal would be to create the entries only once the headers 
> have been parsed correctly.
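
As a rough, hypothetical illustration of the lazy-instantiation idea suggested 
in the description (this is not the attached patch), entries could be 
materialised only when first requested instead of being allocated up front:

{code:java}
import org.apache.commons.compress.archivers.sevenz.SevenZArchiveEntry;

// Hypothetical sketch: keep only the entry count parsed from the header and
// build each SevenZArchiveEntry on demand. A corrupt count then costs a
// reference array at most, not millions of fully populated entry objects.
final class LazyEntryList {

    private final SevenZArchiveEntry[] entries; // filled in on demand

    LazyEntryList(final int numberOfEntries) {
        this.entries = new SevenZArchiveEntry[numberOfEntries];
    }

    SevenZArchiveEntry get(final int index) {
        if (entries[index] == null) {
            // Only pay for an entry once somebody actually asks for it.
            entries[index] = new SevenZArchiveEntry();
        }
        return entries[index];
    }
}
{code}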



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
