[ https://issues.apache.org/jira/browse/TIKA-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16503280#comment-16503280 ]

Tim Allison edited comment on TIKA-2446 at 6/6/18 2:01 PM:
-----------------------------------------------------------

We can prevent an OOM in our code by changing how we open the package:
{noformat}
ZipEntrySource zipEntrySource =
        new ZipFileZipEntrySource(new java.util.zip.ZipFile(stream.getFile()));
OPCPackage pkg = OPCPackage.open(zipEntrySource);{noformat}
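
(In that snippet I'm assuming {{stream}} is a TikaInputStream, so {{getFile()}} hands a real file to java.util.zip.ZipFile, spooling the content to a temp file if it isn't file-backed already; the {{zipEntrySource}} also needs to be closed once we're done with the package.)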

Java's ZipFile throws an IOException during detection, which is swallowed, and 
then commons-compress throws an EOFException when the parser tries to parse the 
file.

I agree that we should try to fix this at the POI level, though.  In 3.17 at 
least there's a check that the length of the stream is < Integer.MAX_VALUE, 
which, as noted above, is 2 GB. Although I've done it before, it feels hacky to 
limit this to, say, 1 GB.  I wonder if POI should write out to a new temp file 
rather than trying to buffer the defective ZIP into memory?  Or maybe not 
pre-allocate the byte[], given that, as pointed out above, by this point we 
know we're dealing with a corrupted file?
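
To make that second idea concrete, here's a rough sketch of what a temp-file-backed, non-preallocating replacement for FakeZipEntry could look like. This is only an illustration of the approach, not POI's actual API; the class name and the plain copy loop are made up:
{code:java}
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.ZipEntry;

// Hypothetical alternative to FakeZipEntry: instead of sizing a
// ByteArrayOutputStream from the (attacker-controllable) declared entry size,
// spool whatever bytes are actually in the stream to a temp file.
public class TempFileZipEntry extends ZipEntry {
    private final File contents;

    public TempFileZipEntry(ZipEntry entry, InputStream inp) throws IOException {
        super(entry.getName());
        contents = File.createTempFile("poi-zip-entry", ".tmp");
        try (OutputStream out = new FileOutputStream(contents)) {
            byte[] buffer = new byte[8192];
            int read;
            // Copy the bytes that really exist; never trust entry.getSize().
            while ((read = inp.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }

    public InputStream getInputStream() throws IOException {
        return new FileInputStream(contents);
    }

    public void close() {
        contents.delete();
    }
}
{code}
The temp file would still have to be cleaned up when the entry source is closed, but the memory cost would no longer depend on a size field the attacker controls.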


> Tainted Zip file can provoke OOM errors
> ---------------------------------------
>
>                 Key: TIKA-2446
>                 URL: https://issues.apache.org/jira/browse/TIKA-2446
>             Project: Tika
>          Issue Type: Bug
>    Affects Versions: 1.16
>            Reporter: Thorsten Schäfer
>            Priority: Major
>         Attachments: corrupt_zip.zip
>
>
> Hi,
> using Tika 1.16 with embedded POI 3.17-beta1, we experienced an OutOfMemory 
> error on a Zip file. The suspicious code is in the constructor of 
> FakeZipEntry, in line 125. Here a ByteArrayOutputStream of up to 2 GiB in size 
> is opened, which will most probably lead to an OutOfMemory error. The entry 
> size in the zip file can easily be faked by an attacker.
> The code path to FakeZipEntry will be used only if the native 
> java.util.zip.ZipFile implementation has already failed to open the (possibly 
> corrupted) Zip. Possibly a more fine-grained error analysis could be done in 
> ZipPackage.
> I have attached a tweaked zip file that will provoke this error.
> {code:java}
> public FakeZipEntry(ZipEntry entry, InputStream inp) throws IOException {
>     super(entry.getName());
>
>     // Grab the de-compressed contents for later
>     ByteArrayOutputStream baos;
>     long entrySize = entry.getSize();
>     if (entrySize != -1) {
>         if (entrySize >= Integer.MAX_VALUE) {
>             throw new IOException("ZIP entry size is too large");
>         }
>         baos = new ByteArrayOutputStream((int) entrySize);
>     } else {
>         baos = new ByteArrayOutputStream();
>     }
> {code}
> Kind regards,
> Thorsten



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
