[ https://issues.apache.org/jira/browse/TIKA-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jukka Zitting resolved TIKA-216.
--------------------------------

       Resolution: Fixed
    Fix Version/s: 0.4
         Assignee: Jukka Zitting

There is now a SecureContentHandler decorator class that implements a simple 
zip bomb prevention heuristic. By default the class throws an exception for any 
input document that produces over 100 output characters per input byte, a 
compression ratio that no normal document is expected to reach. The detection 
is only activated after a default threshold of 1M output characters, which 
avoids false positives for otherwise normal documents that happen to start 
with a sequence of highly compressible data.
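
Roughly, the check amounts to something like the following (an illustrative 
sketch only, not the actual SecureContentHandler source; the class and member 
names here are made up):

    // Illustrative sketch of the ratio heuristic described above.
    class RatioCheck {
        private static final long OUTPUT_THRESHOLD = 1000000; // 1M output characters
        private static final long MAX_RATIO = 100;            // output chars per input byte

        private long characterCount = 0;

        // charsWritten: characters just produced; bytesRead: input bytes consumed so far
        void advance(int charsWritten, long bytesRead) {
            characterCount += charsWritten;
            if (characterCount > OUTPUT_THRESHOLD
                    && characterCount > MAX_RATIO * bytesRead) {
                throw new IllegalStateException(
                        "Suspected zip bomb: over " + MAX_RATIO
                        + " output characters per input byte");
            }
        }
    }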

The SecureContentHandler decorator class is used together with the 
CountingInputStream from Commons IO. See below for sample usage:

    CountingInputStream count = new CountingInputStream(stream);
    SecureContentHandler secure = new SecureContentHandler(handler, count);
    try {
        parser.parse(count, secure, metadata);
    } catch (SAXException e) {
        secure.throwIfCauseOf(e);
        throw e;
    }
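
The catch-and-rethrow pattern is needed because SAX content handlers can only 
throw SAXExceptions: SecureContentHandler signals a detected zip bomb through a 
SAXException, and throwIfCauseOf() converts that into a TikaException when the 
exception originated from the handler itself, while unrelated SAX errors are 
rethrown as-is.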

I added this to AutoDetectParser, so all clients that use AutoDetectParser, or 
tools built on top of it, are automatically protected against simple zip bombs.
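
For example, a client that goes through AutoDetectParser needs no extra setup 
to get this protection. A minimal sketch, assuming the usual 
BodyContentHandler/Metadata call pattern (the class and method names below are 
just one common way to invoke the parser):

    // Plain AutoDetectParser client; the zip bomb check is applied internally,
    // so no SecureContentHandler wiring is needed here.
    import java.io.InputStream;

    import org.apache.tika.metadata.Metadata;
    import org.apache.tika.parser.AutoDetectParser;
    import org.apache.tika.sax.BodyContentHandler;

    public class SafeTextExtractor {
        public static String extractText(InputStream stream) throws Exception {
            BodyContentHandler handler = new BodyContentHandler();
            new AutoDetectParser().parse(stream, handler, new Metadata());
            return handler.toString();
        }
    }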



> Zip bomb prevention
> -------------------
>
>                 Key: TIKA-216
>                 URL: https://issues.apache.org/jira/browse/TIKA-216
>             Project: Tika
>          Issue Type: New Feature
>          Components: parser
>            Reporter: Jukka Zitting
>            Assignee: Jukka Zitting
>             Fix For: 0.4
>
>
> It would be good to have a mechanism that automatically detects a "zip bomb", 
> i.e. a compressed document that expands to excessive amounts of extracted 
> text. The classic example is the 42.zip file that's just 42kB in size, but 
> expands to about 4 *petabytes* when all layers are fully uncompressed.
> A simple preventive measure could be a Parser decorator that counts the 
> number of input bytes and the output characters, and fails with a 
> TikaException when the ratio exceeds some configurable limit.
> As another preventive measure, the decorator could also keep track of the 
> time (and perhaps even memory, if possible) it takes to process the input 
> document. A TikaException would be thrown if processing time exceeds some 
> configurable limit.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
