[ https://issues.apache.org/jira/browse/COMPRESS-494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16999721#comment-16999721 ]

Peter Alfred Lee commented on COMPRESS-494:
-------------------------------------------

I looked into the code in commons-compress 1.9 and found the following:
{code:java}
private void readDataDescriptor() throws IOException {
    readFully(WORD_BUF);
    ZipLong val = new ZipLong(WORD_BUF);
    if (ZipLong.DD_SIG.equals(val)) {
        // data descriptor with signature, skip sig
        readFully(WORD_BUF);
        val = new ZipLong(WORD_BUF);
    }
    current.entry.setCrc(val.getValue());

    // if there is a ZIP64 extra field, sizes are eight bytes
    // each, otherwise four bytes each.  Unfortunately some
    // implementations - namely Java7 - use eight bytes without
    // using a ZIP64 extra field -
    // http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7073588

    // just read 16 bytes and check whether bytes nine to twelve
    // look like one of the signatures of what could follow a data
    // descriptor (ignoring archive decryption headers for now).
    // If so, push back eight bytes and assume sizes are four
    // bytes, otherwise sizes are eight bytes each.
    readFully(TWO_DWORD_BUF);
    ZipLong potentialSig = new ZipLong(TWO_DWORD_BUF, DWORD);
    if (potentialSig.equals(ZipLong.CFH_SIG) || potentialSig.equals(ZipLong.LFH_SIG)) {
        pushback(TWO_DWORD_BUF, DWORD, DWORD);
        current.entry.setCompressedSize(ZipLong.getValue(TWO_DWORD_BUF));
        current.entry.setSize(ZipLong.getValue(TWO_DWORD_BUF, WORD));
    } else {
        current.entry.setCompressedSize(ZipEightByteInteger.getLongValue(TWO_DWORD_BUF));
        current.entry.setSize(ZipEightByteInteger.getLongValue(TWO_DWORD_BUF, DWORD)); // line 702
    }
}
{code}
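
To make the heuristic above easier to follow, here is a minimal sketch of the two data descriptor layouts it has to tell apart (class name and values are invented for illustration, not taken from the reporter's archive):
{code:java}
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class DataDescriptorLayouts {
    public static void main(String[] args) {
        // Case 1: 4-byte sizes. Of the 16 bytes read into TWO_DWORD_BUF,
        // bytes 0-3 hold the compressed size, bytes 4-7 the uncompressed
        // size, and bytes 8-11 already belong to the next record, so they
        // contain its signature (LFH 0x04034b50 or CFH 0x02014b50).
        // ZIP data is little-endian.
        ByteBuffer fourByteSizes = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN);
        fourByteSizes.putInt(0, 1234);       // compressed size
        fourByteSizes.putInt(4, 5678);       // uncompressed size
        fourByteSizes.putInt(8, 0x04034b50); // next entry's LFH signature

        // Case 2: 8-byte sizes (ZIP64, or the Java 7 bug mentioned above).
        // The same 16 bytes are entirely sizes; bytes 8-11 now hold the
        // low half of the uncompressed size instead of a signature.
        ByteBuffer eightByteSizes = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN);
        eightByteSizes.putLong(0, 1234L);    // compressed size
        eightByteSizes.putLong(8, 5678L);    // uncompressed size

        // The potentialSig check inspects bytes 8-11 and only pushes back
        // 8 bytes when it sees one of the two signatures; anything else,
        // including a corrupt descriptor, sends it down the 8-byte branch.
    }
}
{code}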
It seems neither the Local File Header signature nor the Central File Header 
signature was detected, so the code fell through to the workaround logic for the 
already-known Java 7 bug, and the uncompressed size came out corrupted. From what 
I can see, either the data descriptor itself is corrupted or there is some other 
problem. In any case, we cannot dig deeper unless the zip archive is provided.
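
To show concretely how taking the wrong branch produces this failure: when the 8-byte branch is taken on a descriptor that really used 4-byte sizes, size bytes and following-record bytes get combined into a single long, which can easily come out negative, and ZipArchiveEntry.setSize rejects negative values with exactly this "invalid entry size" message. A sketch with made-up bytes:
{code:java}
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class MisreadSize {
    public static void main(String[] args) {
        // Hypothetical 16 bytes after the CRC: a 4-byte-size descriptor
        // followed by stray bytes that do NOT form an LFH/CFH signature.
        ByteBuffer buf = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(0, 1234);        // real compressed size
        buf.putInt(4, 5678);        // real uncompressed size
        buf.putInt(8, 0x08074b50);  // stray bytes (invented for illustration)
        buf.putInt(12, 0xCAFEBABE);

        // What the 8-byte branch would compute for the uncompressed size:
        long bogusSize = buf.getLong(8);
        System.out.println(bogusSize); // a large negative number, so
        // current.entry.setSize(bogusSize) throws
        // IllegalArgumentException("invalid entry size"), matching the
        // stack trace below (ZipArchiveEntry.java:550).
    }
}
{code}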

> ZipArchiveInputStream component is throwing "Invalid Entry Size"
> ----------------------------------------------------------------
>
>                 Key: COMPRESS-494
>                 URL: https://issues.apache.org/jira/browse/COMPRESS-494
>             Project: Commons Compress
>          Issue Type: Bug
>    Affects Versions: 1.8, 1.18
>            Reporter: Anvesh Mora
>            Priority: Critical
>
> I've observed during development that certain zip files which we are able to 
> extract with the unzip utility on Linux are failing with the Compress library.
>  
> For now I have a stack trace to share; I'll add more here as the discussion 
> progresses:
>  
> {code:java}
> Caused by: java.lang.IllegalArgumentException: invalid entry size
>         at org.apache.commons.compress.archivers.zip.ZipArchiveEntry.setSize(ZipArchiveEntry.java:550)
>         at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.readDataDescriptor(ZipArchiveInputStream.java:702)
>         at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.bufferContainsSignature(ZipArchiveInputStream.java:805)
>         at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.readStoredEntry(ZipArchiveInputStream.java:758)
>         at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.readStored(ZipArchiveInputStream.java:407)
>         at org.apache.commons.compress.archivers.zip.ZipArchiveInputStream.read(ZipArchiveInputStream.java:382)
> {code}
> I missed adding the version info earlier, so here it is:
> The version of the library I'm using is 1.9.
> I also tried version 1.18, and the issue is observed in that version too.
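>
> For context, this is roughly how we read these archives (a sketch only; the 
> class and file names are placeholders):
> {code:java}
> import java.io.BufferedInputStream;
> import java.io.FileInputStream;
> import java.io.IOException;
> import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
> import org.apache.commons.compress.archivers.zip.ZipArchiveInputStream;
>
> public class ReadZip {
>     public static void main(String[] args) throws IOException {
>         try (ZipArchiveInputStream zin = new ZipArchiveInputStream(
>                 new BufferedInputStream(new FileInputStream("sample.zip")))) {
>             ZipArchiveEntry entry;
>             byte[] buf = new byte[8192];
>             while ((entry = zin.getNextZipEntry()) != null) {
>                 // the exception surfaces from read() while draining a
>                 // STORED entry that uses a data descriptor
>                 int n;
>                 while ((n = zin.read(buf)) != -1) {
>                     // consume entry data
>                 }
>             }
>         }
>     }
> }
> {code}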


