[
https://issues.apache.org/jira/browse/COMPRESS-116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12885237#action_12885237
]
Aleksey Shipilev commented on COMPRESS-116:
-------------------------------------------
Right, I see it now. The proposed workaround seems to be very dangerous.
Let me think about it.
> ZipArchiveInputStream fails to uncompress stream generated by InfoZip
> ---------------------------------------------------------------------
>
> Key: COMPRESS-116
> URL: https://issues.apache.org/jira/browse/COMPRESS-116
> Project: Commons Compress
> Issue Type: Bug
> Components: Archivers
> Affects Versions: 1.0
> Environment: Linux, Info-ZIP 3.0 (July 5th 2008), Compress 1.0
> Reporter: Aleksey Shipilev
> Fix For: 1.1
>
> Attachments: sample-ordinary.zip, sample-streaming.zip,
> TestInfoZip.java
>
>
> The current Compress implementation silently fails to uncompress a streaming
> zip produced by Info-ZIP.
> Should this behaviour prove to be compliant with the spec, some sort of
> Exception should be thrown instead of silently swallowing the error.
> Steps to reproduce:
> 1. Download sample-*.zip. These two files were generated with Info-ZIP.
> sample-ordinary.zip was generated with "zip -r sample-ordinary.zip temp/".
> sample-streaming.zip was generated with "zip -fd -r - temp/ >
> sample-streaming.zip". Note that the "-fd" flag forces data descriptors in
> the stream. This happens automatically when the output is a pipe rather than
> a seekable file, e.g. when invoked as "zip -r - temp/ | pv >
> sample-streaming.zip". For convenience, I forced descriptors via the flag.
> 2. Download, compile and run TestInfoZip.java, placing Compress on the
> classpath. (A minimal reader sketch follows the output below.)
> 3. Observe the following output:
> Reading sample-ordinary.zip
> name=temp/, size=0
> Read 0 bytes
> name=temp/test.100, size=100
> Read 100 bytes
> name=temp/test.10000, size=10000
> Read 10000 bytes
> name=temp/test.10000.zip, size=10170
> Read 10170 bytes
> name=temp/test.1M, size=1048576
> Read 1048576 bytes
> name=temp/test.10M, size=10485760
> Read 10485760 bytes
> Reading sample-streaming.zip
> name=temp/, size=0
> Read 0 bytes
> name=temp/test.100, size=-1
> Read 100 bytes
> name=temp/test.10000, size=-1
> Read 10000 bytes
> name=temp/test.10M, size=-1
> Read 10485760 bytes
> name=temp/test.10000.zip, size=-1
> Read 0 bytes
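>
> For reference, a minimal reader along the lines of the attached
> TestInfoZip.java (the actual attachment may differ in details; names here
> are illustrative) looks like this:
>
>     import java.io.BufferedInputStream;
>     import java.io.FileInputStream;
>     import java.io.IOException;
>     import java.io.InputStream;
>
>     import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
>     import org.apache.commons.compress.archivers.zip.ZipArchiveInputStream;
>
>     public class TestInfoZip {
>         public static void main(String[] args) throws IOException {
>             for (String file : new String[] {"sample-ordinary.zip",
>                                              "sample-streaming.zip"}) {
>                 System.out.println("Reading " + file);
>                 InputStream in =
>                         new BufferedInputStream(new FileInputStream(file));
>                 ZipArchiveInputStream zip = new ZipArchiveInputStream(in);
>                 try {
>                     ZipArchiveEntry entry;
>                     byte[] buf = new byte[8192];
>                     while ((entry = zip.getNextZipEntry()) != null) {
>                         // getSize() is -1 when the size is only recorded
>                         // in the trailing data descriptor
>                         System.out.println("name=" + entry.getName()
>                                 + ", size=" + entry.getSize());
>                         long total = 0;
>                         int n;
>                         while ((n = zip.read(buf)) != -1) {
>                             total += n;
>                         }
>                         System.out.println("Read " + total + " bytes");
>                     }
>                 } finally {
>                     zip.close();
>                 }
>             }
>         }
>     }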
> Note that sample-ordinary.zip is read OK, while the streaming version fails
> to read one of the entries.
> No exceptions are thrown; the implementation simply assumes the stream has
> ended.
> Another finding: this reproduces reliably whenever there is at least one
> STORED entry in the stream.
> Generating the Info-ZIP files with maximum compression (i.e. all entries are
> DEFLATED) works around the problem.
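>
> As a purely illustrative check on the reading side (not part of the attached
> test; the class and method names are hypothetical), the problematic
> combination is a STORED entry whose size is unknown when reading from a
> non-seekable stream:
>
>     import java.util.zip.ZipEntry;
>
>     import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
>
>     public class StreamingZipCheck {
>         /**
>          * A STORED entry whose size is only recorded in the trailing data
>          * descriptor cannot be read reliably from a non-seekable stream,
>          * because the reader cannot tell where the entry data ends without
>          * consulting the central directory.
>          */
>         public static boolean needsCentralDirectory(ZipArchiveEntry entry) {
>             return entry.getMethod() == ZipEntry.STORED
>                     && entry.getSize() == -1;
>         }
>     }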
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.