[jira] [Commented] (COMPRESS-569) OutOfMemoryError on a crafted tar file

2021-03-07 Thread Stefan Bodewig (Jira)


[ https://issues.apache.org/jira/browse/COMPRESS-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296819#comment-17296819 ]

Stefan Bodewig commented on COMPRESS-569:
-

Thank you. I've added the test in commit 5aa752ab.

> OutOfMemoryError on a crafted tar file
> --
>
> Key: COMPRESS-569
> URL: https://issues.apache.org/jira/browse/COMPRESS-569
> Project: Commons Compress
>  Issue Type: Bug
>Affects Versions: 1.21
>Reporter: Fabian Meumertzheim
>Priority: Blocker
> Attachments: TarFileTimeout.java, timeout.tar
>
>
> Apache Commons Compress at commit
> https://github.com/apache/commons-compress/commit/1b7528fbd6295a3958daf1b1114621ee5e40e83c
>  throws an OutOfMemoryError after consuming ~5 minutes of CPU on my
> machine on a crafted tar archive that is less than a KiB in size:
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.base/sun.nio.cs.UTF_8.newDecoder(UTF_8.java:70)
> at org.apache.commons.compress.archivers.zip.NioZipEncoding.newDecoder(NioZipEncoding.java:182)
> at org.apache.commons.compress.archivers.zip.NioZipEncoding.decode(NioZipEncoding.java:135)
> at org.apache.commons.compress.archivers.tar.TarUtils.parseName(TarUtils.java:311)
> at org.apache.commons.compress.archivers.tar.TarUtils.parseName(TarUtils.java:275)
> at org.apache.commons.compress.archivers.tar.TarArchiveEntry.parseTarHeader(TarArchiveEntry.java:1550)
> at org.apache.commons.compress.archivers.tar.TarArchiveEntry.<init>(TarArchiveEntry.java:554)
> at org.apache.commons.compress.archivers.tar.TarArchiveEntry.<init>(TarArchiveEntry.java:570)
> at org.apache.commons.compress.archivers.tar.TarFile.getNextTarEntry(TarFile.java:250)
> at org.apache.commons.compress.archivers.tar.TarFile.<init>(TarFile.java:211)
> at org.apache.commons.compress.archivers.tar.TarFile.<init>(TarFile.java:94)
> at TarFileTimeout.main(TarFileTimeout.java:22)
> I attached both the tar file and a Java reproducer for this issue.
> Citing Stefan Bodewig's analysis of this issue:
> Your archive contains an entry with a claimed size of -512 bytes. When
> TarFile reads entries, it tries to skip the content of the entry, as it is
> only interested in metadata on a first scan, and does so by positioning
> the input stream right after the data of the current entry. In this case
> it positions the stream 512 bytes backwards, right at the start of the
> current entry's metadata again. This leads to an infinite loop that reads
> a new entry, stores the metadata, repositions the stream, and starts over
> again. Over time the list of collected metadata eats up all available
> memory.
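The repositioning arithmetic described in the quoted analysis can be sketched with plain constants (no real tar parsing; the class name and constants are illustrative only):

```java
/**
 * Demonstrates the skip arithmetic from the analysis above: a claimed
 * entry size of -512 makes the parser land back on the same header.
 */
public class SkipLoopDemo {
    static final long HEADER_SIZE = 512; // a tar header block is 512 bytes

    public static void main(String[] args) {
        long position = 0;        // offset of the crafted entry's header
        long claimedSize = -512;  // size field parsed from the broken header

        // The parser reads the 512-byte header, then "skips" the entry's
        // data by seeking claimedSize bytes forward - here, 512 bytes back.
        // Bounded to 3 passes for the demo; the real loop never terminates.
        for (int i = 0; i < 3; i++) {
            long afterHeader = position + HEADER_SIZE;
            position = afterHeader + claimedSize;
            System.out.println("next header read at offset " + position);
        }
        // Every pass lands back at offset 0 and stores one more copy of
        // the entry's metadata, which eventually exhausts the heap.
    }
}
```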



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (COMPRESS-569) OutOfMemoryError on a crafted tar file

2021-03-06 Thread Fabian Meumertzheim (Jira)


[ https://issues.apache.org/jira/browse/COMPRESS-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296664#comment-17296664 ]

Fabian Meumertzheim commented on COMPRESS-569:
--

[~bodewig] Yes, feel free to use the archive and reproducer under any license 
you want. I will let the fuzzer run on the fixed version 
(https://github.com/apache/commons-compress/commit/8543b030e93fa71b6093ac7d4cdb8c4e98bfd63d)
 and report back if I should get any more findings.



[jira] [Commented] (COMPRESS-569) OutOfMemoryError on a crafted tar file

2021-03-06 Thread Stefan Bodewig (Jira)


[ https://issues.apache.org/jira/browse/COMPRESS-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296633#comment-17296633 ]

Stefan Bodewig commented on COMPRESS-569:
-

Your test now quickly throws an {{IOException}} as of commit 5c5f8a89.

Interestingly, for dump archives some of our valid test cases seem to contain 
negative sizes; I'll have to look into this more closely.



[jira] [Commented] (COMPRESS-569) OutOfMemoryError on a crafted tar file

2021-03-06 Thread Stefan Bodewig (Jira)


[ https://issues.apache.org/jira/browse/COMPRESS-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296614#comment-17296614 ]

Stefan Bodewig commented on COMPRESS-569:
-

[~Meumertzheim] can I add your tar archive as a test case to our source tree?



[jira] [Commented] (COMPRESS-569) OutOfMemoryError on a crafted tar file

2021-03-06 Thread Stefan Bodewig (Jira)


[ https://issues.apache.org/jira/browse/COMPRESS-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296604#comment-17296604 ]

Stefan Bodewig commented on COMPRESS-569:
-

I see two issues here:

* We should never accept a negative size for a {{TarArchiveEntry}} we read. 
Instead we should throw an exception signaling a broken archive. I'll take the 
opportunity to verify that we deal with negative sizes (and potentially other 
numeric values that are supposed to fall into a specific range) for the other 
archive types as well.
* We should check that we are not moving backwards in {{TarFile}} - this one can 
be fixed quickly.
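The two checks above could look roughly like this (a minimal sketch; the class and method names are hypothetical, not the actual Commons Compress API):

```java
import java.io.IOException;

/**
 * Illustrative sketch of the two sanity checks discussed above.
 * Names are hypothetical and not part of Commons Compress.
 */
public class TarSanityChecks {

    /**
     * Computes the stream position right after the current entry's data,
     * rejecting headers that would make the parser move backwards.
     */
    public static long nextEntryPosition(long dataStart, long entrySize)
            throws IOException {
        // Check 1: a negative size can only come from a broken header.
        if (entrySize < 0) {
            throw new IOException(
                "corrupt tar: negative entry size " + entrySize);
        }
        long next = dataStart + entrySize;
        // Check 2: never reposition before where we already were
        // (also guards against long overflow for absurdly large sizes).
        if (next < dataStart) {
            throw new IOException(
                "corrupt tar: entry size overflows stream position");
        }
        return next;
    }

    public static void main(String[] args) {
        try {
            // A well-formed entry: data starts at 512, claims 1024 bytes.
            System.out.println(nextEntryPosition(512, 1024)); // prints 1536
            // The crafted archive's entry: claimed size of -512 is rejected.
            nextEntryPosition(512, -512);
        } catch (IOException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```

With these guards the reader fails fast with an {{IOException}} on the crafted archive instead of looping and accumulating metadata until the heap is exhausted.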
