Hi,

can I please get a review for the following small fix:

http://cr.openjdk.java.net/~simonis/webrevs/2020/8240333/
https://bugs.openjdk.java.net/browse/JDK-8240333

The "jmod" tool needs to update .jar files as well as .jmod files when
adding hashes to the module-info file. This is handled in the method
jdk.tools.jmod.JmodTask.updateModularJar() for modular jar files and
in the methods jdk.tools.jmod.JmodTask.updateJmodFile() and
jdk.tools.jmod.JmodOutputStream.writeEntry() for .jmod files.

Unfortunately, both implementations are incorrect because they write the
JarEntry/ZipEntry entries which they read from the ZipInputStream
verbatim to the ZipOutputStream. This approach is not correct because a
ZipEntry/JarEntry records both the compressed and the uncompressed size
of the corresponding entry. But the original Deflater which initially
compressed that entry can differ from the current one, or it might have
used a different compression level than the current Deflater which
re-deflates the entry data.
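
To make the pattern easier to see, here is a simplified, self-contained
sketch of the problematic verbatim copy (illustrative only, not the
actual JmodTask code; the class and method names are made up):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class VerbatimCopy {
    // Copies every entry of 'in' to 'out', reusing the original ZipEntry
    // objects. Each entry still carries the compressed size produced by
    // the *original* Deflater, so ZipOutputStream.closeEntry() fails with
    // "invalid entry compressed size" whenever the current Deflater
    // produces a different number of bytes for the same data.
    static void copy(Path in, Path out) throws IOException {
        try (ZipFile zf = new ZipFile(in.toFile());
             ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(out))) {
            for (var entries = zf.entries(); entries.hasMoreElements(); ) {
                ZipEntry e = entries.nextElement();
                zos.putNextEntry(e);          // reuses the recorded compressedSize of 'e'
                try (InputStream is = zf.getInputStream(e)) {
                    is.transferTo(zos);       // data is re-deflated here
                }
                zos.closeEntry();             // may throw ZipException
            }
        }
    }
}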

When the newly created entry is closed, the new compressed size is
checked against the original compressed size, if one was recorded in the
ZipEntry/JarEntry, and an exception is thrown if they differ. For
modular jar files it looks as follows:

$ jmod hash --module-path . --hash-modules .*
Error: java.util.zip.ZipException: invalid entry compressed size
(expected 58 but got 57 bytes)
java.io.UncheckedIOException: java.util.zip.ZipException: invalid
entry compressed size (expected 58 but got 57 bytes)
at jdk.jlink/jdk.tools.jmod.JmodTask$Hasher.lambda$updateModularJar$3(JmodTask.java:995)
at java.base/java.util.zip.ZipFile$EntrySpliterator.tryAdvance(ZipFile.java:571)
at java.base/java.util.Spliterator.forEachRemaining(Spliterator.java:326)
at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
at jdk.jlink/jdk.tools.jmod.JmodTask$Hasher.updateModularJar(JmodTask.java:980)
at jdk.jlink/jdk.tools.jmod.JmodTask$Hasher.updateModuleInfo(JmodTask.java:957)
at jdk.jlink/jdk.tools.jmod.JmodTask.lambda$hashModules$3(JmodTask.java:300)
at java.base/java.util.HashMap.forEach(HashMap.java:1425)
at jdk.jlink/jdk.tools.jmod.JmodTask.hashModules(JmodTask.java:291)
at jdk.jlink/jdk.tools.jmod.JmodTask.run(JmodTask.java:220)
at jdk.jlink/jdk.tools.jmod.Main.main(Main.java:34)
Suppressed: java.util.zip.ZipException: invalid entry compressed size
(expected 58 but got 57 bytes)
at java.base/java.util.zip.ZipOutputStream.closeEntry(ZipOutputStream.java:267)
at java.base/java.util.zip.ZipOutputStream.finish(ZipOutputStream.java:360)
at java.base/java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:237)
at java.base/java.util.zip.ZipOutputStream.close(ZipOutputStream.java:377)
at jdk.jlink/jdk.tools.jmod.JmodTask$Hasher.updateModularJar(JmodTask.java:976)
... 6 more
Caused by: java.util.zip.ZipException: invalid entry compressed size
(expected 58 but got 57 bytes)
at java.base/java.util.zip.ZipOutputStream.closeEntry(ZipOutputStream.java:267)
at jdk.jlink/jdk.tools.jmod.JmodTask$Hasher.lambda$updateModularJar$3(JmodTask.java:992)
... 10 more

For jmod files it looks like this:

$ jmod hash --module-path . --hash-modules .*
Error: java.util.zip.ZipException: invalid entry compressed size
(expected 383 but got 377 bytes)
java.io.UncheckedIOException: java.util.zip.ZipException: invalid
entry compressed size (expected 383 but got 377 bytes)
at jdk.jlink/jdk.tools.jmod.JmodTask$Hasher.lambda$updateJmodFile$4(JmodTask.java:1021)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at java.base/java.util.zip.ZipFile$EntrySpliterator.tryAdvance(ZipFile.java:571)
at java.base/java.util.Spliterator.forEachRemaining(Spliterator.java:326)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
at jdk.jlink/jdk.tools.jmod.JmodTask$Hasher.updateJmodFile(JmodTask.java:1009)
at jdk.jlink/jdk.tools.jmod.JmodTask$Hasher.updateModuleInfo(JmodTask.java:955)
at jdk.jlink/jdk.tools.jmod.JmodTask.lambda$hashModules$3(JmodTask.java:300)
at java.base/java.util.HashMap.forEach(HashMap.java:1425)
at jdk.jlink/jdk.tools.jmod.JmodTask.hashModules(JmodTask.java:291)
at jdk.jlink/jdk.tools.jmod.JmodTask.run(JmodTask.java:220)
at jdk.jlink/jdk.tools.jmod.Main.main(Main.java:34)
Suppressed: java.util.zip.ZipException: invalid entry compressed size
(expected 383 but got 377 bytes)
at java.base/java.util.zip.ZipOutputStream.closeEntry(ZipOutputStream.java:267)
at java.base/java.util.zip.ZipOutputStream.finish(ZipOutputStream.java:360)
at java.base/java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:237)
at java.base/java.util.zip.ZipOutputStream.close(ZipOutputStream.java:377)
at jdk.jlink/jdk.tools.jmod.JmodOutputStream.close(JmodOutputStream.java:114)
at jdk.jlink/jdk.tools.jmod.JmodTask$Hasher.updateJmodFile(JmodTask.java:1006)
... 6 more
Caused by: java.util.zip.ZipException: invalid entry compressed size
(expected 383 but got 377 bytes)
at java.base/java.util.zip.ZipOutputStream.closeEntry(ZipOutputStream.java:267)
at jdk.jlink/jdk.tools.jmod.JmodOutputStream.writeEntry(JmodOutputStream.java:97)
at jdk.jlink/jdk.tools.jmod.JmodTask$Hasher.lambda$updateJmodFile$4(JmodTask.java:1018)
... 17 more

The errors can be provoked by creating the initial jar or jmod files
with a different zlib implementation, e.g.:

$ LD_LIBRARY_PATH=/ziptests/build/zlib-cloudflare jar -cf ../mb_cf.jar *

The fix is simple. Instead of copying the ZipEntry/JarEntry entries
verbatim to the output stream, we simply write newly created ZipEntry
entries which contain only the required attributes. This is also how the
jar tool does it when it updates jar files.
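
For illustration, this is roughly what writing a fresh entry looks like
(only a sketch, not the actual webrev; the helper class and method names
are made up, and the real jmod code writes through JmodOutputStream
rather than a bare ZipOutputStream):

import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

class FreshEntryCopy {
    // Writes a freshly created ZipEntry that carries only the attributes we
    // need (here the name and the timestamp). Its compressedSize is unset,
    // so ZipOutputStream.closeEntry() no longer compares the old compressed
    // size against what the current Deflater produced.
    static void copyEntry(ZipFile zf, ZipEntry old, ZipOutputStream zos) throws IOException {
        ZipEntry fresh = new ZipEntry(old.getName());
        fresh.setTime(old.getTime());
        zos.putNextEntry(fresh);
        try (InputStream is = zf.getInputStream(old)) {
            is.transferTo(zos);
        }
        zos.closeEntry();
    }
}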

For the modular jar file case, creating a new entry is not feasible,
because at that specific location we are dealing with JarEntry entries
which can contain additional information. Unfortunately, it is not
possible to set this information on a newly created JarEntry. But we can
still reset the "compressedSize" of the JarEntry to "-1", which also
prevents the additional check after the entry has been written.
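
Again just as an illustration (not the actual patch; the helper names
are made up), resetting the compressed size amounts to this:

import java.io.IOException;
import java.io.InputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

class ResetCompressedSize {
    // Re-uses the JarEntry read from the input jar (so its extra attributes
    // are kept), but clears the recorded compressed size first. With
    // compressedSize == -1 the entry is treated as "size unknown" and the
    // size check in closeEntry() is skipped.
    static void copyEntry(JarFile jf, JarEntry je, JarOutputStream jos) throws IOException {
        je.setCompressedSize(-1);   // forget what the original Deflater produced
        jos.putNextEntry(je);
        try (InputStream is = jf.getInputStream(je)) {
            is.transferTo(jos);
        }
        jos.closeEntry();
    }
}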

Thank you and best regards,
Volker

PS: and by the way, this is the last bug related to the wrong usage of
ZipOutputStream.putNextEntry() within the OpenJDK :) I've scanned all
the sources and couldn't find any other suspicious places.
