https://issues.apache.org/bugzilla/show_bug.cgi?id=45396
Stefan Bodewig <[EMAIL PROTECTED]> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |RESOLVED
         Resolution|                            |FIXED
--- Comment #4 from Stefan Bodewig <[EMAIL PROTECTED]> 2008-07-16 06:06:07 PST ---
Same machine, svn revision 677272:
==> Benchmarking big files
Apache write warmup done
Apache write: 3407 [ms]
JDK write warmup done
JDK write: 3297 [ms]
Apache read warmup done
Apache read: 422 [ms]
JDK Warmup done
JDK read: 125 [ms]
==> Benchmarking small files
Apache write warmup done
Apache write: 4438 [ms]
JDK write warmup done
JDK write: 6563 [ms]
Apache read warmup done
Apache read: 1844 [ms]
JDK Warmup done
JDK read: 1359 [ms]
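For context, the comparison behind these numbers has roughly the following
shape (a sketch only, not the actual benchmark used for the numbers above;
the payload size and entry names are made up):

import java.io.ByteArrayOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipWriteBench {

    // payload size is made up for the sketch
    private static final byte[] DATA = new byte[10 * 1024 * 1024];

    public static void main(String[] args) throws Exception {
        writeJdk();                       // warmup
        long start = System.currentTimeMillis();
        writeJdk();
        System.out.println("JDK write: "
            + (System.currentTimeMillis() - start) + " [ms]");

        writeApache();                    // warmup
        start = System.currentTimeMillis();
        writeApache();
        System.out.println("Apache write: "
            + (System.currentTimeMillis() - start) + " [ms]");
    }

    private static void writeJdk() throws Exception {
        ZipOutputStream zos = new ZipOutputStream(new ByteArrayOutputStream());
        zos.putNextEntry(new ZipEntry("big"));
        zos.write(DATA);                  // whole buffer in a single call
        zos.closeEntry();
        zos.close();
    }

    private static void writeApache() throws Exception {
        org.apache.tools.zip.ZipOutputStream zos =
            new org.apache.tools.zip.ZipOutputStream(new ByteArrayOutputStream());
        zos.putNextEntry(new org.apache.tools.zip.ZipEntry("big"));
        zos.write(DATA);                  // whole buffer in a single call
        zos.closeEntry();
        zos.close();
    }
}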
Deflater seems to copy its input around, since I can see higher memory
consumption during the tests of the Ant code. There is no hint of this in the
Javadocs, and I have no idea why chunking the original input should help -
other than that it helps the native implementation of Sun's Deflater class.
I've searched through the zlib and InfoZIP code bases for any reference to
good byte chunk sizes to pass to the compression library and found that
InfoZIP's zip uses between 2 kB (SMALL_MEM) and 16 kB (LARGE_MEM). I've
changed the code to feed the Deflater in 8 kB blocks, which has the side
effect of being a no-op when ZipOutputStream is used via <zip> and friends.
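The idea of the change, sketched with made-up names (this is not the
committed code):

import java.util.zip.Deflater;

public class ChunkedDeflate {

    private static final int DEFLATER_BLOCK_SIZE = 8 * 1024;

    // Feed the Deflater 8 kB at a time instead of handing it the whole input
    // buffer in one setInput() call.  Output is discarded here; the real
    // ZipOutputStream writes it to the underlying stream.
    public static long deflateChunked(Deflater def, byte[] input) {
        byte[] outBuf = new byte[4 * 1024];
        long totalOut = 0;
        for (int offset = 0; offset < input.length;
             offset += DEFLATER_BLOCK_SIZE) {
            int len = Math.min(DEFLATER_BLOCK_SIZE, input.length - offset);
            def.setInput(input, offset, len);
            while (!def.needsInput()) {
                totalOut += def.deflate(outBuf);
            }
        }
        def.finish();
        while (!def.finished()) {
            totalOut += def.deflate(outBuf);
        }
        return totalOut;
    }
}

8 kB sits in the middle of InfoZIP's SMALL_MEM/LARGE_MEM range and matches
the buffer size Ant's tasks already use.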
Ant's tasks have always read the file contents in 8 kB chunks and written
those blocks to the ZipOutputStream - so Ant's tasks have never been hit by
the poor performance on big files.
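That pattern is essentially the following (again just a sketch, not the
actual task code):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.tools.zip.ZipEntry;
import org.apache.tools.zip.ZipOutputStream;

public class CopyInChunks {

    // Copy one file into an already open ZipOutputStream the way the tasks
    // do it: 8 kB read buffer, each block written straight to the stream.
    public static void addFile(ZipOutputStream zOut, String path,
                               String entryName) throws IOException {
        zOut.putNextEntry(new ZipEntry(entryName));
        byte[] buffer = new byte[8 * 1024];
        InputStream in = new FileInputStream(path);
        try {
            int count;
            while ((count = in.read(buffer)) > 0) {
                zOut.write(buffer, 0, count);
            }
        } finally {
            in.close();
        }
        zOut.closeEntry();
    }
}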