Giving the improved bzip2 code a second chance?

2009-03-26 Thread Stefan Bodewig
Hi all,

with Ant 1.7.0 we changed the bzip2 code to make it a lot faster and
reverted the change in 1.7.1 because it was creating corrupt archives.

Meanwhile the Hadoop folks have been using the 1.7.0 code and claim
they have found and fixed the problem (a single + 1 missing somewhere
IIUC).

I think we have a unit test that failed with the 1.7.0 code and passes
with 1.7.1 - should we give the Hadoop fixed code a second chance if
it passes all our tests?

Stefan

-
To unsubscribe, e-mail: dev-unsubscr...@ant.apache.org
For additional commands, e-mail: dev-h...@ant.apache.org



Re: Giving the improved bzip2 code a second chance?

2009-03-26 Thread Peter Reilly
I am testing this patch at the moment.

I am compressing with the new bz2 code, uncompressing
with the command-line bunzip2, and comparing the
files.

The test covers ~100,000 files, and I have had to
use the computer for other things - so the test
is not yet complete.


<project name="testbz" default="t"
         xmlns:ac="antlib:net.sf.antcontrib">
  <import file="src/ant/simple.xml"/>
  <property name="space" location="/media/disk/preilly/space"/>
  <property name="checkstatus" value="if [ ! $? == 0 ] ; then exit 1; fi"/>
  <macrodef name="testbz">
    <attribute name="file"/>
    <sequential>
      <local name="bname"/>
      <local name="target"/>
      <basename file="@{file}" property="bname"/>
      <property name="target" location="${space}/${bname}"/>
      <delete quiet="yes" file="${target}"/>
      <delete quiet="yes" file="${target}.bz2"/>
      <bzip2 src="@{file}" destfile="${target}.bz2"/>
      <ac:bash failonerror="true" dir="${space}">
        bunzip2 '${bname}.bz2'
        ${checkstatus}
        cmp '${bname}' '@{file}'
        ${checkstatus}
        rm '${bname}'
        true
      </ac:bash>
    </sequential>
  </macrodef>
  <target name="t">
    <ac:for param="file">
      <fileset dir="/media/disk" excludes="preilly/**,**/*$*"/>
      <ac:sequential>
        <testbz file="@{file}"/>
      </ac:sequential>
    </ac:for>
  </target>
</project>
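
For reference, the round-trip check the <ac:bash> block performs can be sketched as a plain shell function. This is a minimal sketch, assuming stock bzip2/bunzip2 on the PATH; in the actual test above the compress step uses Ant's patched <bzip2> task rather than the bzip2 command, and only the decompression is done by the reference command-line tool:

```shell
# Sketch of the per-file round-trip integrity check:
# compress, decompress with the reference tool, byte-compare.
roundtrip_check() {
  f=$1
  b=$(basename "$f")
  work=$(mktemp -d)
  # In the real test this step uses Ant's <bzip2> task (the code under test).
  bzip2 -c "$f" > "$work/$b.bz2"
  # Decompress with the reference command-line tool.
  bunzip2 "$work/$b.bz2"
  # Byte-for-byte comparison against the original.
  if cmp -s "$work/$b" "$f"; then echo OK; else echo CORRUPT; fi
  rm -rf "$work"
}
```

Any file for which the new compressor produces output that the stock bunzip2 cannot restore bit-for-bit would print CORRUPT.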


It looks good at the moment.

Peter


On Thu, Mar 26, 2009 at 1:20 PM, Stefan Bodewig <bode...@apache.org> wrote:
> Hi all,
>
> with Ant 1.7.0 we changed the bzip2 code to make it a lot faster and
> reverted the change in 1.7.1 because it was creating corrupt archives.
>
> Meanwhile the Hadoop folks have been using the 1.7.0 code and claim
> they have found and fixed the problem (a single + 1 missing somewhere
> IIUC).
>
> I think we have a unit test that failed with the 1.7.0 code and passes
> with 1.7.1 - should we give the Hadoop fixed code a second chance if
> it passes all our tests?
>
> Stefan






Re: Giving the improved bzip2 code a second chance?

2009-03-26 Thread Stefan Bodewig
On 2009-03-26, Peter Reilly <peter.kitt.rei...@gmail.com> wrote:

> I am compressing with the new bz2 code, uncompressing
> with the command-line bunzip2, and comparing the
> files.
>
> The test covers ~100,000 files, and I have had to
> use the computer for other things - so the test
> is not yet complete.

Ouch, didn't know that.

Thank you for performing the tests!

Stefan
