Stefan,

Thank you so much for getting this to me so quickly. I'm glad it turned out
to be simple. The script below is a nicely declarative version of what I
needed.

I'll extract the performance information from these files and let you know
what I find. I'm hard at work graphing data with Google Charts to provide a
portable format for the visualizations.

Have a good weekend.

Cheers,

Kent

-----Original Message-----
From: Stefan Bodewig [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 12, 2008 6:26 AM
To: [EMAIL PROTECTED]
Cc: general@gump.apache.org
Subject: Re: Test data wanted

On Wed, 10 Sep 2008, <[EMAIL PROTECTED]> wrote:

> Good to hear from you again. Thank you for your kind offer of help. 
> Given the time I have to give to the task and the rustiness of my Unix 
> skills, I suspect it would take as much elapsed time for me to ferret 
> out the files as it would for you to gather them.

It turned out that Ant was pretty well suited for the task,

  <zip destfile="${target}/testresults.zip">
    <!-- any readable XML file that looks like a JUnit report -->
    <fileset dir="${gump.base}" includes="**/*.xml" excludes="cvs/">
      <and>
        <readable/>
        <contains text="&lt;testsuite"/>
      </and>
    </fileset>
  </zip>

and about ten minutes of waiting was really all it took to collect all XML
files that contain "<testsuite".  There will certainly be false positives in
the zip, like Ant's own AntUnit tests (<antunit> creates XML output very
similar to <junit>'s), but I'm sure you can weed those out.
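In case it helps with the weeding, a rough Python sketch of one way to sort
the archive -- the root-element/attribute check is just my assumption about
how to tell a <junit> report from look-alike XML, so treat it as a starting
point rather than the definitive filter:

```python
import xml.etree.ElementTree as ET
import zipfile

def junit_reports(zip_path="testresults.zip"):
    """Yield (name, root element) for entries that look like <junit> reports.

    Keeps only XML whose root is <testsuite> and that carries a "tests"
    counter attribute; other XML that merely contained the string
    "<testsuite" (which is all the <contains> selector checked) is skipped.
    """
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if not name.endswith(".xml"):
                continue
            try:
                root = ET.fromstring(zf.read(name))
            except ET.ParseError:
                continue  # the selector matched non-well-formed files too
            if root.tag == "testsuite" and "tests" in root.attrib:
                yield name, root
```

Since zipfile accepts file-like objects as well as paths, the same function
works on an in-memory archive, which makes it easy to try out.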

> When you get a chance, I'd appreciate it if you could put them 
> together for me in whatever format is easy for you to create.

vmgump's webserver doesn't serve home dirs anymore and I don't want to
fiddle with it right now, so I've moved the zip over to my place (about
8MB).  Grab it from <http://stefan.samaflost.de/code/testresults.zip>;
I'll delete it sometime in the next few days.

> Just as an introduction, one of the graphs I created shows the number 
> of tests per test suite and the number of failures and errors per test 
> suite.

JUnit's notion of a suite and Ant's are not identical; this stems from the
way Thomas Haas and I used JUnit when we wrote the <junit> task in May/June
2000 and has carried over from there.

Neither of us used explicit TestSuites, only classes inheriting from
TestCase, and this is Ant's expectation: a <testsuite> in Ant's terms is
whatever invoking the static suite() method or automatically extracting all
test methods of a single class yields (or, more recently, what you get by
wrapping the test class in a JUnit4TestAdapter).  You get exactly one
<testsuite> for each class the TestRunner has been invoked on.
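In report terms, that means the counters for your graphs hang off each
per-class <testsuite> element.  A sketch of reading them back (the attribute
names are the ones the XML formatter writes; the helper name itself is made
up):

```python
import xml.etree.ElementTree as ET

def suite_counts(xml_text):
    """Return (suite name, tests, failures, errors) for one per-class report."""
    suite = ET.fromstring(xml_text)
    return (suite.get("name"),
            int(suite.get("tests", 0)),
            int(suite.get("failures", 0)),
            int(suite.get("errors", 0)))
```

Summing the tuples over all files in the zip would give the per-suite totals
you described graphing.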

> (The x axis is roughly 2^n.) The astonishing thing for me in this case 
> is the fairly clear power law distribution in the number of failures. 
> Why in the world would that be?

You'd probably need to look at how the data evolves over time to really
understand what happens here.  Are the few cases with many failures simply
tests that are known to fail and get ignored over time?  Or some sort of
refactoring that left failing tests behind with no time to adapt them?

Stefan

