Leo Simons wrote:
On Wed, Mar 22, 2006 at 07:15:28AM -0500, Geir Magnusson Jr wrote:
Pulling out of the various threads where we have been discussing, can we agree on the problem :

We have unique problems compared to other Java projects because we need to find a way to reliably test the things that are commonly expected to be a solid point of reference - namely the core class library.

Further, we've been implicitly doing "integration testing" because - so far - the only way we've been testing our code has been 'in situ' in the VM - not in an isolated test harness. To me, this turns it into an integration test.

Sure, we're using JUnit, but because we are implementing core java.* APIs, we aren't testing with a framework that has been independently tested for correctness, the way we would be when testing any other code.

I hope I got that idea across - I believe that we have to go beyond normal testing approaches because we don't have a normal situation.

Where we define 'normal situation' as "running a test framework on top of
the sun jdk and expecting any bugs to not be in that jdk". There are plenty
of projects out there that have to test things without having such a
"stable reference JDK" luxury... I imagine that testing GCC is just as
hard as this problem we have here :-)

Is it the same? We need a running JVM + class library to test the class library code.


So I think there are three things we want to do (adopting the terminology that came from the discussion with Tim and Leo):

1) implementation tests
2) spec/API tests (I'll bundle them together)
3) integration/functional tests

I believe that for #1, the issues related to being on the bootclasspath don't matter, because we aren't testing that aspect of the classes (which is how they behave integrated w/ the VM and security system) but rather the basic internal functioning.

I'm not sure how to approach this, but I'll try. I'd love to hear how Sun, IBM or BEA deals with this, or be told why it isn't an issue :)

Implementation tests : I'd like to see us be able to do #1 via the standard same-package technique (i.e. testing a.b.C w/ a.b.CTest), but I suspect we'll run into a tangle of classloader problems, because we want to be testing java.* code in a system that already has java.* code. Can anyone see a way we can do this - test the class library in isolation - using some test harness + any known-good JRE, like Sun's or IBM's?
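For concreteness, here's a minimal sketch of the same-package technique mentioned above. The class and method names are purely illustrative (not anything in the Harmony tree), and plain assertions stand in for JUnit so the snippet is self-contained:

```java
// Hypothetical sketch of the same-package technique: the test lives in the
// same package as the class under test, so it can reach package-private
// internals directly. All names here are illustrative.
package a.b;

// The class under test, with a package-private helper we want to exercise.
class C {
    static int clamp(int v, int lo, int hi) {   // package-private
        return Math.max(lo, Math.min(hi, v));
    }
}

// Same-package test: a.b.CTest testing a.b.C. Plain assertions stand in
// for JUnit to keep the sketch self-contained.
public class CTest {
    public static void main(String[] args) {
        if (C.clamp(5, 0, 10) != 5) throw new AssertionError("in range");
        if (C.clamp(-3, 0, 10) != 0) throw new AssertionError("below range");
        if (C.clamp(42, 0, 10) != 10) throw new AssertionError("above range");
        System.out.println("CTest passed");
    }
}
```

The catch, of course, is that this only works untangled for a.b.C; for java.util.HashMap the test class would have to live in a java.* package, which is exactly where the classloader trouble starts.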

Ew, that won't work in the end since we should assume our own JRE is going
to be "known-better" :-). But it might be a nice way to "bootstrap" (e.g.
we test with an external JRE until we satisfy the tests and then we switch
to testing with an earlier build).

Let's be clear - even using our own "earlier build" doesn't solve the problem I'm describing, because as it stands now, we don't use "earlier build" classes to test with - we use the code we want to test as the class library for the JRE that's running the test framework.

The classes that we are testing are also the classes used by the testing framework. IOW, any of the java.* classes that JUnit itself needs (ex. java.util.HashMap) are exactly the same implementation that it's testing.

That's why I think it's subtly different than a "bootstrap and use version - 1 to test" problem. See what I mean?

I'm very open to the idea that I'm missing something here, but I'd like to know that you see the issue - that when we test, we have

  VM + "classlib to be tested" + JUnit + testcases

where the testcases are testing the classlib the VM is running JUnit with.

There never is isolation of the code being tested :

  VM + "known good classlib" + JUnit + testcases

unless we have some framework where

  VM + "known good classlib" + JUnit
      + framework("classlib to be tested")
           + testcases

and it's that notion of "framework()" that I'm advocating we explore.
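One hedged sketch of what that "framework()" might look like is a child-first class loader that loads the classlib-under-test itself while delegating everything else to the known-good parent. Note the big caveat: a stock VM refuses user-defined classes in prohibited java.* package names, so the real thing would need bootclasspath tricks or package renaming; this only illustrates the isolation pattern, and all names are hypothetical:

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

// Sketch of the "framework()" idea: a child-first loader that defines
// classes under a chosen prefix itself (isolating the code under test)
// and delegates everything else to the known-good parent classlib.
// Caveat: defineClass() rejects real java.* names on a stock VM, so the
// classes under test would need to be renamed or put on the bootclasspath.
public class IsolatingLoader extends ClassLoader {
    private final String prefix;  // packages to isolate, e.g. "tested.java."

    public IsolatingLoader(String prefix, ClassLoader parent) {
        super(parent);
        this.prefix = prefix;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null && name.startsWith(prefix)) {
                byte[] bytes = readClassBytes(name);   // child-first for isolated packages
                if (bytes != null) c = defineClass(name, bytes, 0, bytes.length);
            }
            if (c == null) c = super.loadClass(name, false); // everything else: known-good parent
            if (resolve) resolveClass(c);
            return c;
        }
    }

    private byte[] readClassBytes(String name) {
        String path = name.replace('.', '/') + ".class";
        ClassLoader p = getParent();
        try (InputStream in = (p != null) ? p.getResourceAsStream(path)
                                          : ClassLoader.getSystemResourceAsStream(path)) {
            if (in == null) return null;
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);
            return out.toByteArray();
        } catch (java.io.IOException e) {
            return null;
        }
    }
}
```

JUnit and the testcases would then run on the known-good classlib, while anything under the isolated prefix is defined fresh by this loader per test run.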



For code that has side effects or for which we can conceivably create
verifiable side effects (where side effect is something outside of the
whole "java environment") we can try and produce known-good input and
output. There's a variety of ways to automate things like that, for
example by using tracing on the relevant bits, manually verifying a
"known-good" trace, storing it, and comparing future runs.
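The trace-and-compare idea above could be sketched roughly like this; the class is hypothetical, not an existing Harmony harness:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the trace-and-compare idea: record a trace of observable side
// effects from the code under test, verify one run by hand, store it as a
// "golden" trace, and fail any future run that diverges from it.
// All names here are illustrative.
public class GoldenTrace {
    private final List<String> events = new ArrayList<>();

    public void record(String event) { events.add(event); }

    // Compare this run against the manually verified golden trace;
    // returns a description of the first mismatch, or null if they agree.
    public String diff(List<String> golden) {
        int n = Math.max(events.size(), golden.size());
        for (int i = 0; i < n; i++) {
            String got = i < events.size() ? events.get(i) : "<missing>";
            String want = i < golden.size() ? golden.get(i) : "<extra>";
            if (!got.equals(want)) {
                return "event " + i + ": expected " + want + ", got " + got;
            }
        }
        return null;
    }
}
```

A run would record events like "open(/tmp/x)" and "write(5 bytes)" as they happen, then diff against the stored, hand-verified trace from the known-good run.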

But I suspect there is a whole lot of code that is either inherently all
but side-effect-free, or where testing the side effects automatically
amounts to doing an integration test.

Spec/API tests : these are, IMO, a kind of integration test, because proper spec/API behavior *is* dependent on factors beyond the actual code itself (like classloader configuration, and security context). Because of this, the *.test.* pattern makes perfect sense. Assuming we could produce something useful for #1 (i.e. a test harness/framework), could we then augment it to simulate the classloader config + security config that we'd get in a real VM? That will give us the ability to test in isolation of the VM, and also let us 'break' the environment to ensure that the code fails in a predictable way.

Integration/functional : this is a whole range of things, from doing the Spec/API tests in an actual VM, to the tests that exercise the code through interaction with external systems (like network, RMI, GUI, etc.)

***

Now, it might be suggested that we just skip the implementation testing (#1), do #2 and #3 as we are now, and hope we have a good enough test suite. It could be argued that when Sun started, they didn't have a known-good platform to do implementation testing on like we do now. I don't know if that's true.

The difference is that we need to produce something of the same quality as Sun's Java 5, not Sun's Java 1.0. We've had 11 years since 1.0 to learn about testing, but they've had 11 years to get things solid.

What to do....

No idea! Cool!

We should do #2 and #3 regardless.

We're already doing #2 and #3.  The lack of #1 is what bothers me.

Identifying which-is-which (#1, #2, #3) in all the current test suites
seems like a good next step. Obviously that doesn't really help us get
that implementation testing framework you describe, but it will help
define the needs more unambiguously.

Further ideas...

-> look at how the native world does testing
   (hint: it usually has #ifdefs, uses perl along the way, and it is certainly
    "messy")
   -> emulate that

-> build a bigger, better specification test
   -> and somehow "prove" it is "good enough"

-> build a bigger, better integration test
   -> and somehow "prove" it is "good enough"

I'll admit my primary interest is the last one...

The problem I see with the last one is that the "parameter space" is *huge*.

I believe that your preference for the last one comes from the Monte-Carlo style approach that Gump uses - hope that your test suite has enough variance that you "push" the thing being tested through enough of the parameter space that you can be comfortable you would have exposed the bugs. Maybe.

geir
