On 28 February 2011 09:36, Peter <[email protected]> wrote:

> If the test uses only the public API then it would make sense to keep the
> test separate from the module, e.g. a Jini platform compliance test or, in
> this case, a JavaSpaces performance test, perhaps maintained in its own
> module with other JavaSpaces tests. It should be external to the outrigger
> module so that it can be run against both implementations.
>
> Unit tests or performance tests that are implementation-specific should
> remain with the module.
>
>
Does that mean they ship as part of the deployment .jar?


> We have a number of tests that test either the platform or the service for
> compliance.  This includes the join and discovery compliance tests, which
> are currently unmaintained.  These should be maintained separately from
> other modules.
>
> So to sum up: you'd first develop a performance test using the qa harness
> module as a library dependency. This performance test could be bundled with
> other JavaSpaces performance tests, and these would be run, without
> recompilation, against the new module implementation.
>
> The mechanics of implementing that have not been worked out, so this is
> hypothetical.  I wonder what thoughts others might have about how it might
> be implemented or structured?
>
>
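
Concretely, the kind of public-API performance test Peter describes might
look something like the sketch below. It is a minimal sketch written only
against the net.jini.space.JavaSpace interface; BenchEntry, the
WriteTakeBenchmark class, and the way the space proxy is obtained are
illustrative assumptions, not anything from this thread.

    import net.jini.core.entry.Entry;
    import net.jini.core.lease.Lease;
    import net.jini.space.JavaSpace;

    public class WriteTakeBenchmark {

        /** Hypothetical entry type used only by this benchmark. */
        public static class BenchEntry implements Entry {
            public Integer id;             // Entry fields must be public objects
            public BenchEntry() {}         // Entry requires a public no-arg constructor
            public BenchEntry(Integer id) { this.id = id; }
        }

        // The space proxy is passed in rather than discovered here; how it
        // is obtained (unicast lookup, ServiceDiscoveryManager, a harness
        // fixture, ...) doesn't matter, because the test depends only on
        // the public JavaSpace interface.
        public static void run(JavaSpace space, int iterations) throws Exception {
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                space.write(new BenchEntry(i), null, Lease.FOREVER);
                space.take(new BenchEntry(i), null, Long.MAX_VALUE);
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("%d write/take pairs in %.3f s (%.1f ops/s)%n",
                              iterations, seconds, (2.0 * iterations) / seconds);
        }
    }

Because nothing here touches outrigger internals, such a compiled test could
be run, without recompilation, against any JavaSpaces implementation, which
is the property the summary above relies on.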

I'm not yet ready to buy modules as a sensible dividing point for tests.
Tests are typically associated with packages, not modules, IMHO.

It feels like an impedance mismatch, or maybe I just don't like how much the
build/deploy/package mechanism is a factor in a discussion about testing.


> Peter.
>
> >
> > The issue I'm trying to understand is not the release of production
> > code, but how to organize the benchmarks that need to be run in order
> > to decide whether to make a change. Depending on the benchmark result,
> > the change may not happen at all.
> >
> > Patricia
>
>
