ant elder wrote:
> Moving some of the testing discussion out of the samples thread...
> 
> It's not completely clear to me what the distinction is between 'technology
> samples' and functional tests. There are some JavaScript samples in the
> samples directory:
> http://svn.apache.org/repos/asf/incubator/tuscany/java/samples/JavaScript/.
> Which of these should be samples and which should be tests, and where should
> they fit in the Tuscany directory structure? I'm quite happy to move some or
> all of these or change them to be test cases; tell me what you'd like.
> 

To me, the purpose of a technology sample is to allow someone to learn
and understand a particular piece of technology (such as a construct in
the programming model); it is a learning/teaching aid. As such, the
emphasis should be on clearly describing the concept the sample
illustrates, which requires things like clear source code, a lack of
distracting constructs (such as error handling), a simple but very
explanatory UI, and so on. The sample is most valuable when distributed
in source form, perhaps with config files for different IDEs that make
it easy to view.
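
To make this concrete, here is a rough sketch of the kind of code I'd
expect in a sample. The HelloWorld names are made up for illustration,
and I'm assuming the org.osoa.sca.annotations API from the spec; the
point is that the whole concept is visible at a glance, with nothing
else in the way:

    // HelloWorldService.java - the service contract the sample exposes
    package sample.helloworld;

    import org.osoa.sca.annotations.Remotable;

    @Remotable
    public interface HelloWorldService {
        String getGreetings(String name);
    }

    // HelloWorldServiceImpl.java - the component implementation: no
    // error handling, no clever constructs, just the one concept being
    // illustrated (an SCA component is a POJO exposed as a service)
    package sample.helloworld;

    import org.osoa.sca.annotations.Service;

    @Service(HelloWorldService.class)
    public class HelloWorldServiceImpl implements HelloWorldService {
        public String getGreetings(String name) {
            return "Hello " + name;
        }
    }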

On the other hand, a functional test of the same technology is intended
to check that the function works as advertised. It involves mechanical
testing not just of the main code paths but also of documented but
lesser-used functions, as well as error paths.
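
Something along the following lines is the shape I have in mind. The
GreetingService here is just a hypothetical stand-in for whatever piece
of technology is under test; note how the test mechanically walks the
main path, a documented but lesser-used overload, and an error path:

    import junit.framework.TestCase;

    public class GreetingFunctionalTestCase extends TestCase {

        // Minimal stand-in implementation so the sketch is self-contained.
        static class GreetingService {
            String greet(String name) {
                return greet(name, "en");
            }
            String greet(String name, String locale) {
                if (name == null) {
                    throw new IllegalArgumentException("name must not be null");
                }
                return ("fr".equals(locale) ? "Bonjour " : "Hello ") + name;
            }
        }

        private GreetingService service;

        protected void setUp() {
            service = new GreetingService();
        }

        // Main code path.
        public void testDefaultGreeting() {
            assertEquals("Hello World", service.greet("World"));
        }

        // Documented but lesser-used function.
        public void testLocalizedGreeting() {
            assertEquals("Bonjour World", service.greet("World", "fr"));
        }

        // Error path: documented behaviour for bad input.
        public void testNullNameIsRejected() {
            try {
                service.greet(null);
                fail("expected IllegalArgumentException");
            } catch (IllegalArgumentException expected) {
                // this is the documented failure mode
            }
        }
    }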

In many ways these exercise similar technology, but for different
purposes - one is illustration, the other is verification.

Taking the JavaScript samples, I think we should keep all of them as
samples and build them so that they clearly illustrate how SCA can be
used to build JavaScript components (including ones based on E4X) and
how it works with JavaScript UI frameworks such as Dojo using JSON-RPC.

I also think we need to increase the amount of testing done in the build
of the container.js and binding.jsonrpc modules. I recently made a
change to the model that impacted container.js but was not caught by its
test suite; it was only caught by the sample. I think (and I think Jim
agrees with me) that this is a problem - regressions like this should be
caught by test coverage in the build, not just because the code happened
to be exercised by a sample.
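
For example, the kind of test I'd want in the container.js build is
sketched below. This is purely illustrative - ModelLoader and
JavaScriptComponentType are hypothetical stand-ins for whatever the
module actually exposes - but the idea is to load the model the way the
runtime does and assert on the pieces the container depends on, so a
model change fails in the module's own build rather than in a sample:

    import junit.framework.TestCase;

    public class JavaScriptModelRegressionTestCase extends TestCase {

        public void testComponentTypeSurvivesModelChange() throws Exception {
            // Load the side file the same way the runtime would
            // (hypothetical loader API).
            ModelLoader loader = new ModelLoader();
            JavaScriptComponentType type =
                    loader.load(getClass().getResource("HelloWorld.componentType"));

            // Assert on what container.js depends on, so renaming or
            // dropping it breaks this test, not just the sample.
            assertNotNull(type);
            assertEquals(1, type.getServices().size());
        }
    }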

> Could there be some specific examples of how we should be doing functional
> and integration testing of things like the WS binding entryPoints and
> externalServices? Previously this was done by running the WS samples in
> testing/tomcat; what's a better approach?
> 

There is already an integration test for the WS entryPoint in the Tomcat
module itself that is run as part of the main build. It would not be
hard to add one for externalService using a mock servlet to implement
the provider (it would be similar to the one used to test the
ModuleContext setup for the servlet environment). Given that I have
added /all/ the integration tests we currently have, I would appreciate
it if someone else would step up to the plate.
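
To sketch the mock-provider idea: a trivial servlet plays the remote web
service and returns a canned response. The test would register it with
the same embedded Tomcat harness the entryPoint test uses, point the
externalService's WS binding at its URL, invoke through the SCA API and
assert on the decoded result. The SOAP envelope below is illustrative
and not tied to any real WSDL:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class MockProviderServlet extends HttpServlet {

        static final String CANNED_RESPONSE =
                "<?xml version=\"1.0\"?>"
                + "<soapenv:Envelope xmlns:soapenv="
                + "\"http://schemas.xmlsoap.org/soap/envelope/\">"
                + "<soapenv:Body>"
                + "<greetResponse><result>Hello World</result></greetResponse>"
                + "</soapenv:Body></soapenv:Envelope>";

        protected void doPost(HttpServletRequest request,
                              HttpServletResponse response)
                throws ServletException, IOException {
            // Return the canned response regardless of the request; a
            // fancier mock could also assert on the incoming SOAP message.
            response.setContentType("text/xml");
            response.getWriter().write(CANNED_RESPONSE);
        }
    }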

> I really don't understand why samples shouldn't be tested as part of the
> regular build. What is the old ground being rehashed? The best I can find
> is the comment at the very end of this email, which no one posted any
> disagreements to:
> http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200603.mbox/[EMAIL PROTECTED]
> 

There's relevant discussion here and on other threads:
http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200601.mbox/[EMAIL PROTECTED]

> I'd be careful with -1'ing commits where you don't like the test coverage.
> It would be far better to offer guidance and specific constructive
> criticism, or even help add tests if you think some code is lacking. We need
> to foster an environment where people want to join in and help; throwing
> around vetoes isn't going to do that, and if using vetoes becomes common
> practice they will likely be used back at you when you least expect or want
> them. Everyone acknowledges the current code needs improved testing, so if
> nothing else -1s would be a bit hypocritical. Vetoes are always available as
> an option of last resort, but I think they're best kept for that - a last
> resort - after attempts to resolve a problem have failed.
> 

I was proposing (and plan to start) to -1 commits that have NO test
coverage.

We have attempted to resolve this problem through guidance and
constructive criticism. You say we all acknowledge that the current code
needs improved testing; we may have agreed we have a problem, but we are
not acting to make the improvements that would resolve it. Vetoing
changes from people who make the problem worse (and who are thereby
acting against what we agreed on) is IMO an appropriate use of a veto.

Hypocritical? No, hypocrisy would be saying we need more testing but not
doing anything about it. I have already committed to adding tests for
the loaders - which tests are you going to add?

--
Jeremy
