I was pointing out the reasoning behind it (which is what Rick asked),
which should be fairly well known given the number of times we have
had this particular discussion.
IMO we need integration tests that pass, and that would include testing
the samples. Given the resources required to run real integration
tests, it is impractical to run them as part of a developer build.
They should run on an ongoing basis, perhaps using a framework like
Continuum.
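To make the split concrete, one common way to keep integration tests out of the developer build while still letting a CI server like Continuum run them is a Maven profile that is only activated explicitly. This is a hypothetical sketch, not the project's actual configuration; the profile id and the use of the Failsafe plugin are illustrative assumptions.

```xml
<!-- Hypothetical pom.xml fragment: integration tests are bound only
     inside an "integration" profile, so a plain "mvn install" (the
     developer build) skips them, while a CI job can run
     "mvn -Pintegration verify" on an ongoing basis. -->
<profiles>
  <profile>
    <id>integration</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-failsafe-plugin</artifactId>
          <executions>
            <execution>
              <goals>
                <!-- run *IT.java tests, then fail the build on errors -->
                <goal>integration-test</goal>
                <goal>verify</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```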
A plea to everyone - let's not rehash this all over again unless
something has changed to make it a productive discussion.
--
Jeremy
On Oct 9, 2006, at 7:28 AM, Andy Piper wrote:
At 15:21 09/10/2006, Jeremy Boynes wrote:
In many cases building the samples does not actually prove anything, as
they are not executed. This applies, for example, to the webapp-based
samples we have. Even when they are executed, we still don't know that
they run in the end-user environment - e.g. the standalone samples
that run from SCATestCase but which fail to run from the launcher.
Where they should be built/run is as part of an integration test
suite. We don't have that ATM.
Better samples that build and don't run than samples that don't
build at all IMO.
andy
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]