On Wed, 11 Jan 2006, Katie Capps Parlante wrote:
* PreCheckInTests -- tests that developers should run before checking in
changes
* UnitTests -- automated tests that are run by the build system.
Developers are responsible for adding unit tests for the components they
develop. For every component in the system there should be corresponding
unit tests that can be run just to validate the functionality of that
component (a minimal sketch of such a test follows this list).
* FunctionalTests -- complete set of manual and automated tests to test
the functionality and performance of each feature in the release. These
test cases will match one-to-one with the test cases in the test
specification. Currently we have a very limited number of automated tests
for Chandler, and any help from the user community will be greatly
appreciated. If you have any expertise in writing automated tests for
desktop UI applications and would like to contribute to Chandler test
development, please contact us.
* Performance tests -- performance testing may be conducted as part of
functional tests to test the application startup time, response times,
memory leaks, CPU utilization, etc. The performance criteria will be
developed by the Product/Design team for each release.
* Integration tests -- test cases that test the application
completely from end to end, after all functional components are code
complete. This includes test cases with more complex scenarios than
functional tests.
* Regression tests -- subset of automated functional tests that will be
run nightly during the development cycle to ensure that no existing
functionality has been broken by new feature development.
* AcceptanceTests -- tests that anyone can run in order to "bless" a
milestone/release. This is a more extensive list of manual tests that
are conducted at the end of each milestone/release.
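To make the UnitTests bucket concrete, here is a minimal sketch of the kind of
per-component test meant above; the ContactList component and its add() method
are hypothetical and only illustrate the shape such a test could take:

    import unittest

    # Hypothetical component, used only to show the shape of a unit test;
    # a real test would import the Chandler component under test instead.
    class ContactList:
        def __init__(self):
            self.contacts = []

        def add(self, name):
            if not name:
                raise ValueError("name must be non-empty")
            self.contacts.append(name)

    class ContactListTest(unittest.TestCase):
        def test_add_stores_contact(self):
            cl = ContactList()
            cl.add("Katie")
            self.assertEqual(cl.contacts, ["Katie"])

        def test_add_rejects_empty_name(self):
            cl = ContactList()
            self.assertRaises(ValueError, cl.add, "")

    if __name__ == "__main__":
        unittest.main()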
Can we agree to use this terminology when talking about tests? What about
"PreCheckInTests"? Thoughts?
If you're implying that all these categories are mutually exclusive, then I
disagree with the terminology. I think that PreCheckInTests include all of the
tests above except:
- Performance tests: these tests are inherently squishy because
performance requirements tend to clash with other requirements. Getting
correct functionality and stability 'out there' is more important than
'acceptable performance' at first, except in very obviously buggy cases.
Holding up a check-in because it slows down performance is a guarantee of
paralysis. What if Alec's recent calendar improvements, instead of yielding
a 1% improvement (I quote), had yielded a 5% worsening?
- Integration tests: these can be very onerous to run; I don't think that
our testing framework is up to the task at this point of running them as a
regular part of everyday development work.
- ditto about acceptance tests.
In other words, PreCheckInTests include all tests verifying the absence of new
breakage by the impending check-in, where breakage is determined by a stack
trace, a crash, or the disabling of a previously functioning feature.
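As a rough sketch of that bar (not an actual Chandler script -- the module
names are made up), a pre-checkin gate could just run whatever automated
suites exist and refuse the check-in on any error or failed assertion:

    import sys
    import unittest

    # Hypothetical test module names, purely for illustration; a real run
    # would load whatever suites the build system already knows about.
    TEST_MODULES = ["test_contactlist", "test_calendar"]

    def main():
        loader = unittest.TestLoader()
        suite = unittest.TestSuite()
        for name in TEST_MODULES:
            suite.addTests(loader.loadTestsFromName(name))
        result = unittest.TextTestRunner(verbosity=1).run(suite)
        # Any error (stack trace) or failure counts as breakage: block the check-in.
        if not result.wasSuccessful():
            print("Pre-checkin tests FAILED -- do not check in.")
            sys.exit(1)
        print("Pre-checkin tests passed.")

    if __name__ == "__main__":
        main()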
Andi..
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Open Source Applications Foundation "Dev" mailing list
http://lists.osafoundation.org/mailman/listinfo/dev