On Jun 7, 2007, at 9:40 PM, Mikeal Rogers wrote:
I still think that including this metadata in each individual test makes it harder to discover which tests are being excluded: you would have to search all of the test files to find that information instead of opening a single file in an editor and just looking.
I can write you a piece of code in rt.py that tells you all the excluded or available tests if you want.
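Something along these lines would do it (just a rough sketch; rt.py doesn't have these hooks today, and the TEST_METADATA name and the "excluded"/"reason" keys are made-up placeholders for whatever per-test metadata we settle on):

    import importlib
    import pkgutil

    def report_excluded_and_available(package_name):
        """Walk a test package and report which modules opt out of running.

        Assumes each test module may define a module-level dict named
        TEST_METADATA with optional "excluded" and "reason" keys.
        """
        package = importlib.import_module(package_name)
        excluded, available = [], []
        for info in pkgutil.iter_modules(package.__path__):
            module = importlib.import_module(package_name + "." + info.name)
            metadata = getattr(module, "TEST_METADATA", {})
            if metadata.get("excluded"):
                excluded.append((info.name, metadata.get("reason", "no reason given")))
            else:
                available.append(info.name)
        print("Excluded:")
        for name, reason in excluded:
            print("  %s (%s)" % (name, reason))
        print("Available:")
        for name in available:
            print("  %s" % name)

    # e.g. report_excluded_and_available("tests")  -- whatever package holds the recorded tests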
rt.py is not what I'm talking about - writing code to do that is easy, and that's not the point. The point is that Dan and I feel that having a single file to maintain, which is how it's done for Functional Tests, is the simplest way to do it. Engineering a more complicated way to do it now doesn't make sense to me.
I think this boils down to a metadata feature that saves time for the test creator, but the test creator isn't the person who has to deal with that piece of metadata at all. It's Dan and myself (or others in QA/Build Release) who have to turn tests off on demand. I just think that not having it in a central file location makes that task harder.
If you're turning off a test, or you've fixed a test, you know which one it is and editing that file is trivial. Knowing which tests are excluded and which are available is a bit harder, but that can be determined easily in code, and I'm happy to write anything that will help you in this area.
Like I said above, writing code to solve it is not the point; it's like you just said: knowing which tests are being excluded when checking *visually* or *manually* will become harder.
For me, this boils down to the question of where we should include the information that tells the framework how to run tests. This needs to be extensible so we can cover a variety of cases and extend the flexibility of the framework without rewriting it. Having one big file with all the framework semantics for the individual tests, called out by the name of the file the module comes from, is _not_ extensible. This made sense for exclusions when we only called out tests by the script filename, and we are still doing this in rt.py, but the other ways we collect and run tests think in terms of test modules, not test files.
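To make the contrast concrete (the file name, test names, and dict keys below are invented for illustration, not what the framework actually uses):

    # Central file: one flat list keyed by script filename. Adding any
    # other kind of per-test information means inventing a new file or a
    # new column format.
    #
    #   excluded_tests.txt
    #   ------------------
    #   TestRecordedSharing.py    # hangs on teardown
    #   TestRecordedCalendar.py   # flaky on the Linux tinderbox
    #
    # Per-module metadata: each test module declares a dict the framework
    # reads, so new keys can be added without touching any file format.
    TEST_METADATA = {
        "excluded": True,
        "reason": "hangs on teardown",
        "platforms": ["win"],         # e.g. only skip on one platform
        "needs_clean_profile": True,  # an unrelated knob, same mechanism
    }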
I thought the current test framework already included this information within FunctionalTestSuite. This single-file method can easily be extended to include recorded tests and would also have the benefit of being a single place to run both functional and recorded tests. After all, aren't recorded tests just another kind of functional test?
Currently we are running tests by filename - is there a plan to run them some other way?
If we are no longer running recorded tests by filename, then how will tinderbox run them? How will developers use the existing --recordedTest command line parameter to try to reproduce a test failure?
I'm guessing that I'm missing information as to why recorded tests cannot use the same framework that the current functional tests use (with appropriate changes for the setup/teardown needs).
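For instance, I'd picture something like this in the existing suite file (a guess on my part - the names below are illustrative, not the actual FunctionalTestSuite contents):

    # Hypothetical single suite file listing both kinds of tests: the
    # runner (and tinderbox) keeps reading one place, and a flag like the
    # existing --recordedTest could still look up an individual entry by
    # name to reproduce a failure.
    functional_tests = [
        "TestAllDayEvent",
        "TestNewCollection",
    ]

    recorded_tests = [
        "RecordedTestImportICS",
        "RecordedTestSwitchTimezone",
    ]

    all_tests = functional_tests + recorded_tests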
---
Bear
Build and Release Engineer
Open Source Applications Foundation (OSAF)
[EMAIL PROTECTED]  http://www.osafoundation.org
[EMAIL PROTECTED]  http://code-bear.com
PGP Fingerprint = 9996 719F 973D B11B E111 D770 9331 E822 40B3 CD29
