Hi Karen,

On 8/11/11 2:00 PM, Karen Tung wrote:
Hi Drew,

Good work, this will indeed make running tests much easier.
I have a couple of comments/suggestions.

You define TEST_MAP in lines 48-76 of slim_regression_test.py to
hold the list of tests to run.  This means that whenever we add new
tests, we need to update this file.  Furthermore, as a user, I will
have to remember the names of the available tests, or read the code
in slim_regression_test.py to find out.  Perhaps we can do something
like the following to make it more user friendly and make future
maintenance easier.

Yes, we'll have to update the file, but as things stand right now we have to update tests.nose anyway ... more on this below.
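
For reference, TEST_MAP boils down to a static name-to-directory map. Here is a rough sketch of it, with the names and paths taken from the --help output below; the exact layout in slim_regression_test.py may well differ:

    # Sketch only: short test name -> test directory (or directories)
    # under the gate that nose should run.  The real TEST_MAP in
    # slim_regression_test.py may be laid out differently.
    TEST_MAP = {
        "boot":           ["lib/install_boot/test"],
        "common":         ["lib/install_common/test"],
        "dataobject":     ["lib/install_doc/test"],
        "engine":         ["lib/install_engine/test"],
        "ict":            ["lib/install_ict/test"],
        "logging":        ["lib/install_logging_pymod/test"],
        "manifest":       ["lib/install_manifest/test"],
        "manifest_input": ["lib/install_manifest_input/test"],
        "target":         ["lib/install_target/test"],
        "transfer":       ["lib/install_transfer/test"],
        "ai":             ["cmd/auto-install/test/"],
        "dc":             ["cmd/distro_const/checkpoints/test"],
        "installadm":     ["cmd/installadm/test"],
        "js2ai":          ["cmd/js2ai/modules/test"],
        "system-config":  ["cmd/system-config/test",
                           "cmd/system-config/profile/test"],
        "text-install":   ["cmd/text-install/test"],
    }

    # The group names ("all", "libraries", "commands") just expand to
    # lists of the individual names above; whether the real script
    # keeps them in a separate dict like this is a guess.
    GROUP_MAP = {
        "libraries": ["boot", "common", "dataobject", "engine", "ict",
                      "logging", "manifest_input", "manifest", "target",
                      "transfer"],
        "commands":  ["ai", "dc", "installadm", "js2ai", "system-config",
                      "text-install"],
    }
    GROUP_MAP["all"] = GROUP_MAP["libraries"] + GROUP_MAP["commands"]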


- Add an option to the slim_regression_test.py script to list
all the tests available and what "keywords" correspond to which tests.

You can simply run the script with the --help argument:

[mox:tests] > sudo ./slim_regression_test.py --help
Usage:  slim_regression_test.py [options] [test[,test]...]
Available tests:

       group tests
               all:  libraries, commands
          libraries:  boot, common, dataobject, engine, ict, logging, manifest_input, manifest, target, transfer
          commands:  ai, dc, installadm, js2ai, system-config, text-install

  individual tests
                ai:  cmd/auto-install/test/
              boot:  lib/install_boot/test
            common:  lib/install_common/test
        dataobject:  lib/install_doc/test
                dc:  cmd/distro_const/checkpoints/test
            engine:  lib/install_engine/test
               ict:  lib/install_ict/test
        installadm:  cmd/installadm/test
             js2ai:  cmd/js2ai/modules/test
           logging:  lib/install_logging_pymod/test
          manifest:  lib/install_manifest/test
    manifest_input:  lib/install_manifest_input/test
     system-config:  cmd/system-config/test, cmd/system-config/profile/test
            target:  lib/install_target/test
      text-install:  cmd/text-install/test
          transfer:  lib/install_transfer/test


Options:
  -h, --help            show this help message and exit
  -c CONFIG, --config=CONFIG
                        nose configuration file to use
  --suppress-results    suppress the printing of the results
  --hudson=HUDSON_NUM   hudson job number to use as a baseline

That lists all of the available tests and the keyword for each one.
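
Picking specific suites then just means passing their keywords as the [test[,test]...] argument from the Usage line above; for example, something along these lines (illustrative):

    [mox:tests] > sudo ./slim_regression_test.py ict,text-install

would run only the ICT and text-install suites.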



- Instead of using a static list, generate the list dynamically.
Of course, doing this will require that we define a convention for
how tests are laid out in the slim_source gate.  Perhaps we can do
something like the following.  In each of the test sub-directories,
we define a text file with a certain name.  A directory containing
this file is considered a test directory and will automatically be
added to the list of tests to be executed.  In this file, we can put
information about the "test suite" in that directory, such as the
test suite name.  The test suite name will be the short name that
people can use when they want to run a certain test suite.


I wouldn't have any issues with reorganizing some of our gate to make the test directories adhere to some standards, but I'm not sure we want to make those changes at this point. If we do, I would do things like:

- move the test directory to the root of the library/command (DC's tests are in $SRC/cmd/distro_const/checkpoints/test instead of $SRC/cmd/distro_const/test)
- combine multiple test directories into one test directory (AI and system-config have multiple directories of tests)

Maybe from there, we could try to automate "discovery" of tests, but I don't know.
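
To make that concrete, a discovery pass over the gate could look roughly like the sketch below. The marker file name (.test_suite) and its key/value format are made up for illustration; nothing like this exists in slim_regression_test.py today:

    # Sketch only: discover test suites by walking the gate for a
    # hypothetical ".test_suite" marker file.  Each marker file holds
    # simple "key: value" lines, e.g. a "name:" line giving the short
    # suite name people would type on the command line.
    import os

    MARKER = ".test_suite"    # hypothetical marker file name

    def discover_suites(gate_root):
        """Return a dict mapping suite name -> list of test dirs."""
        suites = {}
        for dirpath, dirnames, filenames in os.walk(gate_root):
            if MARKER not in filenames:
                continue
            info = {}
            with open(os.path.join(dirpath, MARKER)) as marker:
                for line in marker:
                    line = line.strip()
                    if not line or line.startswith("#"):
                        continue
                    key, _, value = line.partition(":")
                    info[key.strip()] = value.strip()
            # Fall back to the directory name if no "name:" was given.
            name = info.get("name", os.path.basename(dirpath))
            suites.setdefault(name, []).append(dirpath)
        return suites

Something like that would let TEST_MAP be built at run time instead of being hand-maintained, but as noted above it only pays off if we first agree on where test directories live and what goes in the marker file.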

Thanks for looking, Karen!

-Drew



Thanks,

--Karen

On 08/11/11 08:53, Drew Fisher wrote:
Good morning!

I was hoping to get a quick code review for

6987307 <http://monaco.us.oracle.com/detail.jsf?cr=6987307> Update slim_test to allow better granularity of test selection

https://cr.opensolaris.org/action/browse/caiman/drewfish/6987307/webrev/

This impacts none of our packaged code, so I figured it was safe to send out even though we're in a restricted build phase.

I added a new test script which allows two new things:
- the ability to specify subsets of tests to run rather than running every single test in the gate
- regression testing against prior Hudson results.

The regression testing isn't the smartest algorithm, so take its results with a grain of salt. By default, the new script looks at the latest Hudson install_unit_tests job and compares against that. If somebody pushes something that breaks 50 tests, that run will be used as the baseline. I added a flag to the script which allows you to specify which Hudson job you want to compare against should something like that happen.
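
Conceptually the comparison is just a diff of failing test names between the current run and the chosen Hudson baseline. A minimal sketch of that idea (the function and the sets it takes are made up here; how the real script pulls the failure lists out of nose/Hudson output is not shown):

    # Sketch only: given the set of test names that failed in the
    # baseline Hudson job and the set that failed in the current run,
    # report regressions (new failures) and newly-fixed tests.
    def compare_results(baseline_failures, current_failures):
        regressions = sorted(current_failures - baseline_failures)
        fixed = sorted(baseline_failures - current_failures)
        return regressions, fixed

    # If the baseline job already had 50 broken tests, none of those
    # show up as regressions -- which is why being able to pick the
    # baseline job (the --hudson flag) matters.
    regressions, fixed = compare_results(
        baseline_failures={"test_a", "test_b"},
        current_failures={"test_b", "test_c"})
    # regressions == ["test_c"], fixed == ["test_a"]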

The change to tests.nose was to re-include the /lib/install_ict/test directory. It went missing at some point ...

Thanks!

-Drew


_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
