Martin,

Here are some examples.
(Note: I introduced artificial failures into test_cpio.py to exercise the regression reporting; a sketch of the kind of failure injected is below.)
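For illustration only, this is roughly what an injected failure might look like, assuming a unittest-style test class as in test_cpio.py (the method body here is mine, not the real test's):

    # Hypothetical sketch: a deliberately failing test, so that
    # test_software_type shows up as a "new regression" in the report.
    # The real test_cpio.py body is different; this is illustrative.
    import unittest

    class TestCPIOFunctions(unittest.TestCase):

        def test_software_type(self):
            # Force a failure for regression-report testing.
            self.fail("artificial failure injected for testing")

    if __name__ == "__main__":
        unittest.main()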

Examples:

[mox:tests] > sudo ./slim_regression_test.py transfer ai
#600 Test uninstall of excluded directories succeeds ... ok
#601 Test that an error is raised for invalid dir_excl_list file ... ok
#602 Test transfer of directories from a list succeeds ... ok
#603 Test that an error is raised for invalid dir_list file ... ok
#604 Test that an error is raised when dst is not specified ... ok
#605 Test transfer of directories succeeds ... ok
<snip>
----------------------------------------------------------------------
Ran 229 tests in 23.715s

FAILED (errors=3, failures=2)

New Regressions:
test_software_type (test_cpio.TestCPIOFunctions)


Tests Now Passing:
Test Success In Calc of Swap and Dump Size
Test installation with a manifest that fails to parse.


#######
(silent version)

[mox:tests] > sudo ./slim_regression_test.py --suppress-results transfer ai
New Regressions:
test_software_type (test_cpio.TestCPIOFunctions)


Tests Now Passing:
Test Success In Calc of Swap and Dump Size
Test installation with a manifest that fails to parse.

#######
(comparing against a different Hudson job)

[mox:tests] > sudo ./slim_regression_test.py --suppress-results --hudson 525 ai

Tests Now Passing:
------------------
Test CRO Success if 2 disks, whole_disk=True & root pool
Test Success if boot_disk target in manifest
Test Success If 2 Disks w/Whole-Disk = True & Root Vdev Specified
Test Success If Have 3 Disks in a Data Pool with RAIDZ2
Test Success If Have 2 Disks in the Root Pool and 2 Disks spare
Test Success If Root Pool with BE Datasets Specified
Test Success If Pool and Datasets Options Specified
Test Success if 2 disks, whole_disk=True & root pool
Test Success If Have 2 Disks in a Data Pool and 2 Disks log-mirror
Test Success If 2 Disks, Mixed Whole-Disk Values & Root Pool
Test Success If Have 1 Disk in the Root Pool, Data Pool with 2 Disks
Test Success If Have 2 Disks in a Root Pool with hot-spare and cache
Test CRO Success if dev_chassis attribute specified
Test CRO Success if 1 whole & 1 partitioned disk, no logicals
Test Success If 2 Disks, Mixed Whole-Disk Values & Root w/Vdev
Test installation with a manifest that fails to parse.
Test Success if 1 whole & 1 partitioned disk, no logicals
Test Success If Have 2 Disks in a Data Pool with hot-spare and log
Test CRO Success if boot_disk target in manifest
Test Success If Root Pool Non-BE Datasets Specified
Test Success If Root Pool With BE Specified



-Drew

On 8/11/11 6:32 PM, Martin Widjaja wrote:
Thanks, Drew. This is really awesome! Finally we can get some kind of comparison going; it makes our lives a lot easier.

Question: Do you have some sample output from the test run? How are the diffs/successes represented?

Thanks,
Martin

On 8/11/2011 3:06 PM, Drew Fisher wrote:
Round 2:

https://cr.opensolaris.org/action/browse/caiman/drewfish/6987307_2/webrev/

I've incorporated Karen's request to make the test_map dynamic: we now use os.walk() to traverse the source gate and find /test directories. I've also incorporated John's fixes.
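For the curious, a minimal sketch of what that discovery might look like (SRC_ROOT, find_test_dirs, and the alias scheme are my placeholders, not the script's actual code):

    # Hypothetical sketch of dynamic test-map discovery via os.walk().
    # The actual script's names and mapping layout may differ.
    import os

    SRC_ROOT = "usr/src"  # assumed root of the source gate

    def find_test_dirs(root=SRC_ROOT):
        """Map each test directory found under root, keyed by relative path."""
        test_map = {}
        for dirpath, dirnames, filenames in os.walk(root):
            if os.path.basename(dirpath) == "test":
                # Key by the path relative to the gate root,
                # e.g. "lib/install_boot/test".
                test_map[os.path.relpath(dirpath, root)] = dirpath
        return test_map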

Here's the output from --help:


[mox:tests] > sudo ./slim_regression_test.py --help
Usage:  slim_regression_test.py [options] [test[,test]...]
Available tests:

              group tests
                      all:  libraries, commands
                libraries:  install_target, install_utils, install_doc, install_logging_pymod, netif, liberrsvc, libict_pymod, install_ict, terminalui, liberrsvc_pymod, install_boot, install_engine, install_manifest_input, install_logging, install_common, install_manifest, install_transfer, libaimdns
                 commands:  distro_const/checkpoints, js2ai/modules, ai-webserver, system-config, system-config/profile, auto-install/checkpoints, auto-install, installadm, text-install

         individual tests
             ai-webserver:  cmd/ai-webserver/test
             auto-install:  cmd/auto-install/test
 auto-install/checkpoints:  cmd/auto-install/checkpoints/test
 distro_const/checkpoints:  cmd/distro_const/checkpoints/test
             install_boot:  lib/install_boot/test
           install_common:  lib/install_common/test
              install_doc:  lib/install_doc/test
           install_engine:  lib/install_engine/test
              install_ict:  lib/install_ict/test
          install_logging:  lib/install_logging/test
    install_logging_pymod:  lib/install_logging_pymod/test
         install_manifest:  lib/install_manifest/test
   install_manifest_input:  lib/install_manifest_input/test
           install_target:  lib/install_target/test
         install_transfer:  lib/install_transfer/test
            install_utils:  lib/install_utils/test
               installadm:  cmd/installadm/test
            js2ai/modules:  cmd/js2ai/modules/test
                libaimdns:  lib/libaimdns/test
                liberrsvc:  lib/liberrsvc/test
          liberrsvc_pymod:  lib/liberrsvc_pymod/test
             libict_pymod:  lib/libict_pymod/test
                    netif:  lib/netif/test
            system-config:  cmd/system-config/test
    system-config/profile:  cmd/system-config/profile/test
               terminalui:  lib/terminalui/test
             text-install:  cmd/text-install/test


Options:
  -h, --help            show this help message and exit
  -c CONFIG, --config=CONFIG
                        nose configuration file to use
  --suppress-results    suppress the printing of the results
  --hudson=HUDSON_NUM   hudson job number to use as a baseline


Some of the aliases are longer, but that can't be helped.


-Drew




On 8/11/11 9:53 AM, Drew Fisher wrote:
Good morning!

I was hoping to get a quick code review for

6987307 <http://monaco.us.oracle.com/detail.jsf?cr=6987307> Update slim_test to allow better granularity of test selection

https://cr.opensolaris.org/action/browse/caiman/drewfish/6987307/webrev/

This impacts none of our packaged code, so I figured it was safe to send out even though we're in a restricted build phase.

I added a new test script which allows two new things:
- the ability to specify subsets of tests to run rather than running every single test in the gate
- regression testing against prior Hudson results.

The regression-testing algorithm isn't particularly smart, so take its results with a grain of salt. By default, the new script compares against the latest Hudson install_unit_tests job. If somebody pushes something that breaks 50 tests, that run becomes the baseline, so I added a flag that lets you specify which Hudson job to compare against should that happen. A sketch of the comparison is below.
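Conceptually, the comparison is just a set difference between the baseline's failures and the current run's failures. A minimal sketch under that assumption (the function and variable names are mine; the real script builds these sets from Hudson job results):

    # Hypothetical sketch of the baseline comparison. In reality the
    # failure sets would be parsed from Hudson's recorded results.
    def compare_runs(baseline_failures, current_failures):
        """Return (new regressions, tests now passing) vs. the baseline."""
        new_regressions = current_failures - baseline_failures
        now_passing = baseline_failures - current_failures
        return new_regressions, now_passing

    # Example: the baseline failed {a, b}; the current run failed {b, c}.
    new, fixed = compare_runs({"a", "b"}, {"b", "c"})
    # new == {"c"} (new regression); fixed == {"a"} (now passing)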

The change to tests.nose was to re-include the /lib/install_ict/test directory. It went missing at some point ...

Thanks!

-Drew


_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss

