Re: [cmake-developers] [CMake] Setup/tear down steps for CTest
I've implemented the change to not run regular tests if they require a fixture with a setup test that fails. The merge request has been reopened and updated. Hopefully this can go through before the feature freeze for 3.7. ;)

On Mon, Sep 12, 2016 at 11:09 PM, Brad King wrote:
> On 09/10/2016 11:34 AM, Craig Scott wrote:
> > have a crack at adding the required functionality to my fixtures branch
> > so that regular tests are skipped if setup tests fail
> [snip]
> > limit the new functionality just to fixtures where the required
> > behaviour is well defined.
>
> Sounds good to me. Please re-open and extend the associated MR when ready.
>
> Thanks,
> -Brad

--
Craig Scott
Melbourne, Australia
https://crascit.com

--
Powered by www.kitware.com

Please keep messages on-topic and check the CMake FAQ at: http://www.cmake.org/Wiki/CMake_FAQ

Kitware offers various services to support the CMake community. For more information on each offering, please visit:
CMake Support: http://cmake.org/cmake/help/support.html
CMake Consulting: http://cmake.org/cmake/help/consulting.html
CMake Training Courses: http://cmake.org/cmake/help/training.html

Visit other Kitware open-source projects at http://www.kitware.com/opensource/opensource.html

Follow this link to subscribe/unsubscribe: http://public.kitware.com/mailman/listinfo/cmake-developers
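For readers tracking the outcome: the fixture properties discussed in this thread shipped in CMake 3.7 as FIXTURES_SETUP, FIXTURES_CLEANUP and FIXTURES_REQUIRED. A minimal sketch of how they fit together (test and command names here are hypothetical):

```cmake
# Hypothetical tests illustrating the fixture properties added in CMake 3.7.
add_test(NAME db_setup COMMAND start_test_database)
set_tests_properties(db_setup PROPERTIES FIXTURES_SETUP db)

add_test(NAME db_cleanup COMMAND stop_test_database)
set_tests_properties(db_cleanup PROPERTIES FIXTURES_CLEANUP db)

add_test(NAME db_query COMMAND run_query_tests)
set_tests_properties(db_query PROPERTIES FIXTURES_REQUIRED db)
```

With the change described in this message, a failure of db_setup causes db_query to be skipped rather than run against a broken environment, while db_cleanup still runs.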
Re: [cmake-developers] [CMake] Setup/tear down steps for CTest
On 09/10/2016 11:34 AM, Craig Scott wrote:
> have a crack at adding the required functionality to my fixtures branch
> so that regular tests are skipped if setup tests fail
[snip]
> limit the new functionality just to fixtures where the required
> behaviour is well defined.

Sounds good to me. Please re-open and extend the associated MR when ready.

Thanks,
-Brad
Re: [cmake-developers] [CMake] Setup/tear down steps for CTest
On Fri, Sep 9, 2016 at 5:14 AM, Daniel Pfeifer wrote:
> On Thu, Sep 8, 2016 at 5:52 PM, Brad King wrote:
> > I think if we introduce the notion of tests requiring other tests
> > then a new model of test selection and enablement needs to be
> > designed. Some kind of test DAG could be defined with various
> > roots and subgraphs being selectable and causing all reachable
> > tests to be included.
>
> This could be expanded even further. If "tests requiring other tests"
> is generalized to "tests requiring X", wouldn't this allow incremental
> testing? Say you change one file in your project. You rebuild only
> the parts of the project that are affected by this change. Then you
> rerun only the tests that are affected by the change. This really has
> to be carefully thought out.

Interesting idea, but yes, potentially a big body of work. That said, the fixtures implementation up for review already gets you most of the way there, I think. One could potentially define a fixture that covers the tests for the functionality a developer is working on. A test is able to require multiple fixtures, so developers could define fixtures to whatever granularity they wanted. We could add option(s) to the ctest command line which allow selection of tests based on fixture names, in the same way we currently do with labels and test names. This would allow developers to run just the set of tests related to a particular piece of functionality.

One could argue that labels already get you most of the way there now too, but since fixtures would come with clean handling of setup/cleanup dependencies, they may be more useful to developers. They also feel a bit closer to the implementation than what labels are likely to cover, so fixtures might be a better fit for the scenario you mentioned. I'd also be skeptical whether we could make it easy/convenient/robust for CMake to work out which tests to re-run based on which files had to be rebuilt.
Asking developers to pick which fixture(s) to retest instead might be a good compromise, given the simplicity of both use and implementation. Even aside from whether the above would satisfy the proposed scenario, maybe it's worth considering adding test selection based on fixture names regardless. I think this should probably be a separate branch that follows on from the current one, though, so it doesn't turn into a monster of a merge. ;) I'd be happy to look at that idea once the current branch is done and in, since I have a concrete use case driving me to get that one completed first.

--
Craig Scott
Melbourne, Australia
http://crascit.com
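The label-based selection used as the analogy above already exists in CTest; a fixture-name variant would presumably mirror it. A sketch of the current label mechanism (test and label names are hypothetical):

```cmake
# Existing label-based selection that fixture-based selection would mirror.
# Test and label names are hypothetical.
add_test(NAME parser_roundtrip COMMAND test_parser)
set_tests_properties(parser_roundtrip PROPERTIES LABELS "parser")

# Run only the labelled subset:
#   ctest -L parser      # select tests whose labels match the regex "parser"
#   ctest -R roundtrip   # select tests whose names match the regex "roundtrip"
```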
Re: [cmake-developers] [CMake] Setup/tear down steps for CTest
On Thu, Sep 8, 2016 at 11:52 PM, Brad King wrote:
> On 09/08/2016 10:15 AM, Craig Scott wrote:
> > adding a DEPENDS_ON_SUCCESS test property or something similar
> > which would implement the perhaps more intuitive behaviour of not
> > running dependent tests when a dependee fails. If that was done,
> > then implementing the "don't run fixture tests if any fixture
> > setup fails" logic would be trivial.
>
> The semantics of this will have to be carefully thought out, in
> particular with respect to enabling test dependencies. Right now
> ctest arguments like -E can exclude tests. What if those are
> dependencies of included tests?
>
> I think if we introduce the notion of tests requiring other tests
> then a new model of test selection and enablement needs to be
> designed. Some kind of test DAG could be defined with various
> roots and subgraphs being selectable and causing all reachable
> tests to be included.

While I can see potential merit, I'd be reluctant to go so far as adding the complexity of a test DAG. One of the attractive things about the current functionality is its simplicity. It's relatively easy for new developers to learn how to use it, and I'm keen to preserve that as much as possible, since it helps with adoption of CTest and CMake in general.

Indeed, in the more general case, if we added a DEPENDS_ON_SUCCESS test property then we would have to work through how to handle situations where dependee tests were not in the initial set of tests to be executed. I think this might be less clear cut than I initially thought, so I'm tending to back away from this as a general feature now. For the case of test fixtures, though, the semantics are very clear and would be well defined, since one of the primary purposes of fixtures is to bring in setup and cleanup tests which might not have been part of the initial test set.
If no-one objects, I'll have a crack at adding the required functionality to my fixtures branch so that regular tests are skipped if setup tests fail (i.e. as per the latest set of requirements proposed on this list a few days ago), and I'll do so without adding a new DEPENDS_ON_SUCCESS test property. That will limit the new functionality just to fixtures, where the required behaviour is well defined.

--
Craig Scott
Melbourne, Australia
http://crascit.com
Re: [cmake-developers] [CMake] Setup/tear down steps for CTest
On Thu, Sep 8, 2016 at 5:52 PM, Brad King wrote:
> On 09/08/2016 10:15 AM, Craig Scott wrote:
>> the current behaviour of DEPENDS. At the moment, if test B depends
>> on test A, test B still executes if test A fails.
>> It is unexpected because I'd initially have thought of DEPENDS as
>> meaning I can't run test B if test A fails, after all, B depends
>> on A which I'd interpret to mean if A fails, then something B requires
>> isn't working. Conversely, this is also useful because until now,
>> DEPENDS was the only way to get cleanup functionality to run after
>> other tests, and if those other tests fail, we still want the
>> cleanup to occur.
>
> At one time we only had serial testing, so the order of tests was
> fully controllable and based on the order of addition. There were
> never any conditions for whether a test would run based on results
> of other tests. Then when parallel testing was added we needed a
> way to restore *order* dependencies, so DEPENDS was added just
> for that. Maybe a better name would have been RUN_AFTER.
>
>> adding a DEPENDS_ON_SUCCESS test property or something similar
>> which would implement the perhaps more intuitive behaviour of not
>> running dependent tests when a dependee fails. If that was done,
>> then implementing the "don't run fixture tests if any fixture
>> setup fails" logic would be trivial.
>
> The semantics of this will have to be carefully thought out, in
> particular with respect to enabling test dependencies. Right now
> ctest arguments like -E can exclude tests. What if those are
> dependencies of included tests?
>
> I think if we introduce the notion of tests requiring other tests
> then a new model of test selection and enablement needs to be
> designed. Some kind of test DAG could be defined with various
> roots and subgraphs being selectable and causing all reachable
> tests to be included.

This could be expanded even further.
If "tests requiring other tests" is generalized to "tests requiring X", wouldn't this allow incremental testing? Say you change one file in your project. You rebuild only the parts of the project that are affected by this change. Then you rerun only the tests that are affected by the change. This really has to be carefully thought out.

Cheers,
Daniel
Re: [cmake-developers] [CMake] Setup/tear down steps for CTest
On 09/08/2016 10:15 AM, Craig Scott wrote:
> the current behaviour of DEPENDS. At the moment, if test B depends
> on test A, test B still executes if test A fails.
> It is unexpected because I'd initially have thought of DEPENDS as
> meaning I can't run test B if test A fails, after all, B depends
> on A which I'd interpret to mean if A fails, then something B requires
> isn't working. Conversely, this is also useful because until now,
> DEPENDS was the only way to get cleanup functionality to run after
> other tests, and if those other tests fail, we still want the
> cleanup to occur.

At one time we only had serial testing, so the order of tests was fully controllable and based on the order of addition. There were never any conditions for whether a test would run based on the results of other tests. Then when parallel testing was added we needed a way to restore *order* dependencies, so DEPENDS was added just for that. Maybe a better name would have been RUN_AFTER.

> adding a DEPENDS_ON_SUCCESS test property or something similar
> which would implement the perhaps more intuitive behaviour of not
> running dependent tests when a dependee fails. If that was done,
> then implementing the "don't run fixture tests if any fixture
> setup fails" logic would be trivial.

The semantics of this will have to be carefully thought out, in particular with respect to enabling test dependencies. Right now ctest arguments like -E can exclude tests. What if those are dependencies of included tests?

I think if we introduce the notion of tests requiring other tests then a new model of test selection and enablement needs to be designed. Some kind of test DAG could be defined with various roots and subgraphs being selectable and causing all reachable tests to be included.

-Brad
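Brad's point that DEPENDS is purely an ordering constraint (hence the suggested name RUN_AFTER) can be sketched as follows, with hypothetical test names:

```cmake
# DEPENDS only constrains scheduling order; it does not gate execution
# on success. Test names are hypothetical.
add_test(NAME testA COMMAND run_part_a)
add_test(NAME testB COMMAND run_part_b)

# With "ctest -j N", testB will not start until testA has finished,
# but testB still runs even if testA fails.
set_tests_properties(testB PROPERTIES DEPENDS testA)
```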
Re: [cmake-developers] [CMake] Setup/tear down steps for CTest
I should also point out that another reason for not implementing the "skip tests if the setup fails" logic relates to the current behaviour of DEPENDS. At the moment, if test B depends on test A, test B still executes even if test A fails. This is both useful and unexpected at the same time. It is unexpected because I'd initially have thought of DEPENDS as meaning I can't run test B if test A fails; after all, B depends on A, which I'd interpret to mean that if A fails, then something B requires isn't working. Conversely, this is also useful because until now, DEPENDS was the only way to get cleanup functionality to run after other tests, and if those other tests fail, we still want the cleanup to occur.

The current behaviour of DEPENDS can't change because there would be too much out in the wild relying on it. I'm wondering if there's merit in adding a DEPENDS_ON_SUCCESS test property or something similar which would implement the perhaps more intuitive behaviour of not running dependent tests when a dependee fails. If that was done, then implementing the "don't run fixture tests if any fixture setup fails" logic would be trivial.

On Thu, Sep 8, 2016 at 6:08 PM, Craig Scott wrote:
> Merge request implementing this feature is now up for review here:
>
> https://gitlab.kitware.com/cmake/cmake/merge_requests/88
>
> I ended up going with FIXTURE_... test property names rather than
> GROUP_... since it seemed more specific. I have not implemented the logic
> for skipping regular tests if any of a fixture's setup tests fail, as that
> would require more change than I wanted to bite off for this initial
> implementation. If it is really required, I guess it could be done, but my
> primary concern first is not to introduce new bugs.
;)
>
> On Thu, Sep 1, 2016 at 9:17 AM, Craig Scott wrote:
>
>> Actually, we can't really re-use the RESOURCE_LOCK for the proposed
>> RESOURCE_SETUP and RESOURCE_CLEANUP functionality since that would force
>> all the tests using that resource to be serialised. So yes, a separate
>> GROUP or something similar would seem to be needed. Let me amend my
>> earlier proposal (which is an evolution of Ben's) to something like this:
>>
>> add_test(NAME setup-foo ...)
>> set_tests_properties(setup-foo PROPERTIES GROUP_SETUP foo)
>>
>> add_test(NAME cleanup-foo ...)
>> set_tests_properties(cleanup-foo PROPERTIES GROUP_CLEANUP foo)
>>
>> add_test(NAME use-foo ...)
>> set_tests_properties(use-foo PROPERTIES GROUP foo)
>>
>> The logic would be as follows:
>>
>> - Any test cases with a GROUP_SETUP property for a group will be run
>>   before any test cases with GROUP or GROUP_CLEANUP for that same group.
>>   The order of these setup test cases can be controlled with the existing
>>   DEPENDS test property.
>> - If any of the group's setup test cases fail, all other test cases for
>>   that group will be skipped. All cleanup test cases for the group
>>   probably should still be run though (it could be hard to work out which
>>   cleanup tests should run, so maybe conservatively just run all of them).
>> - If all setup test cases passed, then run all test cases for that group.
>>   Regardless of the success or failure of these test cases, once they are
>>   all completed, run all the cleanup test cases associated with the group.
>> - Ordering of cleanup test cases can again be controlled with the
>>   existing DEPENDS test property.
>>
>> What the above buys us is that CTest then knows definitively that if it
>> is asked to run a test case from a particular group, it must also run the
>> setup and cleanup test cases associated with that group, regardless of
>> whether those setup/cleanup test cases are in the set of test cases CTest
>> was originally asked to run. At the moment, CTest could theoretically do
>> that for the setup steps based on DEPENDS functionality, but not the
>> cleanup. The above proposal is very clear about the nature of the
>> dependency and gives the symmetry of both setup and cleanup behaviour.
>>
>> I'm not tied to the terminology of "GROUP" for tying a set of test cases
>> to their setup/cleanup tasks, so I'm happy to consider alternatives. I'm
>> also wondering whether simply GROUP as a test property is too generic for
>> the test cases that require the setup/cleanup (as opposed to the test
>> cases that ARE the setup/cleanup).
>>
>> On Thu, Sep 1, 2016 at 10:50 AM, Craig Scott wrote:
>>
>>> In my original thinking, I was of the view that if a setup/cleanup step
>>> needed to be executed for each test rather than for the overall test run
>>> as a whole, then perhaps the test itself should handle that rather than
>>> CMake. The existing RESOURCE_LOCK functionality could then be used to
>>> prevent multiple tests from running concurrently if they would interfere
>>> with each other. Existing test frameworks like GoogleTest and Boost Test
>>> already have good support for test fixtures which ma
Re: [cmake-developers] [CMake] Setup/tear down steps for CTest
Merge request implementing this feature is now up for review here:

https://gitlab.kitware.com/cmake/cmake/merge_requests/88

I ended up going with FIXTURE_... test property names rather than GROUP_... since it seemed more specific. I have not implemented the logic for skipping regular tests if any of a fixture's setup tests fail, as that would require more change than I wanted to bite off for this initial implementation. If it is really required, I guess it could be done, but my primary concern first is not to introduce new bugs. ;)

On Thu, Sep 1, 2016 at 9:17 AM, Craig Scott wrote:

> Actually, we can't really re-use the RESOURCE_LOCK for the proposed
> RESOURCE_SETUP and RESOURCE_CLEANUP functionality since that would force
> all the tests using that resource to be serialised. So yes, a separate
> GROUP or something similar would seem to be needed. Let me amend my
> earlier proposal (which is an evolution of Ben's) to something like this:
>
> add_test(NAME setup-foo ...)
> set_tests_properties(setup-foo PROPERTIES GROUP_SETUP foo)
>
> add_test(NAME cleanup-foo ...)
> set_tests_properties(cleanup-foo PROPERTIES GROUP_CLEANUP foo)
>
> add_test(NAME use-foo ...)
> set_tests_properties(use-foo PROPERTIES GROUP foo)
>
> The logic would be as follows:
>
> - Any test cases with a GROUP_SETUP property for a group will be run
>   before any test cases with GROUP or GROUP_CLEANUP for that same group.
>   The order of these setup test cases can be controlled with the existing
>   DEPENDS test property.
> - If any of the group's setup test cases fail, all other test cases for
>   that group will be skipped. All cleanup test cases for the group
>   probably should still be run though (it could be hard to work out which
>   cleanup tests should run, so maybe conservatively just run all of them).
> - If all setup test cases passed, then run all test cases for that group.
>   Regardless of the success or failure of these test cases, once they are
>   all completed, run all the cleanup test cases associated with the group.
> - Ordering of cleanup test cases can again be controlled with the
>   existing DEPENDS test property.
>
> What the above buys us is that CTest then knows definitively that if it
> is asked to run a test case from a particular group, it must also run the
> setup and cleanup test cases associated with that group, regardless of
> whether those setup/cleanup test cases are in the set of test cases CTest
> was originally asked to run. At the moment, CTest could theoretically do
> that for the setup steps based on DEPENDS functionality, but not the
> cleanup. The above proposal is very clear about the nature of the
> dependency and gives the symmetry of both setup and cleanup behaviour.
>
> I'm not tied to the terminology of "GROUP" for tying a set of test cases
> to their setup/cleanup tasks, so I'm happy to consider alternatives. I'm
> also wondering whether simply GROUP as a test property is too generic for
> the test cases that require the setup/cleanup (as opposed to the test
> cases that ARE the setup/cleanup).
>
> On Thu, Sep 1, 2016 at 10:50 AM, Craig Scott wrote:
>
>> In my original thinking, I was of the view that if a setup/cleanup step
>> needed to be executed for each test rather than for the overall test run
>> as a whole, then perhaps the test itself should handle that rather than
>> CMake. The existing RESOURCE_LOCK functionality could then be used to
>> prevent multiple tests from running concurrently if they would interfere
>> with each other. Existing test frameworks like GoogleTest and Boost Test
>> already have good support for test fixtures which make doing this
>> per-test setup/cleanup easy. The problem I want to solve is where a group
>> of tests share a common (set of) setup/cleanup steps and CMake knows to
>> run them when asked to run any test cases that require them. The specific
>> problem motivating this work was running ctest --rerun-failed, where we
>> need CMake to add in any setup/cleanup steps required by any of the tests
>> that will be rerun. With that in mind, see further comments interspersed
>> below.
>>
>> On Fri, Aug 26, 2016 at 12:08 AM, Ben Boeckel wrote:
>>
>>> On Tue, Aug 23, 2016 at 08:00:09 +0200, Rolf Eike Beer wrote:
>>> > On Tuesday, 23 August 2016 at 10:06:01, Craig Scott wrote:
>>> > > So how would you want the feature to work? I'd suggest an initial
>>> > > set of requirements something like the following:
>>> > >
>>> > > - Need to support the ability to define multiple setup and/or tear
>>> > >   down tasks.
>>> > > - It should be possible to specify dependencies between setup tasks
>>> > >   and between tear down tasks.
>>> > > - Individual tests need to be able to indicate which setup and/or
>>> > >   tear down tasks they require, similar to the way DEPENDS is used
>>> > >   to specify dependencies between test cases.
>>> > > - When using ctest --rerun-failed,
Re: [cmake-developers] [CMake] Setup/tear down steps for CTest
Actually, we can't really re-use the RESOURCE_LOCK for the proposed RESOURCE_SETUP and RESOURCE_CLEANUP functionality, since that would force all the tests using that resource to be serialised. So yes, a separate GROUP or something similar would seem to be needed. Let me amend my earlier proposal (which is an evolution of Ben's) to something like this:

add_test(NAME setup-foo ...)
set_tests_properties(setup-foo PROPERTIES GROUP_SETUP foo)

add_test(NAME cleanup-foo ...)
set_tests_properties(cleanup-foo PROPERTIES GROUP_CLEANUP foo)

add_test(NAME use-foo ...)
set_tests_properties(use-foo PROPERTIES GROUP foo)

The logic would be as follows:

- Any test cases with a GROUP_SETUP property for a group will be run before
  any test cases with GROUP or GROUP_CLEANUP for that same group. The order
  of these setup test cases can be controlled with the existing DEPENDS test
  property.
- If any of the group's setup test cases fail, all other test cases for that
  group will be skipped. All cleanup test cases for the group probably should
  still be run though (it could be hard to work out which cleanup tests
  should run, so maybe conservatively just run all of them).
- If all setup test cases passed, then run all test cases for that group.
  Regardless of the success or failure of these test cases, once they are all
  completed, run all the cleanup test cases associated with the group.
- Ordering of cleanup test cases can again be controlled with the existing
  DEPENDS test property.

What the above buys us is that CTest then knows definitively that if it is asked to run a test case from a particular group, it must also run the setup and cleanup test cases associated with that group, regardless of whether those setup/cleanup test cases are in the set of test cases CTest was originally asked to run. At the moment, CTest could theoretically do that for the setup steps based on DEPENDS functionality, but not the cleanup. The above proposal is very clear about the nature of the dependency and gives the symmetry of both setup and cleanup behaviour.

I'm not tied to the terminology of "GROUP" for tying a set of test cases to their setup/cleanup tasks, so I'm happy to consider alternatives. I'm also wondering whether simply GROUP as a test property is too generic for the test cases that require the setup/cleanup (as opposed to the test cases that ARE the setup/cleanup).

On Thu, Sep 1, 2016 at 10:50 AM, Craig Scott wrote:

> In my original thinking, I was of the view that if a setup/cleanup step
> needed to be executed for each test rather than for the overall test run
> as a whole, then perhaps the test itself should handle that rather than
> CMake. The existing RESOURCE_LOCK functionality could then be used to
> prevent multiple tests from running concurrently if they would interfere
> with each other. Existing test frameworks like GoogleTest and Boost Test
> already have good support for test fixtures which make doing this per-test
> setup/cleanup easy. The problem I want to solve is where a group of tests
> share a common (set of) setup/cleanup steps and CMake knows to run them
> when asked to run any test cases that require them. The specific problem
> motivating this work was running ctest --rerun-failed, where we need CMake
> to add in any setup/cleanup steps required by any of the tests that will
> be rerun. With that in mind, see further comments interspersed below.
>
> On Fri, Aug 26, 2016 at 12:08 AM, Ben Boeckel wrote:
>
>> On Tue, Aug 23, 2016 at 08:00:09 +0200, Rolf Eike Beer wrote:
>> > On Tuesday, 23 August 2016 at 10:06:01, Craig Scott wrote:
>> > > So how would you want the feature to work? I'd suggest an initial
>> > > set of requirements something like the following:
>> > >
>> > > - Need to support the ability to define multiple setup and/or tear
>> > >   down tasks.
>> > > - It should be possible to specify dependencies between setup tasks
>> > >   and between tear down tasks.
>> > > - Individual tests need to be able to indicate which setup and/or
>> > >   tear down tasks they require, similar to the way DEPENDS is used
>> > >   to specify dependencies between test cases.
>> > > - When using ctest --rerun-failed, ctest should automatically invoke
>> > >   any setup or tear down tasks required by the test cases that will
>> > >   be re-run.
>> > > - Setup or tear down tasks which reference executable targets should
>> > >   substitute the actual built executable, just like
>> > >   add_custom_command() does.
>> >
>> > - Need a way to mark if 2 tests with the same setup/teardown can share
>> >   those or if they need to run for each of them.
>>
>> Proposal:
>>
>> add_test(NAME setup-foo ...)
>> set_tests_properties(setup-foo PROPERTIES
>>   SETUP_GROUP foo
>>   SETUP_STEP SETUP_PER_TEST) # Also SETUP_ONCE.
>> add_test(NAME use-foo ...)
>> set_tests_properties(use-foo PROPERTIES
>>   SETUP_GROUP
Re: [cmake-developers] [CMake] Setup/tear down steps for CTest
On Tue, Aug 23, 2016 at 4:00 PM, Rolf Eike Beer wrote: > Am Dienstag, 23. August 2016, 10:06:01 schrieb Craig Scott: > > Cheeky way to get me more involved in contributing, but okay, I'll bite. > ;) > > Switching discussion to the dev list. > > > > So how would you want the feature to work? I'd suggest an initial set of > > requirements something like the following: > > > >- Need to support the ability to define multiple setup and/or tear > down > >tasks. > >- It should be possible to specify dependencies between setup tasks > and > >between tear down tasks. > >- Individual tests need to be able to indicate which setup and/or tear > >down tasks they require, similar to the way DEPENDS is used to specify > >dependencies between test cases. > >- When using ctest --rerun-failed, ctest should automatically invoke > any > >setup or tear down tasks required by the test cases that will be > re-run. > >- Setup or tear down tasks which reference executable targets should > >substitute the actual built executable just like how > add_custom_command() > > does. > > -need a way to mark if 2 tests with the same setup/teardown can share > those or > if they need to run for every of them > -the default for each test is "no s/t", which means it can't be run with > any > of the above in parallel (especially for compatibillity)[1] > -need a way to tell if a test doesn't care about those > > 1) think of a database connector test: the test that will check what > happens > if no DB is present will fail if the setup step "start DB" was run, but not > the teardown > So maybe that requires being able to specify that tests for resource XXX and resource YYY cannot be executed concurrently. Maybe that's a separate change that could be made independent of this proposed improvement, since it would apply even for existing CMake functionality. I see the value, I'm just trying to sort out what is really needed from what is nice-to-have but could be done as a subsequent improvement later. 
> > > Some open questions: > > > >- Should setup and tear down tasks be defined in pairs, or should they > >completely independent (this would still require the ability to > specify a > > dependency of a tear down task on a setup task)? > > The test could be "shutdown daemon" or "delete files" so I would keep them > separated. They may be created by the same command, so they could be > batched > anyway. > Agreed, it seems clear now that keeping them separate is preferable. > > >- Should the setup and tear down tasks be defined by a new CTest/CMake > >command or extend an existing mechanism (e.g. add_custom_command())? > > Don't clutter existing commands more than needed. If it's something new, > then > create a new command (they could still share C++ code). If it's basically > the > same as anything existing at the end then use that. > See my other email reply just now. I think re-using the existing commands and concepts and adding the RESOURCE_SETUP and RESOURCE_CLEANUP test properties might be the most seamless from an end user perspective. I might change my mind once I dig into the CMake source code though. ;) > > >- If no test case has a dependency on a setup or tear down task, > should > >that task be skipped? Perhaps tasks need to have a flag which > indicates > >whether they always run or only if a test case depends on it. > > Keep backward compatibility. I.e. if I now add a new test with s/t, then > every > other test should still run (and succeed) as before. > Definitely. Existing projects should receive zero impact from any changes made. New functionality should be opt-in. > > >- What terminology to use? Things like GoogleTest use terms like test > >*fixtures* for this sort of thing. The terms setup and tear down are a > >bit imprecise and cumbersome, so we would probably need something > better > >than those. > >- Would it make sense for the ctest command line to support disabling > >setup and/or tear down steps? 
> > > I can see some potential scenarios where this may be desirable, but maybe this is getting too ambitious for a starting set of requirements.
> >
> > IMHO that doesn't make sense. One could think about an option to disable the s/t merging, i.e. that they are really called alone for every test.

To reduce complexity, I'm gravitating that way too. If you define a setup/cleanup task, then why allow disabling it? If developers really want that, they could wrap the setup/cleanup definitions inside an if() block controlled by a CMake option or something similar.

> > > - What should happen if a setup or tear down task fails? How would failure be detected? How would such failures impact things like a CDash test report, etc.?
> >
> > Then the test fails, just the same as it now does when it can't find the executable.

Seems sensible.

--
Craig Scott
Melbourne, Australia
http://crascit.com
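The if()/option() suggestion above might look something like the sketch below. Note that RESOURCE_SETUP and RESOURCE_CLEANUP are only proposed property names at this point in the discussion, not existing CMake features, and the option name, test names, and commands are all hypothetical:

```cmake
# Hypothetical opt-out switch for the setup/cleanup definitions,
# instead of adding a new ctest command-line flag.
option(MYPROJ_USE_DB_FIXTURE "Run DB setup/cleanup around DB tests" ON)

if(MYPROJ_USE_DB_FIXTURE)
    # RESOURCE_SETUP / RESOURCE_CLEANUP are *proposed* properties only.
    add_test(NAME start_db COMMAND start_database)
    set_tests_properties(start_db PROPERTIES RESOURCE_SETUP "testdb")

    add_test(NAME stop_db COMMAND stop_database)
    set_tests_properties(stop_db PROPERTIES RESOURCE_CLEANUP "testdb")
endif()
```

With the option turned off, the setup/cleanup tests are simply never defined, so no special disabling support is needed in ctest itself.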
Re: [cmake-developers] [CMake] Setup/tear down steps for CTest
In my original thinking, I was of the view that if a setup/cleanup step needed to be executed for each test rather than for the overall test run as a whole, then perhaps the test itself should handle that rather than CMake. The existing RESOURCE_LOCK functionality could then be used to prevent multiple tests from running concurrently if they would interfere with each other. Existing test frameworks like GoogleTest and Boost.Test already have good support for test fixtures, which makes doing this per-test setup/cleanup easy. The problem I want to solve is where a group of tests share a common (set of) setup/cleanup steps and CMake knows to run them when asked to run any test cases that require them. The specific problem motivating this work was running ctest --rerun-failed, where we need CMake to add in any setup/cleanup steps required by any of the tests that will be rerun. With that in mind, see further comments interspersed below.

On Fri, Aug 26, 2016 at 12:08 AM, Ben Boeckel wrote:
> On Tue, Aug 23, 2016 at 08:00:09 +0200, Rolf Eike Beer wrote:
> > Am Dienstag, 23. August 2016, 10:06:01 schrieb Craig Scott:
> > > So how would you want the feature to work? I'd suggest an initial set of
> > > requirements something like the following:
> > >
> > > - Need to support the ability to define multiple setup and/or tear down tasks.
> > > - It should be possible to specify dependencies between setup tasks and between tear down tasks.
> > > - Individual tests need to be able to indicate which setup and/or tear down tasks they require, similar to the way DEPENDS is used to specify dependencies between test cases.
> > > - When using ctest --rerun-failed, ctest should automatically invoke any setup or tear down tasks required by the test cases that will be re-run.
> > > - Setup or tear down tasks which reference executable targets should substitute the actual built executable just like add_custom_command() does.
> >
> > -need a way to mark if 2 tests with the same setup/teardown can share those
> > or if they need to run for each of them
>
> Proposal:
>
>   add_test(NAME setup-foo ...)
>   set_tests_properties(setup-foo PROPERTIES
>     SETUP_GROUP foo
>     SETUP_STEP SETUP_PER_TEST) # Also SETUP_ONCE.
>   add_test(NAME use-foo ...)
>   set_tests_properties(use-foo PROPERTIES
>     SETUP_GROUP foo) # implicit depends on all SETUP_GROUP foo /
>                      # SETUP_STEP SETUP_* tests.
>   add_test(NAME use-foo2 ...)
>   set_tests_properties(use-foo2 PROPERTIES
>     SETUP_GROUP foo)
>   add_test(NAME teardown-foo2 ...)
>   set_tests_properties(teardown-foo2 PROPERTIES
>     SETUP_GROUP foo
>     SETUP_STEP TEARDOWN) # implicit depends on all non-TEARDOWN steps
>
> Multiple setup/teardown steps could be done with DEPENDS between them.

I like the idea of tests being associated with a group, where the group itself is what the setup/cleanup steps are attached to. That said, it would seem that RESOURCE_LOCK already more or less satisfies this concept. I'm wondering if we can't just somehow attach setup/cleanup steps to the named resource instead. That would be a more seamless evolution of the existing functionality and have little impact on any existing code. Basically all we'd need to do is add the ability to associate the setup/cleanup steps with a RESOURCE_LOCK label. It's still not clear to me whether the setup/cleanup tasks should be considered test cases themselves, but I can see benefits in taking that path.
It would mean all we'd need is to be able to mark a test case as "this is a setup/cleanup step for RESOURCE_LOCK label XXX", maybe something like this:

  set_tests_properties(setup-foo PROPERTIES RESOURCE_SETUP foo)
  set_tests_properties(teardown-foo PROPERTIES RESOURCE_CLEANUP foo)

If multiple setup/cleanup steps are defined for a particular resource, then dependencies between those test cases would determine their order, and where there are no dependencies the order would be undefined, as is already the case for test cases. For the initial implementation at least, I think something like the SETUP_PER_TEST concept is more complexity than I'd want to tackle. Maybe it could be supported later, but in the first instance I think once per group/resource is already a significant win and worth focusing on at the start (see my motivation at the top of this email).

> > -the default for each test is "no s/t", which means it can't be run with any
> > of the above in parallel (especially for compatibility)[1]
> > -need a way to tell if a test doesn't care about those
>
> Making RESOURCE_LOCK a rwlock rather than a mutex might make sense here.
> SETUP_STEP bits have a RESOURCE_LOCK_WRITE group_${group}, otherwise it
> is RESOURCE_LOCK_READ group_${group}.

Not sure I follow what problem this solves, and without a strong motivation I'd be reluctant to add this sort of complexity to the existing functionality.
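As a concrete illustration of the ordering point above, here is a hedged sketch using the proposed (not yet existing) RESOURCE_SETUP property together with the existing DEPENDS property; all test names and commands are hypothetical:

```cmake
# Sketch only: RESOURCE_SETUP is a *proposed* property in this thread,
# not an existing CMake feature. Test names and commands are made up.
add_test(NAME create_db_schema COMMAND create_schema)
add_test(NAME load_db_data COMMAND load_data)
set_tests_properties(create_db_schema load_db_data PROPERTIES
    RESOURCE_SETUP "testdb")

# Order the two setup steps with the existing DEPENDS property;
# without it, their relative order would be undefined.
set_tests_properties(load_db_data PROPERTIES DEPENDS create_db_schema)
```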
Re: [cmake-developers] [CMake] Setup/tear down steps for CTest
On Tue, Aug 23, 2016 at 08:00:09 +0200, Rolf Eike Beer wrote:
> Am Dienstag, 23. August 2016, 10:06:01 schrieb Craig Scott:
> > So how would you want the feature to work? I'd suggest an initial set of
> > requirements something like the following:
> >
> > - Need to support the ability to define multiple setup and/or tear down tasks.
> > - It should be possible to specify dependencies between setup tasks and between tear down tasks.
> > - Individual tests need to be able to indicate which setup and/or tear down tasks they require, similar to the way DEPENDS is used to specify dependencies between test cases.
> > - When using ctest --rerun-failed, ctest should automatically invoke any setup or tear down tasks required by the test cases that will be re-run.
> > - Setup or tear down tasks which reference executable targets should substitute the actual built executable just like add_custom_command() does.
>
> - need a way to mark if 2 tests with the same setup/teardown can share those
> or if they need to run for each of them

Proposal:

  add_test(NAME setup-foo ...)
  set_tests_properties(setup-foo PROPERTIES
    SETUP_GROUP foo
    SETUP_STEP SETUP_PER_TEST) # Also SETUP_ONCE.
  add_test(NAME use-foo ...)
  set_tests_properties(use-foo PROPERTIES
    SETUP_GROUP foo) # implicit depends on all SETUP_GROUP foo /
                     # SETUP_STEP SETUP_* tests.
  add_test(NAME use-foo2 ...)
  set_tests_properties(use-foo2 PROPERTIES
    SETUP_GROUP foo)
  add_test(NAME teardown-foo2 ...)
  set_tests_properties(teardown-foo2 PROPERTIES
    SETUP_GROUP foo
    SETUP_STEP TEARDOWN) # implicit depends on all non-TEARDOWN steps

Multiple setup/teardown steps could be done with DEPENDS between them.

> - the default for each test is "no s/t", which means it can't be run with any
> of the above in parallel (especially for compatibility)[1]
> - need a way to tell if a test doesn't care about those

Making RESOURCE_LOCK a rwlock rather than a mutex might make sense here.
SETUP_STEP bits have a RESOURCE_LOCK_WRITE group_${group}, otherwise it is RESOURCE_LOCK_READ group_${group}.

> 1) think of a database connector test: the test that will check what happens
> if no DB is present will fail if the setup step "start DB" was run, but not
> the teardown

RESOURCE_LOCK on that group_${group} can fix that, I think.

> > Some open questions:

I agree with what Eike said.

--Ben

-- Powered by www.kitware.com Please keep messages on-topic and check the CMake FAQ at: http://www.cmake.org/Wiki/CMake_FAQ Kitware offers various services to support the CMake community. For more information on each offering, please visit: CMake Support: http://cmake.org/cmake/help/support.html CMake Consulting: http://cmake.org/cmake/help/consulting.html CMake Training Courses: http://cmake.org/cmake/help/training.html Visit other Kitware open-source projects at http://www.kitware.com/opensource/opensource.html Follow this link to subscribe/unsubscribe: http://public.kitware.com/mailman/listinfo/cmake-developers
Re: [cmake-developers] [CMake] Setup/tear down steps for CTest
Am Dienstag, 23. August 2016, 10:06:01 schrieb Craig Scott:
> Cheeky way to get me more involved in contributing, but okay, I'll bite. ;)
> Switching discussion to the dev list.
>
> So how would you want the feature to work? I'd suggest an initial set of
> requirements something like the following:
>
> - Need to support the ability to define multiple setup and/or tear down tasks.
> - It should be possible to specify dependencies between setup tasks and between tear down tasks.
> - Individual tests need to be able to indicate which setup and/or tear down tasks they require, similar to the way DEPENDS is used to specify dependencies between test cases.
> - When using ctest --rerun-failed, ctest should automatically invoke any setup or tear down tasks required by the test cases that will be re-run.
> - Setup or tear down tasks which reference executable targets should substitute the actual built executable just like add_custom_command() does.

- need a way to mark if 2 tests with the same setup/teardown can share those or if they need to run for each of them
- the default for each test is "no s/t", which means it can't be run with any of the above in parallel (especially for compatibility)[1]
- need a way to tell if a test doesn't care about those

1) think of a database connector test: the test that will check what happens if no DB is present will fail if the setup step "start DB" was run, but not the teardown

> Some open questions:
>
> - Should setup and tear down tasks be defined in pairs, or should they be completely independent (this would still require the ability to specify a dependency of a tear down task on a setup task)?

The test could be "shutdown daemon" or "delete files" so I would keep them separated. They may be created by the same command, so they could be batched anyway.

> - Should the setup and tear down tasks be defined by a new CTest/CMake command or extend an existing mechanism (e.g. add_custom_command())?
Don't clutter existing commands more than needed. If it's something new, then create a new command (they could still share C++ code). If it's basically the same as anything existing at the end then use that.

> - If no test case has a dependency on a setup or tear down task, should that task be skipped? Perhaps tasks need to have a flag which indicates whether they always run or only if a test case depends on it.

Keep backward compatibility. I.e. if I now add a new test with s/t, then every other test should still run (and succeed) as before.

> - What terminology to use? Things like GoogleTest use terms like test *fixtures* for this sort of thing. The terms setup and tear down are a bit imprecise and cumbersome, so we would probably need something better than those.
> - Would it make sense for the ctest command line to support disabling setup and/or tear down steps? I can see some potential scenarios where this may be desirable, but maybe this is getting too ambitious for a starting set of requirements.

IMHO that doesn't make sense. One could think about an option to disable the s/t merging, i.e. that they are really called alone for every test.

> - What should happen if a setup or tear down task fails? How would failure be detected? How would such failures impact things like a CDash test report, etc.?

Then the test fails, just the same as it now does when it can't find the executable.

Eike
Re: [cmake-developers] [CMake] Setup/tear down steps for CTest
Cheeky way to get me more involved in contributing, but okay, I'll bite. ;) Switching discussion to the dev list.

So how would you want the feature to work? I'd suggest an initial set of requirements something like the following:

- Need to support the ability to define multiple setup and/or tear down tasks.
- It should be possible to specify dependencies between setup tasks and between tear down tasks.
- Individual tests need to be able to indicate which setup and/or tear down tasks they require, similar to the way DEPENDS is used to specify dependencies between test cases.
- When using ctest --rerun-failed, ctest should automatically invoke any setup or tear down tasks required by the test cases that will be re-run.
- Setup or tear down tasks which reference executable targets should substitute the actual built executable just like add_custom_command() does.

Some open questions:

- Should setup and tear down tasks be defined in pairs, or should they be completely independent (this would still require the ability to specify a dependency of a tear down task on a setup task)?
- Should the setup and tear down tasks be defined by a new CTest/CMake command or extend an existing mechanism (e.g. add_custom_command())?
- If no test case has a dependency on a setup or tear down task, should that task be skipped? Perhaps tasks need to have a flag which indicates whether they always run or only if a test case depends on it.
- What terminology to use? Things like GoogleTest use terms like test *fixtures* for this sort of thing. The terms setup and tear down are a bit imprecise and cumbersome, so we would probably need something better than those.
- Would it make sense for the ctest command line to support disabling setup and/or tear down steps? I can see some potential scenarios where this may be desirable, but maybe this is getting too ambitious for a starting set of requirements.
- What should happen if a setup or tear down task fails? How would failure be detected?
How would such failures impact things like a CDash test report, etc.?

I think that's probably enough to kick off discussions for now.

On Sun, Aug 21, 2016 at 11:41 PM, David Cole wrote:
> The best thing to do would be to add the feature to ctest, and contribute to the CMake community.
>
> I, too, use the "run this test first" and "that test last" technique, and set up DEPENDS property values to ensure ordering when all tests are run in parallel. However, as you noted, this does not work to run subsets of tests reliably. For me, I am able to live with the partial solution because the vast majority of my tests can be run independently anyhow and we usually do run all the tests, but a setup/teardown step for the whole suite would be a welcome addition to ctest.
>
> Looking forward to your patch... :-)
>
> David C.
>
> On Sat, Aug 20, 2016 at 8:32 PM, Craig Scott wrote:
> > Let's say a project defines a bunch of tests which require setup and tear down steps before/after all the tests are run (not each individual test; I'm talking here about one setup before all tests are run and one tear down after all tests have finished). While this could be done by a script driving CTest itself, it is less desirable since different platforms may need different driver scripts, and this seems like something CTest should be able to handle itself (if the setup/tear down steps use parts of the build, that only strengthens the case to have them handled by CMake/CTest directly).
> >
> > It is possible to abuse the DEPENDS test property and define setup and tear down "tests" which are not really tests but which perform the necessary steps. While this mostly works, it is not ideal and in particular it doesn't work with CTest's --rerun-failed option. I'm wondering if there's currently a better way of telling CMake/CTest about a setup step which must be run before some particular set of test cases and a tear down step after they are all done. The tear down step needs to be performed regardless of whether any of the real test cases pass or fail.
> >
> > The motivating case is to start up and shut down a service which a (subset of) test cases need running. That service is expensive to set up and hence it isn't feasible to start it up and shut it down for every test case individually.
> >
> > Any ideas?
> >
> > --
> > Craig Scott
> > Melbourne, Australia
> > http://crascit.com
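The DEPENDS workaround described in the quoted message can be sketched as follows, with hypothetical test names and commands. Its key weakness, as noted, is that ctest --rerun-failed reruns only the failed real tests and does not re-add the setup/teardown "tests":

```cmake
# Fake "tests" that really perform setup and teardown, ordered via DEPENDS.
add_test(NAME start_service COMMAND start_my_service)
add_test(NAME service_test COMMAND run_service_test)
add_test(NAME stop_service COMMAND stop_my_service)

# DEPENDS only constrains ordering when the listed tests are in the run;
# it does not force the dependencies to be scheduled at all.
set_tests_properties(service_test PROPERTIES DEPENDS start_service)
set_tests_properties(stop_service PROPERTIES DEPENDS service_test)
```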