Hi, I've been pushing some changes to the unit tests on the cmake-link-interface-libraries branch, which is merged into next.
The unit tests I wrote were failing in some cases, and my changes make the dashboard go green again, but they don't actually make CMake any better. That branch expects the LINK_INTERFACE_LIBRARIES feature of CMake to actually have an effect, which is not the case on some platforms for various reasons. Now I first test that the feature 'works', and only run the actual unit tests for my feature if it does. The problem is that if something breaks in the future and my feature stops working, the unit tests might not start to fail (they might not even be run any more), so I would get no indication that something is wrong. That indication is exactly what unit tests are supposed to provide for me.

I've hit issues like this on both the generate_export_header work and this LINK_INTERFACE_LIBRARIES work. In the generate_export_header case it is at least possible to run a try_compile test, and the result (and therefore the availability of the feature) is reported in the output of an invocation of cmake. LINK_INTERFACE_LIBRARIES, on the other hand, can silently have no effect, and because it is implemented in C++ code rather than a CMake module, I can't report its availability in the output of an invocation of cmake.

Having so many old platforms and compilers on the dashboard go red with failures when I add unit tests is also quite demotivating, because anything red on the dashboard is 'must fix' by definition. In some cases, having more platforms there has exposed a 'real' problem in the code, but certainly not in all.

What I'd like to see is a distinction between feature support and platform support. In my case, I care about writing features in cmake, but I don't care about Watcom, GCC 3.3.1 etc. What I'd like to do is make sure my feature works on some 'reference platforms', which could be anything non-ancient, and fixing it on the ancient ones would become not-my-problem.
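For context, the generate_export_header style of availability check looks roughly like the sketch below: compile a tiny source file and report the result in the cmake output. The file name check_visibility.cpp and the result variable name are made up here for illustration; this is not the actual module code.

```cmake
# Sketch only: probe whether a small test source compiles with the
# flag the feature depends on, and report the outcome visibly.
try_compile(HAVE_VISIBILITY_ATTRIBUTE
  ${CMAKE_BINARY_DIR}
  ${CMAKE_CURRENT_SOURCE_DIR}/check_visibility.cpp
  COMPILE_DEFINITIONS "-fvisibility=hidden"
)
if(HAVE_VISIBILITY_ATTRIBUTE)
  message(STATUS "Compiler supports visibility attributes")
else()
  message(STATUS "Compiler does NOT support visibility attributes")
endif()
```

Because the result shows up in the configure output, a user (or a dashboard) can at least see whether the feature was available. That is precisely what I can't do for LINK_INTERFACE_LIBRARIES, since it is implemented in C++ rather than in a module.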
The policy would then be that if unit tests I write fail on other non-reference platforms, it wouldn't be my responsibility to fix them, and it wouldn't prevent the feature being merged to master; instead it would fall to whoever cares about Watcom, GCC 3.3.1 etc.

So I think I'm talking about two separate issues. One is whether failures on platforms I don't care about are my responsibility, and the other is ensuring that unit tests fail when something breaks. Rather than running a try_compile to see whether LINK_INTERFACE_LIBRARIES works, I would test if(CMAKE_REFERENCE_PLATFORM). A reference platform is defined as a platform on which the features of cmake are available. Then if LINK_INTERFACE_LIBRARIES failed on some reference platform, I would know about it.

There would need to be some easy way of excluding platforms from being reference platforms. Currently that is difficult, because you have to look at the failures and passes on the dashboard, figure out what the difference between them is, and find a way to exclude one without accidentally excluding the other.

I don't know if there is a workable solution to this, so I'm wondering what others think: Is it workable to have a set of 'reference platforms' (which may or may not be communicated, or may just apply to the cmake cdash instance) on which unit tests can be expected to pass, and other platforms on which responsibility rests with the people who care about the platform?

Thanks,

Steve.
