One solution is simply to group the "exotic" tests separately from the
main tests, so they can be run optionally when you are in that exotic
configuration.
You can do this in several ways, including a naming convention, or
a parallel code tree for those tests...
I like the latter, as it makes them easier to "see".
geir
Mikhail Loenko wrote:
Well, let's start a new thread, as this is a more general problem.
Suppose we have some code designed for exotic configurations,
and we have tests that verify that exotic code.
When run in the usual (non-exotic) configuration, such a test should
report something that would not scare people. But someone who
wants to test that specific exotic configuration should be
able to easily verify that the required configuration was set up
successfully and that the test ran well.
I see the following options here:
1) introduce a new test status (like "skipped") to mark tests that
did not actually run
2) agree on exact wording that skipped tests would print, so the
logs can be grepped later
3) introduce indicator tests that fail when the current
configuration disallows running certain tests
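Option 2 above could be sketched roughly as follows. This is only an illustration, not actual Harmony test code: the property name "exotic.config.enabled" and the marker wording are hypothetical placeholders for whatever the project would agree on.

```java
// Sketch of option 2: print an agreed-upon marker when the required
// configuration is absent, so skipped tests can be grepped from logs.
// NOTE: "exotic.config.enabled" is a hypothetical switch, not a real flag.
public class ExoticConfigTest {

    // Exact wording agreed upon in advance (option 2), so a later
    // "grep SKIPPED" over the logs finds every test that did not run.
    static final String SKIP_MARKER = "SKIPPED (required configuration absent): ";

    // Returns the line this test would print in the log.
    static String runGuarded() {
        if (!Boolean.getBoolean("exotic.config.enabled")) {
            // Not in the exotic configuration: report a recognizable
            // skip instead of a scary failure.
            return SKIP_MARKER + ExoticConfigTest.class.getName();
        }
        // ... real checks of the exotic code would go here ...
        return "PASSED: " + ExoticConfigTest.class.getName();
    }

    public static void main(String[] args) {
        System.out.println(runGuarded());
    }
}
```

Option 1 would replace the printed marker with a first-class "skipped" status in the harness, which is cleaner but requires changing the test framework rather than just the tests.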
Please let me know what you think
Thanks,
Mikhail Loenko
Intel Middleware Products Division