On Thu, Oct 23, 2014 at 7:46 AM, Bruno Oliveira <[email protected]> wrote:
> Hi all,
>
> On Thu, Oct 23, 2014 at 11:48 AM, holger krekel <[email protected]> wrote:
>> Bruno Oliveira did a PR that satisfies his company's particular
>> requirements, in that GUI tests must run after all others and pinned to a
>> particular node.
>
> To describe my requirements in a little more detail:
>
> We have a lot of GUI tests and most (say, 95%) work in parallel without
> any problems.
>
> A few of them, though, deal with window input focus, and those tests
> specifically can't run in parallel with other tests. For example, a test
> might:
>
> 1. create a widget and display it
> 2. edit a GUI control, obtaining input focus
> 3. lose edit focus on purpose during the test, asserting that some other
>    behavior occurs in response
>
> When this type of test runs in parallel with others, sometimes the other
> tests will pop up their own widgets and the first test will lose input
> focus at an unexpected place, causing it to fail.
>
> Other tests take screenshots of 3D rendering windows for regression
> testing against known "good" screenshots. Those are susceptible to the
> same problem: a window popping up in front of another right before a
> screenshot can change the image and fail the test.
>
> In summary, some of our tests can't run in parallel with any other tests.
>
> The current workaround is that we apply a special mark to the tests that
> can't run in parallel and run pytest twice: one session in parallel
> excluding the marked tests, and another regular, non-parallel session with
> only the marked tests. This works, but it is error-prone when developers
> run the tests from the command line, and it makes error reports cumbersome
> to read (the first problem is alleviated a bit with a custom script).
>
> Others have discussed further use cases in this thread:
> https://bitbucket.org/hpk42/pytest/issue/175.
> My PR takes care to distribute marked tests to a single node after all
> other tests have executed, but after some discussion it is clear that
> there might be more use cases out there, so we would like to hear them
> before deciding on the best interface options.
>
> Cheers,
>
> On Thu, Oct 23, 2014 at 11:48 AM, holger krekel <[email protected]> wrote:
>> Hi all,
>>
>> Currently, pytest-xdist in load-balancing mode ("-n4") does not support
>> grouping of tests or influencing how tests are distributed to nodes.
>> Bruno Oliveira did a PR that satisfies his company's particular
>> requirements, in that GUI tests must run after all others and pinned to a
>> particular node.
>> (https://bitbucket.org/hpk42/pytest-xdist/pull-request/11/)
>>
>> My question is: what needs do you have wrt test distribution with xdist?
>> I'd like to collect and write down some user stories before we design
>> options/mechanisms and then possibly get to implement them. Your input
>> is much appreciated.
>>
>> best and thanks,
>> holger

I think it would be useful to pin or exclude marks from individual nodes.
We have marks like requires_abc, and right now we have logic that checks
whether the requirement is met before running tests and marks them as skip
if it is not. But I know beforehand which requirements can be satisfied on
which nodes, and would like to not even send tests that cannot run to a
node.
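For reference, the check-then-skip workaround described above can be sketched as a small conftest.py hook. This is only an illustration under assumptions: the mark name requires_abc comes from the message, but the NODE_CAPABILITIES environment variable and the capability-matching scheme are hypothetical, and it uses the modern item.iter_markers() API.

```python
# conftest.py -- a minimal sketch of the "skip if requirement not met"
# workaround described above, NOT the proposed xdist feature.
# Hypothetical convention: a node advertises its capabilities via an
# environment variable, e.g. NODE_CAPABILITIES="abc,gui".
import os

import pytest

NODE_CAPABILITIES = set(os.environ.get("NODE_CAPABILITIES", "").split(","))


def pytest_collection_modifyitems(config, items):
    # For every test marked requires_<something>, add a skip marker when
    # this node does not advertise that capability.
    for item in items:
        for mark in item.iter_markers():
            if mark.name.startswith("requires_"):
                requirement = mark.name[len("requires_"):]
                if requirement not in NODE_CAPABILITIES:
                    item.add_marker(pytest.mark.skip(
                        reason="node lacks requirement: %s" % requirement
                    ))
```

The downside, as noted above, is that the skipped tests are still collected and sent to the node; being able to exclude them from distribution entirely would avoid that overhead.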
_______________________________________________
pytest-dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/pytest-dev
