>> Maybe "@serializeTest"?
>
> That's worth a try.

I have a slight worry that this approach could mask real concurrency
issues in LLDB itself; if we know the issues are only in the test
code, it seems more reasonable.  Even then, though, tests could still
run concurrently on the same machine, e.g. on a buildbot host that
builds and tests multiple configurations.
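For what it's worth, the decorator idea could be as simple as a shared lock (the name serializeTest comes from the quoted suggestion; everything else here is a hypothetical sketch).  Note this only serializes tests within a single test-runner process, so it would not help with the cross-process buildbot scenario above:

```python
import functools
import threading

# One process-wide lock shared by all decorated tests.
_serial_lock = threading.Lock()

def serializeTest(func):
    """Hypothetical decorator: decorated tests never run concurrently
    with each other inside this test-runner process."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        with _serial_lock:
            return func(*args, **kwargs)
    return wrapper

@serializeTest
def test_example():
    return "ran serially"
```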

> In the event that your idea works out to eliminate those failures, there is 
> another element we can consider.  We can have Python give us the # cores 
> available (if there's something portable to do that), and when not otherwise 
> specified, pick a reasonable default # threads to get decent test run 
> performance without needing the environment variable specified.  This would 
> give anybody running the tests a decent test run speedup without having to 
> read docs on configuring the environment variable.  (I think this was Steve 
> Pucci's idea but definitely something we discussed.)

I was going to make that suggestion too.  Python does have a portable
way to get the core count:

>>> import multiprocessing
>>> multiprocessing.cpu_count()
8

Anyhow, explicitly serializing the tests that intermittently fail
would be no worse than today, and worth it to improve the cycle time.
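Sketching the core-count default (the environment variable name here is made up for illustration; the source does not name one):

```python
import multiprocessing
import os

def default_test_threads():
    """Pick a thread count for the test runner: honor an explicit
    environment variable if set, otherwise use the core count."""
    explicit = os.environ.get("LLDB_TEST_THREADS")  # hypothetical name
    if explicit is not None:
        return max(1, int(explicit))
    try:
        return multiprocessing.cpu_count()
    except NotImplementedError:
        # cpu_count() can raise on platforms where it is unsupported.
        return 1
```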

_______________________________________________
lldb-dev mailing list
[email protected]
http://lists.cs.uiuc.edu/mailman/listinfo/lldb-dev
