Hello all,

I think it would be lovely if trial caught up to the last decade of
advances in coverage measurement technology.  I *think* this means
integrating with coverage.py <https://pypi.org/project/coverage> - probably
the hands-down leader in Python coverage technology for at least the last
10 years, if not more - instead of the stdlib "trace" module, which is ...
something else.  Or maybe there's an even better option out there
somewhere - it would be amazing if all of the trial-based test suites out
there got *whatever* the best current option is.  Why should every project
have to figure this out for itself?

When was the last time anyone ran trial --coverage on purpose?  Did they
realize they were choosing the bad option?

I know that you can hack around this situation roughly like this:

python -m coverage run -m twisted.trial ...

but this has some shortcomings.

   1. If trial --coverage exists, shouldn't it be the *good* option?
   2. python -m coverage run -m twisted.trial -jN ... is a bad time.  How
   about some coverage measurement that's multi-core friendly?  It's a
   *real* drag going from a 30 second no-coverage test run using 16 cores
   to a 15 minute coverage-measuring run on a single core.  (A sketch of
   what coverage.py's own multi-process support looks like follows below.)
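
For what it's worth, here is roughly what wiring coverage.py's own
multi-process support into a parallel trial run looks like today.  This is
only a sketch: it assumes trial's -jN workers are ordinary child Python
processes that inherit the parent's environment, and the package name and
file locations are just illustrative.

    # .coveragerc - one data file per process, measure only our package
    [run]
    source = mypackage
    branch = True
    parallel = True

    # sitecustomize.py (or a .pth file) somewhere on the workers'
    # sys.path, so each child process starts measurement at startup.
    # process_startup() is a no-op unless COVERAGE_PROCESS_START is set.
    import coverage
    coverage.process_startup()

    # Run the suite, then merge the per-process data files and report.
    COVERAGE_PROCESS_START=.coveragerc python -m coverage run \
        -m twisted.trial -j16 mypackage
    python -m coverage combine
    python -m coverage report

Even when all of that works, it's a fair amount of ceremony to ask of
every project - which is sort of the point.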

Does anyone agree that this is something short of an ideal situation?  Is
anyone interested in helping address it?

Jean-Paul