Hello, all. This is my first post to the mailing list, FWIW.

I've been a PyTest user for some years, but now I'm working on writing a
rather complex PyTest plugin to facilitate testing our service-oriented
architecture at $work. The purpose of the plugin is to collect additional
tests (for which we are using `pytest_collection_modifyitems`) from a
specialized file format that PyTest wouldn't normally understand. I've got
it working beautifully, but I would love to hammer out some finer details
to really make it a clean-running plugin.
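
In case it helps to see the shape of things, the plugin boils down to
something like the following. (This is heavily simplified; TestPlanItem,
find_test_plan_files, and parse_test_plan are made-up stand-ins for the
real code.)

    import pytest

    class TestPlanItem(pytest.Item):
        """One test case parsed out of a test plan file."""

        def __init__(self, name, parent, plan_case):
            super(TestPlanItem, self).__init__(name, parent)
            self.plan_case = plan_case

        def runtest(self):
            # Drive the parsed plan case against the service under test.
            self.plan_case.run()

    def pytest_collection_modifyitems(session, config, items):
        # find_test_plan_files() and parse_test_plan() stand in for our
        # real discovery code and PyParsing-based parser.
        for plan_path in find_test_plan_files(config):
            for case in parse_test_plan(plan_path):
                items.append(TestPlanItem(case.name, session, case))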

With that in mind, I'm looking for some suggestions / best practices /
advice here on how best to handle these situations:

Collection Errors
The author of one of these specialized tests, which we call "test plans,"
can do a few things wrong that cause more than just a failing test. For
example, they could get the file syntax wrong, which results in a parsing
error. At the moment, I catch the parsing error (we're using PyParsing to
parse our test plans) and raise a different exception describing the
problem (I've also tried AssertionError), but everything I do results in a
stack trace dumped with "INTERNALERROR>" and the immediate, abnormal
termination of the PyTest process. This contrasts with the analogous case
of a syntax error in a regular Python test file, which instead produces an
"ERROR collecting" report, while all other unaffected tests still run and
the PyTest process terminates normally. What I'm looking for here is the
"right way" to make my plugin report an "ERROR collecting" instead of
causing an "INTERNALERROR>" when such a problem occurs while collecting
these specialized tests.
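
For reference, the parsing/collection side currently does roughly this
(again simplified; load_plan_cases and plan_grammar are stand-ins for the
real names):

    import pyparsing

    def load_plan_cases(plan_path):
        try:
            # plan_grammar is our PyParsing grammar for the plan format.
            return plan_grammar.parseFile(plan_path)
        except pyparsing.ParseException as exc:
            # Whatever I raise here -- this, AssertionError, anything --
            # comes out as an INTERNALERROR> dump and aborts the whole
            # run, instead of being reported as a collection error for
            # just this file.
            raise RuntimeError(
                "syntax error in test plan %s: %s" % (plan_path, exc))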

Improving the Test Error Report
When tests have errors or failures, PyTest prints a report of those errors,
and with each one includes an exhaustive stack trace. For our specialized
tests, with some clever manipulation of the test data added to `items` in
`pytest_collection_modifyitems`, I've managed to get it to display a
meaningful name for the test that failed, so that it's at least _possible_
for the developer to figure out which test, in which test file, failed (a
simplified version of that manipulation is shown below). However, almost
the entire stack trace is meaningless, full of plugin code instead of
information useful for debugging the test. Additionally, I would _love_ to
be able to add the actual test file and line number of the failure to the
report. Would this be possible? Or, at the very least, would it be possible
to filter the plugin code out of the stack trace so that it doesn't
distract the developer from simply viewing the assertion error details?
What approach should I take to achieve this?
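
For what it's worth, the naming trick is nothing fancier than baking the
plan file and case name into the item's name when it is appended in
pytest_collection_modifyitems (a simplified fragment of the sketch above;
plan_path and case come from the collection loop):

    import os

    # Inside the collection loop shown earlier:
    item_name = "%s::%s" % (os.path.basename(str(plan_path)), case.name)
    items.append(TestPlanItem(item_name, session, case))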

Thanks in advance for any help you can provide,

Nick