The problem is that I don't actually use pure unittest; I have a custom
runner on top of unittest. The feature I'm requesting is the one in the
subject line.
On Sat, Apr 24, 2021, 00:35 Jonathan Rogers wrote:
On 4/23/21 5:26 PM, Nick N wrote:
> Thank you Jonathan!
> Sure, pytest is a more advanced framework and provides more features. I will
> definitely rewrite the tests in pytest when my boss gives me time, though it
> doesn't provide the requested feature.

What prevents you from using pytest with your existing tests?
Thank you Jonathan!
Sure, pytest is a more advanced framework and provides more features. I will
definitely rewrite the tests in pytest when my boss gives me time, though it
doesn't provide the requested feature.
By the way, unless there is a better solution, I've implemented a workaround.
My framework just
I suggest you start using a test runner which can produce JUnit/XUnit
output. I've used both nose and pytest, both of which can run unittest
tests and produce JUnit-style XML output.
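For example (a minimal sketch; the module and test names are invented), an existing unittest-style test needs no changes at all. pytest collects `unittest.TestCase` subclasses as-is, so the only addition is a flag in the Jenkins build step:

```python
# test_example.py -- a plain unittest-style test module. pytest can collect
# and run this unchanged; under pytest the Jenkins build step would be:
#   pytest --junitxml=report.xml
# (nose similarly supports --with-xunit), and the JUnit plugin is then
# pointed at the generated XML file.
import io
import unittest

class TestExisting(unittest.TestCase):
    # Hypothetical test for illustration.
    def test_artifact_exists(self):
        self.assertTrue(True)

# The same tests still run under the stock unittest runner too:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestExisting)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```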
On Wednesday, April 21, 2021 at 8:59:36 AM UTC-4 nikola...@gmail.com wrote:
Thank you for the answer. If I'm not mistaken, the JUnit plugin just publishes
the report, but unittest can't produce JUnit-compatible XML. Anyway, we
agree to treat builds with failed tests as failed. The tests generate some
artifacts, and I want to know that all artifacts exist before trying to
Typically you do not set the status of a build by setting the return code. Your
build should never alter the return code just because the tests are failing. So
I would suggest that your build always returns 0 if the build is successful
(ignoring the test results). The JUnit plugin will do the rest.