Hello,
I would like to make sure that a test fails when it results in an
uncaught exception - even if the exception happens in a destructor or in
a separate thread.
What's the best way to do this?
My idea is to modify sys.excepthook such that it keeps a list of all the
exceptions encountered dur
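(The preview cuts off here. A minimal sketch of that idea as a conftest.py, with the assumption of Python 3.8+: exceptions in threads and destructors never reach sys.excepthook itself, so the sketch uses threading.excepthook and sys.unraisablehook instead; all names are illustrative, not from the original post.)

import sys
import threading
import pytest

_uncaught = []

def _thread_hook(args):
    # called for exceptions that escape a Thread's run()
    _uncaught.append(args.exc_value)

def _unraisable_hook(unraisable):
    # called for exceptions CPython cannot propagate, e.g. in __del__
    _uncaught.append(unraisable.exc_value)

threading.excepthook = _thread_hook
sys.unraisablehook = _unraisable_hook

@pytest.fixture(autouse=True)
def fail_on_uncaught():
    _uncaught.clear()
    yield
    assert not _uncaught, "uncaught exceptions: %r" % (_uncaught,)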
On 06/01/2013 06:42 PM, Adam Goucher wrote:
>> I would like to make sure that a test fails when it results in an
>> uncaught exception - even if the exception happens in a destructor or in
>> a separate thread.
>>
>> What's the best way to do this?
>>
>> My idea is to modify sys.excepthook such tha
On 06/03/2013 02:42 AM, holger krekel wrote:
> Hi Nikolaus,
>
> On Sat, Jun 01, 2013 at 14:41 -0700, Nikolaus Rath wrote:
>> Hello,
>>
>> I would like to make sure that a test fails when it results in an
>> uncaught exception - even if the exception happens in
Hello,
I have a problem accessing the captured stderr. I have set up the
following test:
$ ls mytest/
conftest.py test_one.py
$ cat mytest/conftest.py
import pytest

@pytest.fixture(autouse=True)
def add_stderr_check(request, capsys):
    def check_stderr():
        stderr = capsys.readouterr()
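(The snippet is cut off above. A plausible completion of the pattern — the finalizer registration and the failure condition are guesses, not the original code:)

import pytest

@pytest.fixture(autouse=True)
def add_stderr_check(request, capsys):
    def check_stderr():
        # .err needs a pytest recent enough to return a named tuple here
        stderr = capsys.readouterr().err
        assert not stderr, "unexpected output on stderr: %r" % stderr
    request.addfinalizer(check_stderr)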
Hello,
I don't understand why the following test fixture works for the test
function, but fails for the test method. Am I doing something wrong, or
is this a bug?
$ cat t0_mine.py
import pytest
import unittest

@pytest.yield_fixture(params=(0,1,2))
def param1(request):
    if request.param == 1:
        if self.param2[0] == 1:
            assert self.param1a == 0
        else:
            assert self.param1a == 7
Thanks,
-Nikolaus
"Florian Schulze" writes:
> Hi!
>
> You didn't include the test_method, maybe you just forgot the 'self'
> as first parameter?
>
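(For reference, a sketch of the fix Florian suggests — class and method names are made up: on a plain test class the fixture is injected as usual, but 'self' must come first in the signature.)

class TestExample:
    def test_method(self, param1):  # 'self' first, then the fixture
        assert param1 is not None

Note also that parametrized fixtures are not injected as arguments into unittest.TestCase methods at all, which can likewise make a fixture work for a test function but fail for a test method.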
Hello,
I am trying to debug a rather peculiar failure of one of my test
cases. I have narrowed it down to a function being called by a thread
that shouldn't be calling it, and without any apparent caller.
When I set a breakpoint in the function using pdb, I get the following
traceback:
> /home/nikr
Hello,
I have a parametrized fixture foo, and some test cases that use it:
@pytest.fixture(params=(1,2,3,4,5))
def foo(request):
    # ...
    return foo_f

def test_1(foo):
    # ...

def test_2(foo):
    #
Now, I'd like to add an additional test, but running it only makes sense
for some o
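(One way to do that, sketched with indirect parametrization, which overrides the fixture's own parameter set for a single test; the restricted values and the fixture body are illustrative:)

import pytest

@pytest.fixture(params=(1, 2, 3, 4, 5))
def foo(request):
    return request.param  # stand-in for the real foo_f

@pytest.mark.parametrize("foo", [1, 2], indirect=True)
def test_3(foo):
    # runs only for foo values 1 and 2
    assert foo in (1, 2)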
Carl Meyer writes:
> Hi Nikolaus,
>
> On 11/26/2014 01:21 PM, Nikolaus Rath wrote:
>> I have a parametrized fixture foo, and some test cases that use it:
>>
>> @pytest.fixture(params=(1,2,3,4,5))
>> def foo(request):
>> # ...
>>
Hello,
Is there a way to access test markers in the pytest_generate_tests
hook?
For example:
@pytest.mark.bla
def test_one():
    # ...

def pytest_generate_tests(metafunc):
    if metafunc_has_bla_mark(metafunc):
        # do something
But what do you have to use for metafunc_has_bla_mark? Unfo
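(On modern pytest, one answer is the node-based marker API; a sketch, with the parametrization inside purely illustrative:)

def pytest_generate_tests(metafunc):
    if metafunc.definition.get_closest_marker("bla") is not None:
        metafunc.parametrize("value", [1, 2, 3])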
Hello,
I have a test that starts a background process, and then checks its
response to various queries. Now, I would like to test for particular
bug where some query causes the server to crash. However, there's an
interesting problem with this:
When running with capturing disabled, I can see t
On Feb 17 2016, Nikolaus Rath wrote:
> If I explicitly ignore the test failure, I am able to see the full
> captured output. In other words, if
>
> def my_test():
>     check_server_response()
>
> becomes
>
> def my_test():
>     try:
>         che
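(A plausible completion of the truncated quote — the except body is a guess:)

def my_test():
    try:
        check_server_response()
    except Exception:
        pass  # swallowing the failure makes the captured output visible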
On Feb 18 2016, Bruno Oliveira wrote:
> Hi Nikolaus,
>
> On Wed, Feb 17, 2016 at 7:07 PM Nikolaus Rath wrote:
>
>> Is there a way to add a delay that is (1) only incurred when a test
>> fails and (2) executed right after the test fails (i.e., before the capfd
>
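(One sketch of such a delay, using a hookwrapper around report generation so it only runs for failures; the one-second value is illustrative:)

import time
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        time.sleep(1.0)  # let the background process flush its output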
Hello,
Is there a way to access a test's fixtures from the pytest_runtest_call
hook?
Specifically, I'd like to access the capfd and caplog fixtures to take a
peek at the captured output and cause the test to fail if there is
anything suspicious.
At the moment, I am checking the captured output b
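(One sketch: inside a hook, the fixture instances a test actually requested are reachable through item.funcargs. Note that readouterr() consumes the captured text, so the test's own reads would miss it; the failure condition is illustrative:)

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    yield
    capfd = item.funcargs.get("capfd")  # None unless the test uses capfd
    if capfd is not None:
        out, err = capfd.readouterr()
        if "Traceback" in err:
            pytest.fail("suspicious output on stderr: %r" % err)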
Hello,
I am testing a server. I am starting the server in a subprocess from a
fixture, and then issuing requests to it from various tests. Unfortunately
this seems to have one big drawback: when a test fails, the capture
plugin does not display any output from the server process.
After looking at t
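(For what it's worth, a sketch of the setup described above — the server command is a placeholder. Letting the child inherit stdout/stderr means fd-level capturing via capfd sees its output, which plain capsys capturing does not:)

import subprocess
import pytest

@pytest.fixture
def server(capfd):
    proc = subprocess.Popen(["./my-server"])  # inherits the captured fds
    yield proc
    proc.terminate()
    proc.wait()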
On May 28 2016, holger krekel wrote:
> Hi Nikolaus,
>
> On Thu, May 19, 2016 at 12:34 -0700, Nikolaus Rath wrote:
>> Hello,
>>
>> I am testing a server. I am starting the server in a subprocess from a
>> fixture, and then issuing requests to it from various test
On Jun 01 2016, holger krekel wrote:
> On Tue, May 31, 2016 at 10:17 -0700, Nikolaus Rath wrote:
>> On May 28 2016, holger krekel wrote:
>> > Hi Nikolaus,
>> >
>> > On Thu, May 19, 2016 at 12:34 -0700, Nikolaus Rath wrote:
>> >> Hello,
On Jun 10 2016, Florian Bruhin wrote:
> In no particular order:
>
> pytest
> - has a weirdly structured documentation
Very much agree.
> pytest-catchlog
> - is still not in pytest-dev or core or renamed to -logging
> - still doesn't have pytest-logging functionality integrated
>
On Jun 10 2016, Florian Bruhin wrote:
> * Nikolaus Rath [2016-06-10 09:26:34 -0700]:
>> On Jun 10 2016, Florian Bruhin wrote:
>> > pytest-catchlog
>> > [...]
>> > - has no way to fail tests on unexpected error logging messages
Hello,
I am currently using code like this:
def pytest_generate_tests(metafunc, _info_cache=[]):
    # [...]
    fn = metafunc.function
    do_something(fn.with_backend.kwargs)
    for spec in fn.with_backend.args:
        # [...]
This has started to emit a warning:
/home/nikratio/in-pro
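(The warning comes from reading marker data off the function object; a sketch of the node-based replacement, with do_something as in the original:)

def pytest_generate_tests(metafunc):
    marker = metafunc.definition.get_closest_marker("with_backend")
    if marker is not None:
        do_something(marker.kwargs)
        for spec in marker.args:
            pass  # [...]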
On Fri, 21 Sep 2018, at 23:06, Ronny Pfannschmidt wrote:
> Am Freitag, den 21.09.2018, 12:44 +0100 schrieb Nikolaus Rath:
> > Hello,
> >
> > I am currently using code like this:
> >
> > def pytest_generate_tests(metafunc, _info_cache=[]):
> >     # [...]