unittest test discovery: regular packages vs. namespace packages

2020-07-10 Thread Ralf M.

Hello,

To my last question I got many helpful and enlightening answers.
So I'll try again with a completely different topic.

https://docs.python.org/3/library/unittest.html#test-discovery
says about test discovery:

"Unittest supports simple test discovery. In order to be compatible with 
test discovery, all of the test files must be modules or packages 
(including namespace packages) importable from the top-level directory 
of the project (this means that their filenames must be valid identifiers).

[...]
Note: As a shortcut, python -m unittest is the equivalent of python -m 
unittest discover."


Therefore I expected
python -m unittest
to run all tests in the current directory and its subdirectories, 
regardless of whether the subdirectories contain a __init__.py or not.


However, this only works for me if an (empty) __init__.py is present, 
i.e. the package is a regular one. As soon as I delete the __init__.py, 
turning the regular package into a namespace package, unittest doesn't 
find the test files any more. When I recreate __init__.py, the test file 
(e.g. test_demo.py) is found again. See demo session further down.


I tried the following Python versions, all showed the same behavior:
 Python 3.7.1 (python.org) on Win10
 Python 3.7.4 (python.org) on Win7
 Python 3.7.7 (Anaconda) on Win10
 Python 3.8.3 (Anaconda) on Win10

What am I missing / misunderstanding?
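
(One detail I did notice: pointing discovery at the directory explicitly, e.g.

py -m unittest discover -s test_we4n7uke5vx

seems to pick the tests up even without an __init__.py, presumably because
test_demo.py is then imported as a top-level module rather than as part of a
package. But that doesn't explain why the plain invocation ignores it.)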


Demo session:

F:\demo>dir /s /b
F:\demo\test_we4n7uke5vx
F:\demo\test_we4n7uke5vx\test_demo.py
F:\demo\test_we4n7uke5vx\__init__.py

F:\demo>type test_we4n7uke5vx\__init__.py

F:\demo>type test_we4n7uke5vx\test_demo.py
import unittest
class SomeTestCase(unittest.TestCase):
    def test_fail_always(self):
        self.assertTrue(False)

F:\demo>py -m unittest
F
======================================================================
FAIL: test_fail_always (test_we4n7uke5vx.test_demo.SomeTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "F:\demo\test_we4n7uke5vx\test_demo.py", line 4, in test_fail_always
    self.assertTrue(False)
AssertionError: False is not true

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1)

F:\demo>del test_we4n7uke5vx\__init__.py

F:\demo>py -m unittest

----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK

F:\demo>echo # > test_we4n7uke5vx\__init__.py

F:\demo>py -m unittest
F
======================================================================
FAIL: test_fail_always (test_we4n7uke5vx.test_demo.SomeTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "F:\demo\test_we4n7uke5vx\test_demo.py", line 4, in test_fail_always
    self.assertTrue(False)
AssertionError: False is not true

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1)

F:\demo>
--
https://mail.python.org/mailman/listinfo/python-list


Re: Problem while integrating unittest with setuptools

2019-09-02 Thread YuXuan Dong
Finally I found out why my setup.py doesn't work. I didn't put an `__init__.py` in
my `test` folder, so Python doesn't treat it as a package. That's why it found
the wrong package.
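
For anyone hitting the same thing, the working layout is roughly this (a
sketch; only the `test` name is from my project, the rest is illustrative):

myproject/
    setup.py          # contains test_suite="test"
    test/
        __init__.py   # must exist so `test` resolves to this package
        test.py

With the `__init__.py` in place, `test_suite="test"` resolved to my local
package instead of the stdlib `test` package.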

Thank you.

On Mon, Sep 02, 2019 at 04:28:50PM +0200, dieter wrote:
> YuXuan Dong  writes:
> > I have uninstalled `six` using `pip uninstall six` but the problem is still 
> > there.
> 
> Your traceback shows that `six` does not cause your problem.
> It is quite obvious that a `test_winreg` will want to load the
> `winreg` module.
> 
> > As you suggested, I have checked the traceback and found the exception is 
> > caused by 
> > `/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/test/test_winreg.py`
> >  on my machine with homebrew-installed Python3.
> >
> > I have realized that the problem may be caused by `test_suite="test"` in my 
> > `setup.py`. `setuptools` will find the `test` package. But the package it 
> > found is not in my project. It found 
> > `/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/test`.
> 
> Yes. Apparently, you run the Python test suite - and depending on
> platform and available infrastructure some of its tests fail.
> 
> In my package `dm.xmlsec.binding` I use
> `test_suite='dm.xmlsec.binding.tests.testsuite'`.
> Maybe, you can ensure to get the correct test suite in a similar
> way for your package.
> 
> -- 
> https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Problem while integrating unittest with setuptools

2019-09-02 Thread dieter
YuXuan Dong  writes:
> I have uninstalled `six` using `pip uninstall six` but the problem is still 
> there.

Your traceback shows that `six` does not cause your problem.
It is quite obvious that a `test_winreg` will want to load the
`winreg` module.

> As you suggested, I have checked the traceback and found the exception is 
> caused by 
> `/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/test/test_winreg.py`
>  on my machine with homebrew-installed Python3.
>
> I have realized that the problem may be caused by `test_suite="test"` in my 
> `setup.py`. `setuptools` will find the `test` package. But the package it 
> found is not in my project. It found 
> `/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/test`.

Yes. Apparently, you run the Python test suite - and depending on
platform and available infrastructure some of its tests fail.

In my package `dm.xmlsec.binding` I use
`test_suite='dm.xmlsec.binding.tests.testsuite'`.
Maybe, you can ensure to get the correct test suite in a similar
way for your package.
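
A sketch of that approach (the names below are illustrative, not taken
from dm.xmlsec.binding):

# mypackage/tests/__init__.py
import unittest

def testsuite():
    # Build the suite from explicit names, so setuptools cannot
    # wander off into the stdlib `test` package.
    loader = unittest.TestLoader()
    return loader.loadTestsFromNames(["mypackage.tests.test_core"])

# and in setup.py:
#     setup(..., test_suite="mypackage.tests.testsuite")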

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Problem while integrating unittest with setuptools

2019-09-02 Thread YuXuan Dong
Thank you. It helps.

I have uninstalled `six` using `pip uninstall six` but the problem is still 
there.

As you suggested, I have checked the traceback and found the exception is 
caused by 
`/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/test/test_winreg.py`
 on my machine with homebrew-installed Python3.

I have realized that the problem may be caused by `test_suite="test"` in my 
`setup.py`. `setuptools` will find the `test` package. But the package it found 
is not in my project. It found 
`/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/test`.

To verify my guess, I renamed the `test` folder in the above folder. As 
expected, another exception was raised.

The question now is: how do I configure my `setup.py` to work properly?

On 2019/9/2, 13:07, "Python-list on behalf of dieter" 
 wrote:

YuXuan Dong  writes:
> I met a problem while I ran `python setup.py test`: 
>
>   unittest.case.SkipTest: No module named 'winreg'
> ... no windows modules should be necessary ...

I have seen apparently unexplainable "no module named ..." messages
as a side effect of the use of "six".

"six" is used to facilitate the development of components usable
for both Python 2 and Python 3. Among others, it contains the
module "six.moves" which uses advanced Python features to allow
the import of Python modules from different locations in the
package hierarchy and also handles name changes. This can
confuse other components using introspection (they see
modules in "six.moves" which are not really available).

To find out if something like this is the cause of your problem,
please try to get a traceback. It might be necessary to
use a different test runner for this (I very much like "zope.testrunner"
and my wrapper "dm.zopepatches.ztest").

-- 
https://mail.python.org/mailman/listinfo/python-list


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Need help: integrating unittest with setuptools

2019-09-02 Thread YuXuan Dong
No, it doesn't. The stackoverflow question you posted is about the renaming of 
`winreg`.  `_winreg`  is renamed to `winreg`. That's why the poster can't find 
the module.  My program is written for and running on unix-like systems. I 
think `winreg` should not appear here. I have tried running `pip3 install 
winreg` on MacOS and I got: `Could not find a version that satisfies the 
requirement winreg`.

On 2019/9/2, 08:11, "Python-list on behalf of Sayth Renshaw" 
 wrote:

On Monday, 2 September 2019 04:44:29 UTC+10, YuXuan Dong  wrote:
> Hi, everybody:
> 
> I have met a problem while I ran `python setup.py test`:
> 
>   unittest.case.SkipTest: No module named 'winreg'
> 
> I ran the command on MacOS and my project is written only for UNIX-like
systems. I don't use any Windows-specific API. How does `winreg` come in here?
> 
> In my `setup.py`:
> 
>   test_suite="test"
> 
> In my `test/test.py`:
> 
>   import unittest
> 
>   class TestAll(unittest.TestCase):
>   def testall(self):
>   return None
> 
> It works if I run `python -m unittest test.py` alone but raises the
above exception if I run `python setup.py test`.
> 
> I've been working on this the whole day, searching for every keyword I can
think of with Google, but can't find why or how. Could you help me? Thanks.
> 
> --
> YX. D.

Does this help?

https://stackoverflow.com/questions/4320761/importerror-no-module-named-winreg-python3

Sayth
-- 
https://mail.python.org/mailman/listinfo/python-list


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Problem while integrating unittest with setuptools

2019-09-01 Thread dieter
YuXuan Dong  writes:
> I met a problem while I ran `python setup.py test`: 
>
>   unittest.case.SkipTest: No module named 'winreg'
> ... no windows modules should be necessary ...

I have seen apparently unexplainable "no module named ..." messages
as a side effect of the use of "six".

"six" is used to facilitate the development of components usable
for both Python 2 and Python 3. Among others, it contains the
module "six.moves" which uses advance Python features to allow
the import of Python modules from different locations in the
package hierarchy and also handles name changes. This can
confuse other components using introspection (they see
modules in "six.move" which are not really available).

To find out if something like this is the cause of your problem,
please try to get a traceback. It might be necessary to
use a different test runner for this (I very much like "zope.testrunner"
and my wrapper "dm.zopepatches.ztest").
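
A tiny illustration of the introspection effect (assuming "six" is
installed; the import behaviour depends on the platform):

import six.moves
# "winreg" is listed as an apparently importable entry on every OS...
print("winreg" in dir(six.moves))   # True
# ...but actually importing it only works on Windows:
# from six.moves import winreg      # ImportError on Linux/macOS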

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Need help: integrating unittest with setuptools

2019-09-01 Thread Sayth Renshaw
On Monday, 2 September 2019 04:44:29 UTC+10, YuXuan Dong  wrote:
> Hi, everybody:
> 
> I have met a problem while I ran `python setup.py test`:
> 
>   unittest.case.SkipTest: No module named 'winreg'
> 
> I ran the command on MacOS and my project is written only for UNIX-like
> systems. I don't use any Windows-specific API. How does `winreg` come in here?
> 
> In my `setup.py`:
> 
>   test_suite="test"
> 
> In my `test/test.py`:
> 
>   import unittest
> 
>   class TestAll(unittest.TestCase):
>   def testall(self):
>   return None
> 
> It works if I run `python -m unittest test.py` alone but raises the above
> exception if I run `python setup.py test`.
> 
> I've been working on this the whole day, searching for every keyword I can
> think of with Google, but can't find why or how. Could you help me? Thanks.
> 
> --
> YX. D.

Does this help?
https://stackoverflow.com/questions/4320761/importerror-no-module-named-winreg-python3

Sayth
-- 
https://mail.python.org/mailman/listinfo/python-list


Need help: integrating unittest with setuptools

2019-09-01 Thread YuXuan Dong
Hi, everybody:

I have met a problem while I ran `python setup.py test`:

unittest.case.SkipTest: No module named 'winreg'

I ran the command on MacOS and my project is written only for UNIX-like
systems. I don't use any Windows-specific API. How does `winreg` come in here?

In my `setup.py`:

test_suite="test"

In my `test/test.py`:

import unittest

class TestAll(unittest.TestCase):
def testall(self):
return None

It works if I run `python -m unittest test.py` alone but raises the above
exception if I run `python setup.py test`.

I've been working on this the whole day, searching for every keyword I can
think of with Google, but can't find why or how. Could you help me? Thanks.

--
YX. D.

-- 
https://mail.python.org/mailman/listinfo/python-list


Problem while integrating unittest with setuptools

2019-09-01 Thread YuXuan Dong
Hi, everybody:

I met a problem while I ran `python setup.py test`: 

unittest.case.SkipTest: No module named 'winreg'

I ran the command on MacOS and my project is written only for UNIX-like
systems. I don't use any Windows-specific API. How does `winreg` come in here?

In my `setup.py`:

test_suite="test"

In my `test/test.py`:

import unittest

class TestAll(unittest.TestCase):
def testall(self):
return None

I've been working on this the whole day, searching for every keyword I can
think of with Google, but can't find why or how. Could you help me? Thanks.

--
YX. D.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benchmarking Django on PyPy with unittest?

2018-02-08 Thread Etienne Robillard


Hi Dan,

Thank you for your reply.

I never used the requests module, but it looks like a very good pick to 
test the Django wsgi handler. :-)


Currently my tests/benchmarks folder looks like this:

tests/benchmarks/lib
tests/benchmarks/lib/django_sqlite        # Django 1.11 specific project directory
tests/benchmarks/lib/django_sqlite/myapp  # Test app for benchmarking templates, views, and core django api
tests/benchmarks/lib/django_sqlite/polls  # Tutorial app for benchmarking django orm/sqlite on pypy and cpython
tests/benchmarks/uwsgi                    # Testsuite for uWSGI
tests/benchmarks/django1                  # Testsuite for Django 1.11
tests/benchmarks/django2                  # Testsuite for Django 2

To run the benchmarks testsuite on pypy:

$ cd tests
$ source ./djangorc
$ pypy ./run.py -C benchmarks/uwsgi    # Run testsuite for the uwsgi handler
$ pypy ./run.py -C benchmarks/django1  # Run testsuite for the django 1.11 api
$ pypy ./run.py -C benchmarks/django2  # Run testsuite for the django 2.0 api


Ideally, I would like to collect the benchmark data, but I have not yet 
understood how to do this. :-)



Cheers,

Etienne


On 2018-02-07 at 19:10, Dan Stromberg wrote:

You could probably use the "requests" module to time how long various
operations take in your Django website.

On Wed, Feb 7, 2018 at 2:26 AM, Etienne Robillard  wrote:

Also, i need to isolate and measure the speed of gevent loop engine
(gevent.monkey), epoll, and python-specific asyncio coroutines. :-)

Etienne



On 2018-02-07 at 04:39, Etienne Robillard wrote:

Hi,

is it possible to benchmark a django application  with unittest module in
order to compare and measure the speed/latency of the django orm with
sqlite3 against ZODB databases?
i'm interested in comparing raw sqlite3 performance versus ZODB (schevo).
i would like to make specific testsuite(s) for benchmarking django 1.11.7,
django 2.0, pypy, etc.

What do you think?

Etienne


--
Etienne Robillard
tkad...@yandex.com
https://www.isotopesoftware.ca/

--
https://mail.python.org/mailman/listinfo/python-list


--
Etienne Robillard
tkad...@yandex.com
https://www.isotopesoftware.ca/

--
https://mail.python.org/mailman/listinfo/python-list


Re: Benchmarking Django on PyPy with unittest?

2018-02-07 Thread Dan Stromberg
You could probably use the "requests" module to time how long various
operations take in your Django website.
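
A rough sketch of that idea (the URL and request count are made up):

import time
import requests

def time_requests(url, n=100):
    """Fetch `url` n times and report min/average wall-clock latency."""
    timings = []
    for _ in range(n):
        start = time.perf_counter()
        requests.get(url)
        timings.append(time.perf_counter() - start)
    print("min %.4fs  avg %.4fs" % (min(timings), sum(timings) / n))

time_requests("http://localhost:8000/polls/")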

On Wed, Feb 7, 2018 at 2:26 AM, Etienne Robillard  wrote:
> Also, i need to isolate and measure the speed of gevent loop engine
> (gevent.monkey), epoll, and python-specific asyncio coroutines. :-)
>
> Etienne
>
>
>
> On 2018-02-07 at 04:39, Etienne Robillard wrote:
>>
>> Hi,
>>
>> is it possible to benchmark a django application  with unittest module in
>> order to compare and measure the speed/latency of the django orm with
>> sqlite3 against ZODB databases?
>> i'm interested in comparing raw sqlite3 performance versus ZODB (schevo).
>> i would like to make specific testsuite(s) for benchmarking django 1.11.7,
>> django 2.0, pypy, etc.
>>
>> What do you think?
>>
>> Etienne
>>
>
> --
> Etienne Robillard
> tkad...@yandex.com
> https://www.isotopesoftware.ca/
>
> --
> https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Benchmarking Django on PyPy with unittest?

2018-02-07 Thread Etienne Robillard
Also, I need to isolate and measure the speed of the gevent loop engine 
(gevent.monkey), epoll, and Python-specific asyncio coroutines. :-)


Etienne


On 2018-02-07 at 04:39, Etienne Robillard wrote:

Hi,

is it possible to benchmark a django application  with unittest module 
in order to compare and measure the speed/latency of the django orm 
with sqlite3 against ZODB databases?
i'm interested in comparing raw sqlite3 performance versus ZODB 
(schevo). i would like to make specific testsuite(s) for benchmarking 
django 1.11.7, django 2.0, pypy, etc.


What do you think?

Etienne



--
Etienne Robillard
tkad...@yandex.com
https://www.isotopesoftware.ca/

--
https://mail.python.org/mailman/listinfo/python-list


Benchmarking Django on PyPy with unittest?

2018-02-07 Thread Etienne Robillard

Hi,

Is it possible to benchmark a Django application with the unittest module 
in order to compare and measure the speed/latency of the Django ORM with 
sqlite3 against ZODB databases?
I'm interested in comparing raw sqlite3 performance versus ZODB 
(schevo). I would like to make specific testsuite(s) for benchmarking 
Django 1.11.7, Django 2.0, PyPy, etc.


What do you think?

Etienne

--
Etienne Robillard
tkad...@yandex.com
https://www.isotopesoftware.ca/

--
https://mail.python.org/mailman/listinfo/python-list


Re: Problem getting unittest tests for existing project working

2017-07-03 Thread Peter Otten
Aaron Gray wrote:

> I am trying to get distorm3's unittests working but to no avail.
> 
> I am not really a Python programmer so was hoping someone in the know
> maybe able to fix this for me.

Normally it doesn't work that way...
> 
> Here's a GitHub issue I have created for the bug :-
> 
> https://github.com/gdabah/distorm/issues/118

...though this time it does:

diff --git a/examples/tests/test_distorm3.py b/examples/tests/test_distorm3.py
index aec1d63..babaacc 100644
--- a/examples/tests/test_distorm3.py
+++ b/examples/tests/test_distorm3.py
@@ -43,11 +43,18 @@ def Assemble(text, mode):
         mode = "amd64"
     else:
         mode = "x86"
-    os.system("yasm.exe -m%s 1.asm" % mode)
+    os.system("yasm -m%s 1.asm" % mode)
     return open("1", "rb").read()
 
-class InstBin(unittest.TestCase):
+class NoTest(unittest.TestCase):
+    def __init__(self):
+        unittest.TestCase.__init__(self, "test_dummy")
+    def test_dummy(self):
+        self.fail("dummy")
+
+class InstBin(NoTest):
     def __init__(self, bin, mode):
+        NoTest.__init__(self)
         bin = bin.decode("hex")
         #fbin[mode].write(bin)
         self.insts = Decompose(0, bin, mode)
@@ -61,8 +68,9 @@ class InstBin(unittest.TestCase):
         self.assertNotEqual(self.inst.rawFlags, 65535)
         self.assertEqual(self.insts[instNo].mnemonic, mnemonic)
 
-class Inst(unittest.TestCase):
+class Inst(NoTest):
     def __init__(self, instText, mode, instNo, features):
+        NoTest.__init__(self)
         modeSize = [16, 32, 64][mode]
         bin = Assemble(instText, modeSize)
         #print map(lambda x: hex(ord(x)), bin)

Notes:

(1) This is a hack; the original code was probably written against an older 
version of unittest and is abusing the framework to some extent. The dummy 
tests I introduced are ugly but should do no harm. 

(2) I tried this on linux only and thus changed 'yasm.exe' to 'yasm' 
unconditionally.

-- 
https://mail.python.org/mailman/listinfo/python-list


Problem getting unittest tests for existing project working

2017-07-02 Thread Aaron Gray
I am trying to get distorm3's unittests working but to no avail.

I am not really a Python programmer so was hoping someone in the know maybe
able to fix this for me.

Here's a GitHub issue I have created for the bug :-

https://github.com/gdabah/distorm/issues/118


-- 
Aaron Gray

Independent Open Source Software Engineer, Computer Language Researcher,
Information Theorist, and amateur computer scientist.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Does automatic golden master unittest generation exist/or is it feasible?

2017-04-05 Thread dieter
fle...@gmail.com writes:

> I have a really large and mature codebase in py2, but with no test or 
> documentation.
>
> To resolve this I just had a simple idea to automatically generate tests and 
> this is how:
>
> 1. Have a decorator that logs all arguments and return values
>
> 2. Put them in a test case
>
> and have it run in production.

This works only for quite simple cases -- cases without state.

If you have something with state, then some function calls
may change the state, thereby changing the effect of later
function calls. This means: the same function call (same function,
same arguments) may give different results over time.

As a consequence, the initial state and the execution order
become important.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Does automatic golden master unittest generation exist/or is it feasible?

2017-04-04 Thread Steven D'Aprano
On Tue, 04 Apr 2017 19:40:03 -0700, fleshw wrote:

> People told me that there's a similar concept called "Golden Master
> Testing" where it keeps a "golden record" and runs test case on it. So
> it looks like I'm trying to automate golden master testing.
> 
> So my question is this:
> 
> 1. Is this a concept already explored?

https://duckduckgo.com/?q=Golden+Master+Testing

https://www.google.com.au/search?q=Golden+Master+Testing


Gold master testing is not a substitute for unit tests. The purpose of 
gold master testing is different: not to ensure that the code is 
correct, but to ensure that the behaviour *doesn't change*. Or at least 
not without a human review to ensure the change is desired.

You would use gold master testing when you have a legacy code base that 
needs refactoring. You don't care so much about fixing bugs (either there 
aren't any known bugs, or you have to keep them for backwards 
compatibility, or simply there's no budget for fixing them yet) but you 
care about keeping the behaviour unchanged as you refactor the code.

If that describes your situation, then gold master testing is worth 
pursuing. If not, then forget about it. Just write some unit tests.

I like small functions that take some arguments, do one thing, and return 
a result without any side-effects. They are very easy to test using 
doctests. Doctests make good documentation too, so you can kill two birds 
with one stone. Write doctests.

For anything too complicated for a doctest, or for extensive and detailed 
functional tests that exercise all the corners of your function, write 
unit tests. The `unittest` module can run your doctests too, 
automatically turning your doctests into a test case.
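
For instance, a minimal sketch using the load_tests protocol (here
"mymodule" is a stand-in for a module whose docstrings hold the doctests):

import doctest
import unittest
import mymodule

def load_tests(loader, tests, ignore):
    # Wrap the module's doctests in a TestSuite so they run
    # alongside the regular unit tests.
    tests.addTests(doctest.DocTestSuite(mymodule))
    return tests

if __name__ == "__main__":
    unittest.main()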

If your code base is well-designed, with lots of encapsulation, then 
writing unit tests are easy.

If its not... well, you have my sympathy.


 
> 2. Is there already a framework for doing this?

This came up in googling. I make no comment about its maturity, 
functionality, bugginess or overall quality, as I have never used it.

https://github.com/approvals/ApprovalTests.Python



-- 
Steve
-- 
https://mail.python.org/mailman/listinfo/python-list


Does automatic golden master unittest generation exist/or is it feasible?

2017-04-04 Thread fleshw
I have a really large and mature codebase in py2, but with no tests or 
documentation.

To resolve this I just had a simple idea to automatically generate tests and 
this is how:

1. Have a decorator that logs all arguments and return values

2. Put them in a test case

and have it run in production.

Here's a quick prototype I just wrote, just to illustrate what I'm trying to 
achieve:

http://codepad.org/bkHRbZ4R
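
In case that link rots, the idea in rough form (my own reconstruction,
not the codepad code; names are illustrative):

import functools
import json

def record_calls(func):
    """Log arguments and return value of every call, one JSON line
    each, so the records can later be replayed as test cases."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        with open("golden_record.jsonl", "a") as log:
            log.write(json.dumps({
                "func": func.__name__,
                "args": repr(args),
                "kwargs": repr(kwargs),
                "result": repr(result),
            }) + "\n")
        return result
    return wrapper

@record_calls
def add(a, b):
    return a + b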

People told me that there's a similar concept called "Golden Master Testing", 
where it keeps a "golden record" and runs test cases against it. So it looks 
like I'm trying to automate golden master testing.

So my question is this: 

1. Is this a concept already explored?

2. Is there already a framework for doing this? 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unittest

2016-07-25 Thread Terry Reedy

On 7/25/2016 12:45 PM, Joaquin Alzola wrote:

Hi Guys

I have a question related to unittest.

I suppose SW that is going live will not have any trace of the
unittest module in its code.


In order to test idlelib, I had to add a _utest=False (unittest = False) 
parameter to some functions.  They are there when you run IDLE.

I like to put

if __name__ == '__main__': ...

at the bottom of non-script files.  Some people don't like this, but it 
makes running the tests trivial while editing a file -- whether to make a 
test pass or to avoid regressions when making 'neutral' changes.
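
(Spelled out, the usual form of that pattern is something like

if __name__ == '__main__':
    unittest.main(verbosity=2)

though the exact call varies per project.)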



So is the way to do it to put all unittests in a preproduction
environment and then remove all lines related to unittest once the SW
is released into production?


How would you know that you do not introduce bugs when you change code 
after testing?


When you install Python on Windows, installing the test/ directory is a 
user option.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Unittest

2016-07-25 Thread Ben Finney
Joaquin Alzola  writes:

> I suppose SW that is going live will not have any trace of the
> unittest module in its code.

Many packages are deployed with their unit test suite. The files don't
occupy much space, don't interfere with the running of the program, and
can be helpful to run the tests in the deployed environment.

> So is the way to do it to put all unittests in a preproduction
> environment and then remove all lines related to unittest once the SW
> is released into production?

I would advise not to bother. Prepare the release of the entire source
needed to build the distribution, and don't worry about somehow
excluding the test suite.

> This email is confidential and may be subject to privilege. If you are not 
> the intended recipient, please do not copy or disclose its content but 
> contact the sender immediately upon receipt.

Please do not use an email system which appends these obnoxious messages
in a public forum.

Either convince the people who impose that false disclaimer onto your
message to stop doing that; or, stop using that system for writing to a
public forum.

-- 
 \   “Are you pondering what I'm pondering?” “Umm, I think so, Don |
  `\  Cerebro, but, umm, why would Sophia Loren do a musical?” |
_o__)   —_Pinky and The Brain_ |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unittest

2016-07-25 Thread Brendan Abel
Generally, all your unittests will be inside a "tests" directory that lives
outside your package directory.  That directory will be excluded when you
build or install your project using your setup.py script.  Take a look at
some popular 3rd party python packages to see how they structure their
projects and setup their setup.py.
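
For example, a common layout looks something like this (a sketch; the
names are illustrative):

myproject/
    setup.py
    mypackage/
        __init__.py
        core.py
    tests/
        test_core.py

# setup.py
from setuptools import setup, find_packages

setup(
    name="mypackage",
    packages=find_packages(exclude=["tests", "tests.*"]),
)

The exclude argument keeps the tests out of the built distribution while
they remain available in the source tree.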

On Mon, Jul 25, 2016 at 9:45 AM, Joaquin Alzola 
wrote:

> Hi Guys
>
> I have a question related to unittest.
>
> I suppose SW that is going live will not have any trace of the unittest
> module in its code.
>
> So is the way to do it to put all unittests in a preproduction
> environment and then remove all lines related to unittest once the SW is
> released into production?
>
> What is the best way of working with unittest?
>
> BR
>
> Joaquin
>
> This email is confidential and may be subject to privilege. If you are not
> the intended recipient, please do not copy or disclose its content but
> contact the sender immediately upon receipt.
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Unittest

2016-07-25 Thread Joaquin Alzola
Hi Guys

I have a question related to unittest.

I suppose SW that is going live will not have any trace of the unittest module 
in its code.

So is the way to do it to put all unittests in a preproduction environment 
and then remove all lines related to unittest once the SW is released into 
production?

What is the best way of working with unittest?

BR

Joaquin

This email is confidential and may be subject to privilege. If you are not the 
intended recipient, please do not copy or disclose its content but contact the 
sender immediately upon receipt.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Skipping test using unittest SkipTest and exit status

2016-05-14 Thread Chris Angelico
On Sun, May 15, 2016 at 12:28 AM, Ganesh Pal  wrote:
> The script shows the below output; this looks fine to me. Do you see any
> problems with this?
>
> gpal-ae9703e-1# python unitest1.py
> ERROR:root:Failed scanning
> E
> ======================================================================
> ERROR: setUpClass (__main__.ScanTest)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "unitest1.py", line 20, in setUpClass
>     raise unittest.TestCase.failureException("class setup failed")
> AssertionError: class setup failed
>
> ----------------------------------------------------------------------
> Ran 0 tests in 0.000s
>
> FAILED (errors=1)
>
> 2. I find assert and raise RuntimeError also fitting my program; please
> suggest what's best from a unittest fixture point of view.
>
>     if not self.scan:
>         logging.error("Failed scanning")
>         assert False, "Class setup failed, skipping test"
>
>     if not self.scan:
>         logging.error("Failed scanning")
>         raise RuntimeError
>
> My overall idea is: if the setup class fails, don't run any of the next
> statements and exit the tests.

There are three quite different things happening in your three examples.

1) When a test *fails*, it means that the test ran successfully, but
didn't give the expected result. (This is sometimes a good thing - for
example, in test-driven development, you first create a test, then run
the test suite and see that it correctly fails, and then implement the
feature or fix, at which point the test suite will pass.) A test
failure happens when you call assertEqual with two unequal values, for
instance.

2) A test *exception* is generally a failure of the test suite itself.
It means that your code ran, but something went wrong, and an
exception was raised. That's what happens when you raise arbitrary
exceptions, including RuntimeError and AssertionError.

3) An *assertion failure* is (conceptually) an error in some
function's preconditions/postconditions, or a complete and utter
muck-up in implementation. It's something that should never actually
happen. It's something where you're legitimately allowed to skip over
every assert statement - in fact, that's exactly what happens when you
run Python in optimized mode.

Every exception has its purpose. With a lot of them, you can find out
that purpose by looking at its docstring:

>>> import builtins
>>> for name in dir(builtins):
...     obj = getattr(builtins, name)
...     if isinstance(obj, type) and issubclass(obj, BaseException):
...         print("%s: %s" % (obj.__name__, obj.__doc__.split("\n")[0]))
...
ArithmeticError: Base class for arithmetic errors.
AssertionError: Assertion failed.
AttributeError: Attribute not found.
BaseException: Common base class for all exceptions
BlockingIOError: I/O operation would block.
BrokenPipeError: Broken pipe.
BufferError: Buffer error.
BytesWarning: Base class for warnings about bytes and buffer related
problems, mostly
ChildProcessError: Child process error.
ConnectionAbortedError: Connection aborted.
ConnectionError: Connection error.
ConnectionRefusedError: Connection refused.
ConnectionResetError: Connection reset.
DeprecationWarning: Base class for warnings about deprecated features.
EOFError: Read beyond end of file.
OSError: Base class for I/O related errors.
Exception: Common base class for all non-exit exceptions.
FileExistsError: File already exists.
FileNotFoundError: File not found.
FloatingPointError: Floating point operation failed.
FutureWarning: Base class for warnings about constructs that will
change semantically
GeneratorExit: Request that a generator exit.
OSError: Base class for I/O related errors.
ImportError: Import can't find module, or can't find name in module.
ImportWarning: Base class for warnings about probable mistakes in module imports
IndentationError: Improper indentation.
IndexError: Sequence index out of range.
InterruptedError: Interrupted by signal.
IsADirectoryError: Operation doesn't work on directories.
KeyError: Mapping key not found.
KeyboardInterrupt: Program interrupted by user.
LookupError: Base class for lookup errors.
MemoryError: Out of memory.
NameError: Name not found globally.
NotADirectoryError: Operation only works on directories.
NotImplementedError: Method or function hasn't been implemented yet.
OSError: Base class for I/O related errors.
OverflowError: Result too large to be represented.
PendingDeprecationWarning: Base class for warnings about features
which will be deprecated
PermissionError: Not enough permissions.
ProcessLookupError: Process not found.
RecursionError: Recursion limit exceeded.
ReferenceError: Weak ref p

Re: Skipping test using unittest SkipTest and exit status

2016-05-14 Thread Ganesh Pal
>
> > Hi Team,
> >
> > Iam on  python 2.7 and Linux . I need inputs on the below  program  ,
>
> "I am" is two words, not one. I hope you wouldn't write "Youare"
> or "Heis" :-) Whenever you write "Iam", I read it as the name "Ian", which
> is very distracting.
>
>
 I am a lazy fellow and you are a smart guy. Just a sentence with few words.
 Take care :)


> > Iam skipping the unittest  from setUpClass in following way  # raise
> > unittest.SkipTest(message)
> >
> > The test are getting skipped but   I have two problem .
> >
> > (1) This script  is in turn read by other  scripts  which considers the
> > test have passed based on the scripts return code , but the test have
> > actually been skipped   ,  How do include an exit status to indicates
> that
> > the test have failed
>
> But the test *hasn't* failed. A skipped test is not a failed test.
>
> If you want the test to count as failed, you must let it fail. You can use
> the fail() method for that.
>
> https://docs.python.org/2/library/unittest.html#unittest.TestCase.fail
>
>

1. How about raising failureException?

I was thinking of using failureException instead of the fail() method. If I
replace my code with raise unittest.TestCase.failureException("class setup
failed"), the script shows the below output; this looks fine to me. Do you
see any problems with this?

gpal-ae9703e-1# python unitest1.py
ERROR:root:Failed scanning
E
======================================================================
ERROR: setUpClass (__main__.ScanTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "unitest1.py", line 20, in setUpClass
    raise unittest.TestCase.failureException("class setup failed")
AssertionError: class setup failed

----------------------------------------------------------------------
Ran 0 tests in 0.000s

FAILED (errors=1)

2. I find assert and raise RuntimeError also fitting my program; please
suggest what's best from a unittest fixture point of view.

    if not self.scan:
        logging.error("Failed scanning")
        assert False, "Class setup failed, skipping test"

    if not self.scan:
        logging.error("Failed scanning")
        raise RuntimeError

My overall idea is: if the setup class fails, don't run any of the next
statements and exit the tests.


Regards,
Ganesh
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Skipping test using unittest SkipTest and exit status

2016-05-13 Thread Steven D'Aprano
On Sat, 14 May 2016 02:23 am, Ganesh Pal wrote:

> Hi Team,
> 
> Iam on  python 2.7 and Linux . I need inputs on the below  program  ,

"I am" is two words, not one. I hope you wouldn't write "Youare"
or "Heis" :-) Whenever you write "Iam", I read it as the name "Ian", which
is very distracting.


> Iam skipping the unittest from setUpClass in the following way:  # raise
> unittest.SkipTest(message)
> 
> The tests are getting skipped, but I have two problems.
> 
> (1) This script is in turn read by other scripts, which consider the
> tests to have passed based on the script's return code, but the tests have
> actually been skipped. How do I include an exit status to indicate that
> the tests have failed?

But the test *hasn't* failed. A skipped test is not a failed test.

If you want the test to count as failed, you must let it fail. You can use
the fail() method for that.

https://docs.python.org/2/library/unittest.html#unittest.TestCase.fail


> (2) Why is the message in the raise statement, i.e. raise
> unittest.SkipTest("Class setup failed skipping test"), not getting
> displayed?


Raising a SkipTest exception is equivalent to calling the skipTest method,
which marks the test as an expected skipped test, not a failure. Since it
is not a failure, it doesn't display the exception.

If you run unittest in verbose mode, the skip message will be displayed. See
the documentation here:

https://docs.python.org/2/library/unittest.html#skipping-tests-and-expected-failures
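
For the exit-status problem, something along these lines should both
report the setup problem and give the calling scripts a nonzero return
code (a sketch of the fail-instead-of-skip approach; the names come from
your example):

import unittest

class ScanTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        scan = False  # stand-in for the real scan step
        if not scan:
            # Reported as an error for the whole class; no tests run.
            raise cls.failureException("class setup failed")

    def test_01_inode_scan(self):
        pass

if __name__ == '__main__':
    # unittest.main() calls sys.exit() with a nonzero status
    # when the run was not successful.
    unittest.main()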




-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Skipping test using unittest SkipTest and exit status

2016-05-13 Thread Ganesh Pal
Hi Team,

Iam on  python 2.7 and Linux . I need inputs on the below  program  ,

Iam skipping the unittest  from setUpClass in following way  # raise
unittest.SkipTest(message)

The tests are getting skipped, but I have two problems.

(1) This script is in turn read by other scripts, which consider the
tests to have passed based on the script's return code, but the tests have
actually been skipped. How do I include an exit status to indicate that
the tests have failed?

(2) Why is the message in the raise statement, i.e. raise
unittest.SkipTest("Class setup failed skipping test"), not getting
displayed?

Also, I am thinking: could we replace raise unittest.SkipTest with an
assert statement?


Sample code:

#!/usr/bin/env python
import sys   # needed by main()
import pdb   # needed by the set_trace() call below
import unittest
import logging

class ScanTest(unittest.TestCase):

@classmethod
def setUpClass(self):
"""
Initial setup before unittest run
"""
pdb.set_trace()
self.scan = False
if not self.scan:
logging.error("Failed scanning ")
raise  unittest.SkipTest("Class setup failed skipping test")

self.data = True
if not self.data:
logging.error("Failed getting data ")
raise unittest.SkipTest("Class setup failed skipping test")
logging.info("SETUP.Done")

def test_01_inode_scanion(self):
"""  test01: inode scanion """
logging.info("### Executing test01:  ###")

@classmethod
def tearDownClass(self):
""" Cleanup all the data & logs """
logging.info("Cleaning all data")

def main():
""" ---MAIN--- """

try:
unittest.main()
except Exception as e:
logging.exception(e)
sys.exit(1)

if __name__ == '__main__':
main()

Sample output
gpal-ae9703e-1# python unitest1.py
ERROR:root:Failed scanning
s
----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK (skipped=1)

Regards,
Ganesh
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Evaluating error strings for 'unittest' assert methods.

2016-04-06 Thread Steven D'Aprano
On Thu, 7 Apr 2016 08:58 am, John Pote wrote:

[...]
> I like each assert...() to output helpful information when things go
> wrong. So I have put in quite complicated code to generate the error
> string the assert() method uses only when things go wrong. The normal
> case, when everything is working, means that all these error strings are
> constructed only to be discarded immediately when the assert() detects
> the test result is correct and no exception is thrown.
> 
> To my mind this seems a waste and adding unnecessary delays in the
> running of the whole test script.

This sounds like premature optimization. I would be very surprised if this
actually makes much difference to the run time of the test suite, unless
the tests are *really* basic and the error strings *impressively* complex.
Your example below:


> So I was wondering if there was some convenient, Pythonic way of calling
> an assert() method so the error string is only evaluated/constructed if
> the assert() fails and throws an exception. For example,
> 
> self.assertEqual( (nBytes,expectedValues), (nBytesRd,valuesRead),
>  """Unexpected reg value.
> Expected values nBytes:%02x (%s)
> """%(nBytes,' '.join( [ "%04x"%v for v in expectedValues] )) +
> "Read values nBytes:%02x (%s)"%(nBytesRd,' '.join( [ "%04x"%v for v
> in valuesRead] ))
>  )

doesn't look too complicated to me. So my *guess* is that you are worried
over a tiny proportion of your actual runtime. Obviously I haven't seen
your code, but thinking about my own test suites, I would be shocked if it
was as high as 1% of the total. But I might be wrong.

You might try running the profiler over your test suite and see if it gives
you any useful information, but I suspect not.

Otherwise -- and I realise that this is a lot of work -- I'd consider making
a copy of your test script, then go through the copy and replace every
single one of the error messages with the same short string, say, "x". Now
run the two versions, repeatedly, and time how long they take.

On Linux, I would do something like this (untested):


time python -m unittest test_mymodule > /dev/null 2>&1


the intent being to ignore the overhead of actual printing any error
messages to the screen, and just seeing the execution time. Run that (say)
ten times, and pick the *smallest* runtime. Now do it again with the
modified tests:

time python -m unittest test_mymodule_without_messages > /dev/null 2>&1


My expectation is that if your unit tests do anything like a significant
amount of processing, the difference caused by calculating a few extra
error messages will be insignificant.



But, having said that, what version of Python are you using? Because the
unittest module in Python 3 is *significantly* enhanced and prints much
more detailed error messages without any effort on your part at all.

https://docs.python.org/3/library/unittest.html

For example, starting in Python 3.1, assertEqual() on two strings will
display a multi-line diff of the two strings if the test fails. Likewise,
there are type-specific tests for lists, dicts, etc. which are
automatically called by assertEqual, and the default error message contains
a lot more information than the Python 2 version does.


If you're stuck with Python 2, I *think* that the new improved unittest
module is backported as unittest2:


https://pypi.python.org/pypi/unittest2



-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Evaluating error strings for 'unittest' assert methods.

2016-04-06 Thread Ethan Furman

On 04/06/2016 03:58 PM, John Pote wrote:

I have been writing a very useful test script using the standard Python
'unittest' module. This works fine and has been a huge help in keeping
the system I've been writing fully working even when I make changes that
could break many features of the system. eg major rewrite of the
interrupt routines. The target system in question is written in 'C' and
runs on a 50p microcontroller at the end of a serial line.

I like each assert...() to output helpful information when things go
wrong. So I have put in quite complicated code to generate the error
string the assert() method uses only when things go wrong. The normal
case, when everything is working, means that all these error strings are
constructed only to be discarded immediately when the assert() detects
the test result is correct and no exception is thrown.

To my mind this seems a waste and adding unnecessary delays in the
running of the whole test script.

So I was wondering if there was some convenient, Pythonic way of calling
an assert() method so the error string is only evaluated/constructed if
the assert() fails and throws an exception. For example,

self.assertEqual( (nBytes,expectedValues), (nBytesRd,valuesRead),
 """Unexpected reg value.
Expected values nBytes:%02x (%s)
"""%(nBytes,' '.join( [ "%04x"%v for v in expectedValues] )) +
"Read values nBytes:%02x (%s)"%(nBytesRd,' '.join( [ "%04x"%v for v
in valuesRead] ))
 )


My first thought is don't sweat it, it probably doesn't matter.  You 
could test this by timing both before and after removing all the custom 
error messages.


If you do want to sweat it -- maybe a custom object that replaces the 
__str__ and __repr__ methods with the code that creates the message?  No 
idea if it would work, but it would look something like:


class DelayedStr(object):  # pesky 2/3 code ;)
    def __init__(self, func):
        self.func = func
    def __str__(self):
        return self.func()
    __repr__ = __str__

and your assert would look like:


self.assertEqual(
    (nBytes,expectedValues),
    (nBytesRd,valuesRead),
    DelayedStr(lambda : """Unexpected reg value.
Expected values nBytes:%02x (%s)""" %
    (nBytes,' '.join( [ "%04x"%v for v in expectedValues] )) +
    "Read values nBytes:%02x (%s)"%(nBytesRd,
     ' '.join( [ "%04x"%v for v in valuesRead] ))))

Certainly no more readable, but /maybe/ more performant.  (Assuming it 
even works.)


--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Evaluating error strings for 'unittest' assert methods.

2016-04-06 Thread John Pote
I have been writing a very useful test script using the standard Python 
'unittest' module. This works fine and has been a huge help in keeping 
the system I've been writing fully working even when I make changes that 
could break many features of the system. eg major rewrite of the 
interrupt routines. The target system in question is written in 'C' and 
runs on a 50p microcontroller at the end of a serial line.


I like each assert...() to output helpful information when things go 
wrong. So I have put in quite complicated code to generate the error 
string the assert() method uses only when things go wrong. The normal 
case, when everything is working, means that all these error strings are 
constructed only to be discarded immediately when the assert() detects 
the test result is correct and no exception is thrown.


To my mind this seems a waste and adding unnecessary delays in the 
running of the whole test script.


So I was wondering if there was some convenient, Pythonic way of calling 
an assert() method so the error string is only evaluated/constructed if 
the assert() fails and throws an exception. For example,


self.assertEqual( (nBytes,expectedValues), (nBytesRd,valuesRead),
"""Unexpected reg value.
Expected values nBytes:%02x (%s)
"""%(nBytes,' '.join( [ "%04x"%v for v in expectedValues] )) +
"Read values nBytes:%02x (%s)"%(nBytesRd,' '.join( [ "%04x"%v for v 
in valuesRead] ))

)

Ideas invited, thanks everyone,
John
--
https://mail.python.org/mailman/listinfo/python-list


Re: manually build a unittest/doctest object.

2015-12-08 Thread Peter Otten
Vincent Davis wrote:

> On Tue, Dec 8, 2015 at 2:06 AM, Peter Otten <__pete...@web.de> wrote:
> 
>> >>> import doctest
>> >>> example = doctest.Example(
>> ... "print('hello world')\n",
>> ... want="hello world\n")
>> >>> test = doctest.DocTest([example], {}, None, None, None, None)
>> >>> runner = doctest.DocTestRunner(verbose=True)
>> >>> runner.run(test)
>> Trying:
>> print('hello world')
>> Expecting:
>> hello world
>> ok
>> TestResults(failed=0, attempted=1)
>>
> 
> And now, how to do a multi-line statement?

doctest doesn't do tests with multiple *statements.* A docstring like

"""
>>> x = 42
>>> print(x)
42
"""

is broken into two examples:

>>> import doctest
>>> p = doctest.DocTestParser()
>>> doctest.Example.__repr__ = lambda self: "Example(source={0.source!r}, want={0.want!r})".format(self)
>>> p.get_examples("""
... >>> x = 42
... >>> print(x)
... 42
... """)
[Example(source='x = 42\n', want=''), Example(source='print(x)\n', want='42\n')]
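
Those two examples can then be run as one DocTest with the same API as
above; they share the globs dict, so the assignment is visible to the
print:

>>> examples = [
...     doctest.Example("x = 42\n", want=""),
...     doctest.Example("print(x)\n", want="42\n"),
... ]
>>> test = doctest.DocTest(examples, {}, None, None, None, None)
>>> doctest.DocTestRunner().run(test)
TestResults(failed=0, attempted=2)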


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: manually build a unittest/doctest object.

2015-12-08 Thread Vincent Davis
On Tue, Dec 8, 2015 at 7:30 AM, Laura Creighton  wrote:

> >--
> >https://mail.python.org/mailman/listinfo/python-list
>
> Check out this:
> https://pypi.python.org/pypi/pytest-ipynb
>

Thanks Laura, I think I read the description as saying I could run unittests
on source code from a Jupyter notebook. Reading closer, this seems like it
will work.
Not that I mind learning more about how doctests work ;-)


Vincent Davis
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: manually build a unittest/doctest object.

2015-12-08 Thread Vincent Davis
On Tue, Dec 8, 2015 at 2:06 AM, Peter Otten <__pete...@web.de> wrote:

> >>> import doctest
> >>> example = doctest.Example(
> ... "print('hello world')\n",
> ... want="hello world\n")
> >>> test = doctest.DocTest([example], {}, None, None, None, None)
> >>> runner = doctest.DocTestRunner(verbose=True)
> >>> runner.run(test)
> Trying:
> print('hello world')
> Expecting:
> hello world
> ok
> TestResults(failed=0, attempted=1)
>

And now, how to do a multi-line statement?

>>> import doctest
>>> example = doctest.Example("print('hello')\nprint('world')", want="hello\nworld")
>>> test = doctest.DocTest([example], {}, None, None, None, None)
>>> runner = doctest.DocTestRunner(verbose=True)
>>> runner.run(test)

Trying:
print('hello')
print('world')
Expecting:
hello
world
**********************************************************************
Line 1, in None
Failed example:
    print('hello')
    print('world')
Exception raised:
    Traceback (most recent call last):
      File "/Users/vincentdavis/anaconda/envs/py35/lib/python3.5/doctest.py", line 1320, in __run
        compileflags, 1), test.globs)
      File "<doctest None[0]>", line 1
        print('hello')
                     ^
    SyntaxError: multiple statements found while compiling a single statement



Vincent Davis
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: manually build a unittest/doctest object.

2015-12-08 Thread Laura Creighton
In a message of Tue, 08 Dec 2015 07:04:39 -0700, Vincent Davis writes:
>On Tue, Dec 8, 2015 at 2:06 AM, Peter Otten <__pete...@web.de> wrote:
>
>> But why would you want to do that?
>
>
>Thanks Peter, I want to do that because I want to test jupyter notebooks.
>​The notebook is in JSON and I can get the source and result out but it was
>unclear to me how to stick this into a test. doctest seemed the simplest
>but maybe there is a better way.
>
>I also tried something like:
>assert exec("""print('hello word')""") == 'hello word'
>
>
>Vincent Davis
>720-301-3003
>-- 
>https://mail.python.org/mailman/listinfo/python-list

Check out this:
https://pypi.python.org/pypi/pytest-ipynb

Laura 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: manually build a unittest/doctest object.

2015-12-08 Thread Chris Angelico
On Wed, Dec 9, 2015 at 1:04 AM, Vincent Davis  wrote:
> I also tried something like:
> assert exec("""print('hello word')""") == 'hello word'

I'm pretty sure exec() always returns None. If you want this to work,
you would need to capture sys.stdout into a string:

import io
import contextlib
output = io.StringIO()
with contextlib.redirect_stdout(output):
exec("""print("Hello, world!")""")
assert output.getvalue() == "Hello, world!\n" # don't forget the \n

You could wrap this up into a function, if you like. Then your example
would work (modulo the \n):

def capture_output(code):
"""Execute 'code' and return its stdout"""
output = io.StringIO()
with contextlib.redirect_stdout(output):
exec(code)
return output.getvalue()

assert capture_output("""print('hello word')""") == 'hello word\n'
# no error

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: manually build a unittest/doctest object.

2015-12-08 Thread Vincent Davis
On Tue, Dec 8, 2015 at 2:06 AM, Peter Otten <__pete...@web.de> wrote:

> But why would you want to do that?


Thanks Peter, I want to do that because I want to test jupyter notebooks.
​The notebook is in JSON and I can get the source and result out but it was
unclear to me how to stick this into a test. doctest seemed the simplest
but maybe there is a better way.

I also tried something like:
assert exec("""print('hello word')""") == 'hello word'


Vincent Davis
720-301-3003
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: manually build a unittest/doctest object.

2015-12-08 Thread Peter Otten
Vincent Davis wrote:

> If I have a string that is python code, for example
> mycode = "print('hello world')"
> myresult = "hello world"
> How can a "manually" build a unittest (doctest) and test I get myresult
> 
> I have attempted to build a doctest but that is not working.
> e = doctest.Example(source="print('hello world')/n", want="hello world\n")
> t = doctest.DocTestRunner()
> t.run(e)

There seems to be one intermediate step missing:

example --> doctest --> runner

>>> import doctest
>>> example = doctest.Example(
... "print('hello world')\n",
... want="hello world\n")
>>> test = doctest.DocTest([example], {}, None, None, None, None)
>>> runner = doctest.DocTestRunner(verbose=True)
>>> runner.run(test)
Trying:
print('hello world')
Expecting:
hello world
ok
TestResults(failed=0, attempted=1)

But why would you want to do that?

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: manually build a unittest/doctest object.

2015-12-07 Thread Steven D'Aprano
On Tuesday 08 December 2015 14:30, Vincent Davis wrote:

> If I have a string that is python code, for example
> mycode = "print('hello world')"
> myresult = "hello world"
> How can a "manually" build a unittest (doctest) and test I get myresult

Not easily. Effectively, you would have to re-invent the doctest module and 
re-engineer it to accept a completely different format.

But if you are willing to write your tests in doctest format, this might 
help you:



import doctest

def rundoctests(text, name='', globs=None, verbose=None,
                report=True, optionflags=0, extraglobs=None,
                raise_on_error=False,
                quiet=False,):
# Assemble the globals.
if globs is None:
globs = globals()
globs = globs.copy()
if extraglobs is not None:
globs.update(extraglobs)
if '__name__' not in globs:
globs['__name__'] = '__main__'
# Parse the text looking for doc tests.
parser = doctest.DocTestParser()
test = parser.get_doctest(text, globs, name, name, 0)
# Run the tests.
if raise_on_error:
runner = doctest.DebugRunner(
verbose=verbose, optionflags=optionflags)
else:
runner = doctest.DocTestRunner(
verbose=verbose, optionflags=optionflags)
if quiet:
runner.run(test, out=lambda s: None)
else:
runner.run(test)
if report:
runner.summarize()
# Return a (named, if possible) tuple (failed, attempted).
a, b = runner.failures, runner.tries
try:
TestResults = doctest.TestResults
except AttributeError:
return (a, b)
return TestResults(a, b)



Then call rundoctests(text) to run any doc tests in text. By default, if 
there are no errors, it prints nothing. If there are errors, it prints the 
failing tests. Either way, it returns a tuple

(number of failures, number of tests run)

Examples in use:

py> good_code = """
... >>> import math
... >>> print "Hello World!"
... Hello World!
... >>> math.sqrt(100)
... 10.0
... 
... """
py> rundoctests(good_code)
TestResults(failed=0, attempted=3)



py> bad_code = """
... >>> print 10
... 11
... """
py> rundoctests(bad_code)
**********************************************************************
File "", line 2, in 
Failed example:
print 10
Expected:
11
Got:
10
**********************************************************************
1 items had failures:
   1 of   1 in 
***Test Failed*** 1 failures.
TestResults(failed=1, attempted=1)




-- 
https://mail.python.org/mailman/listinfo/python-list


manually build a unittest/doctest object.

2015-12-07 Thread Vincent Davis
If I have a string that is python code, for example
mycode = "print('hello world')"
myresult = "hello world"
How can a "manually" build a unittest (doctest) and test I get myresult

I have attempted to build a doctest but that is not working.
e = doctest.Example(source="print('hello world')/n", want="hello world\n")
t = doctest.DocTestRunner()
t.run(e)

Thanks
Vincent Davis
-- 
https://mail.python.org/mailman/listinfo/python-list


Logging with unittest

2014-11-27 Thread Jared Windover
I've been working with unittest for a while and just started using logging, and 
my question is: is it possible to use logging to display information about the 
tests you're running, but still have it be compatible with the --buffer option 
so that you only see it if a test fails? It seems like it would be a useful 
(and more standard than print statements) way to include stuff like parameter 
values near where a test may fail. 

What I currently have happening is that the logging output is shown regardless 
of whether or not the -b (or --buffer) flag is present. Any thoughts?
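
One approach that might work (a sketch, not tested against --buffer): a
handler that resolves sys.stdout at emit time, so it writes to whatever
stream unittest has currently swapped in:

import logging
import sys

class BufferAwareHandler(logging.StreamHandler):
    """Write to the *current* sys.stdout on every emit, so unittest's
    --buffer option (which replaces sys.stdout per test) captures the
    log output and only shows it for failing tests."""
    @property
    def stream(self):
        return sys.stdout

    @stream.setter
    def stream(self, value):
        pass  # ignore the stream StreamHandler.__init__ tries to set

logging.getLogger().addHandler(BufferAwareHandler())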
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Syntax error in python unittest script

2014-09-24 Thread Terry Reedy

On 9/24/2014 3:33 PM, Milson Munakami wrote:


I am learning to use unittest with python


[way too long example]


File "TestTest.py", line 44
 def cleanup(self, success):
   ^
SyntaxError: invalid syntax


A common recommendation is to find the *minimal* example that exhibits 
the problem.  (Start with commenting out everything after the code in 
the traceback, then at least half the code before it. Etc.)  That means 
reducing until removing or commenting out any single statement makes the 
problem disappear.  In compound statements, you may need to insert 'pass' 
to not create a new problem.


If you had done that, you would have reduced your code to something like

class testFirewall( unittest.TestCase ):
    def tearDown(self):
        if self.reportStatus_:
            self.log.info("=== Test %s completed normally (%d sec)",
                          self.name_, duration

    def cleanup(self, success):
        sys.excepthook = sys.__excepthook__

At that point, replacing "self.log.info(..." with "pass" would have made 
it clear that the problem was with the replaced statement.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Syntax error in python unittest script

2014-09-24 Thread Mark Lawrence

On 24/09/2014 21:06, Milson Munakami wrote:

[snipped all the usual garbage]

Would you please access this list via 
https://mail.python.org/mailman/listinfo/python-list or read and action 
this https://wiki.python.org/moin/GoogleGroupsPython to prevent us 
seeing double line spacing and single line paragraphs, thanks.


--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


Re: Syntax error in python unittest script

2014-09-24 Thread Milson Munakami
On Wednesday, September 24, 2014 1:33:35 PM UTC-6, Milson Munakami wrote:
> [snipped -- full quote of the original post, which appears below]

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Syntax error in python unittest script

2014-09-24 Thread Peter Otten
Milson Munakami wrote:

> if self.reportStatus_:
> self.log.info("=== Test %s completed normally (%d 
> sec)", self.name_, duration

The info() call is missing the closing parenthesis
 
> def cleanup(self, success):
> sys.excepthook = sys.__excepthook__
> 

> It gives me errror   
> 
> File "TestTest.py", line 44
> def cleanup(self, success):
>   ^
> SyntaxError: invalid syntax
 
To find a syntax error it is often a good idea to look into the code that 
precedes the line that the compiler complains about.
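
For reference, the corrected call, with the closing parenthesis added:

self.log.info("=== Test %s completed normally (%d sec)",
              self.name_, duration)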

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Syntax error in python unittest script

2014-09-24 Thread Emile van Sebille

On 9/24/2014 12:33 PM, Milson Munakami wrote:

def tearDown(self):
    if self.failed:
        return
    duration = time.time() - self.startTime_
    self.cleanup(True)
    if self.reportStatus_:
        self.log.info("=== Test %s completed normally (%d sec)",
                      self.name_, duration



The method above doesn't end cleanly (you need to add a close paren to 
the last line at least)


Emile



def cleanup(self, success):



--
https://mail.python.org/mailman/listinfo/python-list


Syntax error in python unittest script

2014-09-24 Thread Milson Munakami
Hi,

I am learning to use unittest with python and walkthrough with this example
http://agiletesting.blogspot.com/2005/01/python-unit-testing-part-1-unittest.html

so my test script is like this:
import json
import urllib
#import time
#from util import *
import httplib
#import sys
#from scapy.all import *
import unittest

import os, sys, socket, struct, select, time 
from threading import Thread

import logging
import traceback



class testFirewall( unittest.TestCase ):
    def setUp(self):
        """
        set up data used in the tests.

        setUp is called before each test function execution.
        """
        self.controllerIp="127.0.0.1"
        self.switches = ["00:00:00:00:00:00:00:01"]
        self.startTime_ = time.time()
        self.failed = False
        self.reportStatus_ = True
        self.name_ = "Firewall"
        self.log = logging.getLogger("unittest")

    def tearDown(self):
        if self.failed:
            return
        duration = time.time() - self.startTime_
        self.cleanup(True)
        if self.reportStatus_:
            self.log.info("=== Test %s completed normally (%d sec)",
                          self.name_, duration

    def cleanup(self, success):
        sys.excepthook = sys.__excepthook__
        try:
            return
        except NameError:
            self.log.error("Exception hit during cleanup, bypassing:\n%s\n\n"
                           % traceback.format_exc())
            pass
        else:
            fail("Expected a NameError")

    def testStatusFirewall(self):
        command = "http://%s:8080/wm/firewall/module/status/json" % self.controllerIp
        x = urllib.urlopen(command).read()
        parsedResult = json.loads(x)
        return parsedResult['result']


def suite():
    suite = unittest.TestSuite()
    suite.addTest(unittest.makeSuite(testFirewall))
    return suite

if __name__ == '__main__':
    logging.basicConfig(filename='/tmp/testfirewall.log', level=logging.DEBUG,
                        format='%(asctime)s %(levelname)s %(name)s %(message)s')
    logger=logging.getLogger(__name__)

    suiteFew = unittest.TestSuite()
    suiteFew.addTest(testFirewall("testStatusFirewall"))
    unittest.TextTestRunner(verbosity=2).run(suiteFew)

    #unittest.main()
    #unittest.TextTestRunner(verbosity=2).run(suite())


While running it in the console using python TestTest.py,

it gives me an error:

File "TestTest.py", line 44
    def cleanup(self, success):
      ^
SyntaxError: invalid syntax

I guess it is due to the time module, but as you can see I already have import time.

What can be the reason? If I comment out the lines containing time, it works.

But I need to keep track of duration.

Please help and suggest.

Thanks,
Milson

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-04-30 Thread Ethan Furman

On 03/11/2014 01:58 PM, Ethan Furman wrote:


So I finally got enough data and enough of an understanding to write some unit 
tests for my code.



The weird behavior I'm getting:

   - when a test fails, I get the E or F, but no summary at the end
 (if the failure occurs in setUpClass before my tested routines
 are actually called, I get the summary; if I run a test method
 individually I get the summary)

   - I have two classes, but only one is being exercised

   - occasionally, one of my gvim windows is unceremoniously killed
(survived only by its swap file)

I'm running the tests under sudo as the routines expect to be run that way.

Anybody have any ideas?


For posterity's sake:

I added a .close() method to the class being tested which destroys its big data structures; then I added a tearDownClass 
method to the unittest.  That seems to have done the trick with getting the tests to /all/ run, and my apps don't 
suddenly disappear.  :)


--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Terry Reedy

On 3/12/2014 11:32 AM, Ethan Furman wrote:


I strongly suspect it's memory.  When I originally wrote the code I
tried to include six months worth of EoM data, but had to back it down
to three as my process kept mysteriously dying at four or more months.
There must be waaay too much stuff being kept alive by the stack
traces of the failed tests.


There is an issue or two about unittest not releasing memory. Also, 
modules are not cleared from sys.modules, so anything accessible from 
global scope is kept around.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Ethan Furman

On 03/12/2014 04:38 PM, Steven D'Aprano wrote:


[snip lots of good advice for unit testing]


I was just removing the Personally Identifiable Information.  Each test is pulling a payment from a batch of payments, 
so the first couple asserts are simply making sure I have the payment I think I have, then I run the routine that is 
supposed to match that payment with a bunch of invoices, and then I test to make sure I got back the invoices that I 
have manually verified are the correct ones to be returned.


There are many different tests because there are many different paths through the code, depending on exactly which 
combination of insanities the bank, the customer, and the company choose to inflict at that moment.  ;)


--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Ethan Furman

On 03/12/2014 04:47 PM, Steven D'Aprano wrote:


top -Mm -d 0.5


Cool, thanks!

--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Steven D'Aprano
On Wed, 12 Mar 2014 08:32:29 -0700, Ethan Furman wrote:

>> Some systems have an oom (Out Of Memory) process killer, which nukes
>> (semi-random) process when the system exhausts memory.  Is it possible
>> this is happening?  If so, you should see some log message in one of
>> your system logs.
> 
> That would explain why my editor windows were being killed.


Try opening a second console tab and running top in it. It will show the 
amount of memory being used. Then run the tests in the first, jump back 
to top, and watch to see if memory use goes through the roof:

top -Mm -d 0.5

will sort by memory use, display memory in more sensible human-readable 
units instead of bytes, and update the display every 0.5 second. You can 
then hit the "i" key to toggle display of idle processes and only show 
those that are actually doing something (which presumably will include 
Python running the tests).

This at least will allow you to see whether or not memory is the concern.




-- 
Steven D'Aprano
http://import-that.dreamwidth.org/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Steven D'Aprano
On Tue, 11 Mar 2014 13:58:17 -0700, Ethan Furman wrote:

> class Test_wfbrp_20140225(TestCase):
> 
>  @classmethod
>  def setUpClass(cls):
>  cls.pp = wfbrp.PaymentProcessor(
>  '.../lockbox_file',
>  '.../aging_file',
>  [
>  Path('month_end_1'),
>  Path('month_end_2'),
>  Path('month_end_3'),
>  ],
>  )

This has nothing to do with your actual problem, which appears to be the 
Linux(?) OOM killer reaping your applications, just some general 
observations on your test.


>  def test_xxx_1(self):

Having trouble thinking up descriptive names for the test? That's a sign 
that the test might be doing too much. Each test should check one self-
contained thing. That may or may not be a single call to a unittest 
assert* method, but it should be something you can describe in a few 
words:

"it's a regression test for bug 23"
"test that the database isn't on fire"
"invoices should have a valid debtor"
"the foo report ought to report all the foos"
"...and nothing but the foos."

This hints -- it's just a hint, mind you, since I lack all specific 
knowledge of your application -- that the following "affirm" tests should 
be pulled out into separate tests.

>  p = self.pp.lockbox_payments[0]
>  # affirm we have what we're expecting 
>  self.assertEqual(
>  (p.payer, p.ck_num, p.credit),
>  ('a customer', '010101', 1),
>  )
>  self.assertEqual(p.invoices.keys(), ['XXX'])
>  self.assertEqual(p.invoices.values()[0].amount, 1)

which would then leave this to be the Actual Thing Being Tested for this 
test, which then becomes test_no_missing_invoices rather than test_xxx_1.

>  # now make sure we get back what we're expecting 
>  np, b = self.pp._match_invoices(p)
>  missing = []
>  for inv_num in ('123456', '789012', '345678'):
>  if inv_num not in b:
>  missing.append(inv_num)
>  if missing:
>  raise ValueError('invoices %r missing from batch' %
>  missing)

Raising an exception directly inside the test function should only occur 
if the test function is buggy. As Terry has already suggested, this 
probably communicates your intention much better:

self.assertEqual(missing, [])



-- 
Steven D'Aprano
http://import-that.dreamwidth.org/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Steven D'Aprano
On Wed, 12 Mar 2014 08:32:29 -0700, Ethan Furman wrote:

> There must
> be waaay too much stuff being kept alive by the stack traces of the
> failed tests.


I believe that unittest does keep stack traces alive until the process 
ends. I thought that there was a recent bug report for it, but the only 
one I can find was apparently fixed more than a decade ago:

http://bugs.python.org/issue451309





-- 
Steven D'Aprano
http://import-that.dreamwidth.org/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Roy Smith
In article ,
 Ethan Furman  wrote:

> > Alternatively, maybe something inside your process is just calling
> > sys.exit(), or even os._exit().  You'll see the exit() system call in
> > the strace output.
> 
> My bare try/except would have caught that.

A bare except would catch sys.exit(), but not os._exit().  Well, no 
that's not actually true.  Calling os._exit() will raise:

TypeError: _exit() takes exactly 1 argument (0 given)

but it won't catch os._exit(0) :-)

> > what happens if you reduce that to:
> >
> >   def test_xxx_1(self):
> >self.fail()
> 
> I only get the strange behavior if more than two (or maybe three) of my test 
> cases fail.  Less than that magic number, 
> and everything works just fine.  It doesn't matter which two or three, 
> either.

OK, well, assuming this is a memory problem, what if you do:

  def test_xxx_1(self):
      l = []
      while True:
          l.append(0)

That should eventually run out of memory.  Does that get you the same 
behavior in a single test case?  If so, that at least would be evidence 
supporting the memory exhaustion theory.

> I strongly suspect it's memory.  When I originally wrote the code I tried to 
> include six months worth of EoM data, but 
> had to back it down to three as my process kept mysteriously dying at four or 
> more months.  There must be waaay too 
> much stuff being kept alive by the stack traces of the failed tests.

One thing you might try is running your tests under nose 
(http://nose.readthedocs.org/).  Nose knows how to run unittest tests, 
and one of the gazillion options it has is to run each test case in an 
isolated process:

  --process-restartworker
If set, will restart each worker process once their
tests are done, this helps control memory leaks from
killing the system. [NOSE_PROCESS_RESTARTWORKER]

that might be what you need.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Ethan Furman

On 03/12/2014 06:44 AM, Roy Smith wrote:

In article ,
  Ethan Furman  wrote:


I've tried it both ways, and both ways my process is being killed, presumably
by the O/S.


What evidence do you have the OS is killing the process?


I put a bare try/except around the call to unittest.main, with a print 
statement in the except, and nothing ever prints.



Some systems have an oom (Out Of Memory) process killer, which nukes
(semi-random) process when the system exhausts memory.  Is it possible
this is happening?  If so, you should see some log message in one of
your system logs.


That would explain why my editor windows were being killed.



You didn't mention (or maybe I missed it) which OS you're using.


Ubuntu 13 something or other.


I'm
assuming you've got some kind of system call tracer (strace, truss,
dtrace, etc).


Sadly, I have no experience with those programs yet, and until now didn't even 
know they existed.


Try running your tests under that.  If something is
sending your process a kill signal, you'll see it:

[gazillions of lines elided]
write(1, ">>> ", 4>>> ) = 4
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
select(1, [0], NULL, NULL, NULL)= ? ERESTARTNOHAND (To be
restarted)
--- SIGTERM (Terminated) @ 0 (0) ---
+++ killed by SIGTERM +++

Alternatively, maybe something inside your process is just calling
sys.exit(), or even os._exit().  You'll see the exit() system call in
the strace output.


My bare try/except would have caught that.



And, of course, the standard suggestion to reduce this down to the
minimum test case.  You posted:

  def test_xxx_1(self):
  p = self.pp.lockbox_payments[0]
  # affirm we have what we're expecting
  self.assertEqual(
  (p.payer, p.ck_num, p.credit),
  ('a customer', '010101', 1),
  )
  self.assertEqual(p.invoices.keys(), ['XXX'])
  self.assertEqual(p.invoices.values()[0].amount, 1)
  # now make sure we get back what we're expecting
  np, b = self.pp._match_invoices(p)
  missing = []
  for inv_num in ('123456', '789012', '345678'):
  if inv_num not in b:
  missing.append(inv_num)
  if missing:
  raise ValueError('invoices %r missing from batch' % missing)

what happens if you reduce that to:

  def test_xxx_1(self):
   self.fail()


I only get the strange behavior if more than two (or maybe three) of my test cases fail.  Less than that magic number, 
and everything works just fine.  It doesn't matter which two or three, either.




do you still get this strange behavior?  What if you get rid of your
setUpClass()?  Keep hacking away at the test suite until you get down to
a single line of code which, if run, exhibits the behavior, and if
commented out, does not.  At that point, you'll have a clue what's
causing this.  If you're lucky :-)


I strongly suspect it's memory.  When I originally wrote the code I tried to include six months worth of EoM data, but 
had to back it down to three as my process kept mysteriously dying at four or more months.  There must be waaay too 
much stuff being kept alive by the stack traces of the failed tests.


Thanks for your help!

--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Roy Smith
In article ,
 Ethan Furman  wrote:

> I've tried it both ways, and both ways my process is being killed, presumably 
> by the O/S.

What evidence do you have the OS is killing the process?

Some systems have an oom (Out Of Memory) process killer, which nukes 
(semi-random) process when the system exhausts memory.  Is it possible 
this is happening?  If so, you should see some log message in one of 
your system logs.

You didn't mention (or maybe I missed it) which OS you're using.  I'm 
assuming you've got some kind of system call tracer (strace, truss, 
dtrace, etc).  Try running your tests under that.  If something is 
sending your process a kill signal, you'll see it:

[gazillions of lines elided]
write(1, ">>> ", 4>>> ) = 4
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
select(1, [0], NULL, NULL, NULL)= ? ERESTARTNOHAND (To be 
restarted)
--- SIGTERM (Terminated) @ 0 (0) ---
+++ killed by SIGTERM +++

Alternatively, maybe something inside your process is just calling 
sys.exit(), or even os._exit().  You'll see the exit() system call in 
the strace output.

And, of course, the standard suggestion to reduce this down to the 
minimum test case.  You posted:

 def test_xxx_1(self):
 p = self.pp.lockbox_payments[0]
 # affirm we have what we're expecting
 self.assertEqual(
 (p.payer, p.ck_num, p.credit),
 ('a customer', '010101', 1),
 )
 self.assertEqual(p.invoices.keys(), ['XXX'])
 self.assertEqual(p.invoices.values()[0].amount, 1)
 # now make sure we get back what we're expecting
 np, b = self.pp._match_invoices(p)
 missing = []
 for inv_num in ('123456', '789012', '345678'):
 if inv_num not in b:
 missing.append(inv_num)
 if missing:
 raise ValueError('invoices %r missing from batch' % missing)

what happens if you reduce that to:

 def test_xxx_1(self):
  self.fail()

do you still get this strange behavior?  What if you get rid of your 
setUpClass()?  Keep hacking away at the test suite until you get down to 
a single line of code which, if run, exhibits the behavior, and if 
commented out, does not.  At that point, you'll have a clue what's 
causing this.  If you're lucky :-)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-12 Thread Ethan Furman

On 03/11/2014 08:36 PM, Terry Reedy wrote:

On 3/11/2014 6:13 PM, John Gordon wrote:

In  Ethan Furman 
 writes:


  if missing:
  raise ValueError('invoices %r missing from batch' % missing)


It's been a while since I wrote test cases, but I recall using the assert*
methods (assertEqual, assertTrue, etc.) instead of raising exceptions.
Perhaps that's the issue?


Yes. I believe the methods all raise AssertionError on failure, and the test 
methods are wrapped with try:.. except
AssertionError as err:

if missing:
  raise ValueError('invoices %r missing from batch' % missing)

should be "assertEqual(missing, [], 'invoices missing from batch')" and if that 
fails, the non-empty list is printed
along with the message.


I've tried it both ways, and both ways my process is being killed, presumably 
by the O/S.

I will say it's an extra motivating factor to have few failing tests -- if more than two of my tests fail, all I see are 
'.'s, 'E's, and 'F's, with no clues as to which test failed nor why.  Thank goodness for '-v' and being able to specify 
which method of which class to run!


--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-11 Thread Terry Reedy

On 3/11/2014 6:13 PM, John Gordon wrote:

In  Ethan Furman 
 writes:


  if missing:
  raise ValueError('invoices %r missing from batch' % missing)


It's been a while since I wrote test cases, but I recall using the assert*
methods (assertEqual, assertTrue, etc.) instead of raising exceptions.
Perhaps that's the issue?


Yes. I believe the methods all raise AssertionError on failure, and the 
test methods are wrapped with try:.. except AssertionError as err:


   if missing:
 raise ValueError('invoices %r missing from batch' % missing)

should be "assertEqual(missing, [], 'invoices missing from batch')" and 
if that fails, the non-empty list is printed along with the message.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-11 Thread Ethan Furman

On 03/11/2014 03:13 PM, John Gordon wrote:

Ethan Furman writes:


  if missing:
  raise ValueError('invoices %r missing from batch' % missing)


It's been a while since I wrote test cases, but I recall using the assert*
methods (assertEqual, assertTrue, etc.) instead of raising exceptions.
Perhaps that's the issue?


Drat.  Tried it, same issue.  O/S kills it.  :(

--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-11 Thread Ethan Furman

On 03/11/2014 01:58 PM, Ethan Furman wrote:


Anybody have any ideas?


I suspect the O/S is killing the process.  If I manually select the other class to run (which has all successful tests, 
so no traceback baggage), it runs normally.


--
~Ethan~
--
https://mail.python.org/mailman/listinfo/python-list


Re: unittest weirdness

2014-03-11 Thread John Gordon
In  Ethan Furman 
 writes:

>  if missing:
>  raise ValueError('invoices %r missing from batch' % missing)

It's been a while since I wrote test cases, but I recall using the assert*
methods (assertEqual, assertTrue, etc.) instead of raising exceptions.
Perhaps that's the issue?

-- 
John Gordon Imagine what it must be like for a real medical doctor to
gor...@panix.comwatch 'House', or a real serial killer to watch 'Dexter'.

-- 
https://mail.python.org/mailman/listinfo/python-list


unittest weirdness

2014-03-11 Thread Ethan Furman

So I finally got enough data and enough of an understanding to write some unit 
tests for my code.

These aren't the first unit tests I've written, but the behavior I'm getting is 
baffling.

I'm using 2.7.4 and I'm testing some routines which attempt to interpret data from a flat file and create a new flat 
file for later programmatic consumption.


The weird behavior I'm getting:

  - when a test fails, I get the E or F, but no summary at the end
(if the failure occurs in setUpClass before my tested routines
are actually called, I get the summary; if I run a test method
individually I get the summary)

  - I have two classes, but only one is being exercised

  - occasionally, one of my gvim windows is unceremoniously killed
   (survived only by its swap file)

I'm running the tests under sudo as the routines expect to be run that way.

Anybody have any ideas?

--
~Ethan~

--snippet of code--

from VSS.path import Path
from unittest import TestCase, main as Run
import wfbrp

class Test_wfbrp_20140225(TestCase):

@classmethod
def setUpClass(cls):
cls.pp = wfbrp.PaymentProcessor(
'.../lockbox_file',
'.../aging_file',
[
Path('month_end_1'),
Path('month_end_2'),
Path('month_end_3'),
],
)

def test_xxx_1(self):
p = self.pp.lockbox_payments[0]
# affirm we have what we're expecting
self.assertEqual(
(p.payer, p.ck_num, p.credit),
('a customer', '010101', 1),
)
self.assertEqual(p.invoices.keys(), ['XXX'])
self.assertEqual(p.invoices.values()[0].amount, 1)
# now make sure we get back what we're expecting
np, b = self.pp._match_invoices(p)
missing = []
for inv_num in ('123456', '789012', '345678'):
if inv_num not in b:
missing.append(inv_num)
if missing:
raise ValueError('invoices %r missing from batch' % missing)
--
https://mail.python.org/mailman/listinfo/python-list


Re: parametized unittest

2014-01-12 Thread CraftyTech
On Saturday, January 11, 2014 11:34:30 PM UTC-5, Roy Smith wrote:
> In article ,
>  "W. Trevor King"  wrote:
> 
> > On Sat, Jan 11, 2014 at 08:00:05PM -0800, CraftyTech wrote:
> > > I'm finding it hard to use unittest in a for loop.  Perhaps something 
> > > like:
> > > 
> > > for val in range(25):
> > >   self.assertEqual(val,5,"not equal)
> > > 
> > > The loop will break after the first failure.  Anyone have a good
> > > approach for this?  please advise.
> > 
> > If Python 3.4 is an option, you can stick to the standard library and
> > use subtests [1].
> 
> Or, as yet another alternative, if you use nose, you can write test 
> generators.
> 
> https://nose.readthedocs.org/en/latest/writing_tests.html#test-generators

Thank you all for the feedback.  I now have what I need.  Cheers 


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: parametized unittest

2014-01-11 Thread Roy Smith
In article ,
 "W. Trevor King"  wrote:

> On Sat, Jan 11, 2014 at 08:00:05PM -0800, CraftyTech wrote:
> > I'm finding it hard to use unittest in a for loop.  Perhaps something like:
> > 
> > for val in range(25):
> >   self.assertEqual(val,5,"not equal)
> > 
> > The loop will break after the first failure.  Anyone have a good
> > approach for this?  please advise.
> 
> If Python 3.4 is an option, you can stick to the standard library and
> use subtests [1].

Or, as yet another alternative, if you use nose, you can write test 
generators.

https://nose.readthedocs.org/en/latest/writing_tests.html#test-generators
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: parametized unittest

2014-01-11 Thread W. Trevor King
On Sat, Jan 11, 2014 at 08:00:05PM -0800, CraftyTech wrote:
> I'm finding it hard to use unittest in a for loop.  Perhaps something like:
> 
> for val in range(25):
>   self.assertEqual(val,5,"not equal)
> 
> The loop will break after the first failure.  Anyone have a good
> approach for this?  please advise.

If Python 3.4 is an option, you can stick to the standard library and
use subtests [1].

Cheers,
Trevor

[1]: 
http://docs.python.org/3.4/library/unittest.html#distinguishing-test-iterations-using-subtests
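
For example, the loop from the original post becomes (a minimal sketch;
each failing value is reported as its own subtest without stopping the
loop):

import unittest

class TestValues(unittest.TestCase):
    def test_values(self):
        for val in range(25):
            with self.subTest(val=val):  # Python 3.4+
                self.assertEqual(val, 5, "not equal")

if __name__ == '__main__':
    unittest.main()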

-- 
This email may be signed or encrypted with GnuPG (http://www.gnupg.org).
For more information, see http://en.wikipedia.org/wiki/Pretty_Good_Privacy


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: parametized unittest

2014-01-11 Thread Ben Finney
CraftyTech  writes:

> I'm trying parametize my unittest so that I can re-use over and over,
> perhaps in a for loop.

The ‘testscenarios’ library <https://pypi.python.org/pypi/testscenarios>
allows you to define a set of data scenarios on your
FooBarTestCase and have all the test case functions in that class run
for all the scenarios, as distinct test cases.

e.g. a class with 5 scenarios defined, and 3 test case functions, will
run as 15 distinct test cases with separate output in the report.

> The loop will break after the first failure. Anyone have a good
> approach for this? please advise.

Yes, this is exactly the problem addressed by ‘testscenarios’. Enjoy it!
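
A sketch of what that looks like for the original example (API as
described in the testscenarios documentation; the scenario names and
the 'var' attribute are illustrative):

import unittest
import testscenarios

class TestCalc(testscenarios.TestWithScenarios):
    # Each (name, attributes) pair runs every test method once,
    # with the attributes set on self.
    scenarios = [
        ('seven', {'var': 7}),
        ('eight', {'var': 8}),
    ]

    def testAdd(self):
        self.assertEqual(self.var, 7, "Didn't add up")

if __name__ == '__main__':
    unittest.main()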

-- 
 \“Program testing can be a very effective way to show the |
  `\presence of bugs, but is hopelessly inadequate for showing |
_o__)  their absence.” —Edsger W. Dijkstra |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


parametized unittest

2014-01-11 Thread CraftyTech
hello all,

 I'm trying to parametrize my unittest so that I can re-use it over and over, 
perhaps in a for loop.  Consider the following:

'''
import unittest

class TestCalc(unittest.TestCase):
def testAdd(self):
self.assertEqual(7, 7, "Didn't add up")

if __name__=="__main__":
unittest.main()

'''


Simple and straight forward, but I'm trying to get to a place where I can run 
it this way:

import unittest

class TestCalc(unittest.TestCase):
def testAdd(self,var):
self.assertEqual(var, 7, "Didn't add up")

if __name__=="__main__":
unittest.main(testAdd(7))

is this possible?  I'm finding it hard to use unittest in a for loop.  Perhaps 
something like:

for val in range(25):
  self.assertEqual(val, 5, "not equal")

The loop will break after the first failure.  Anyone have a good approach for 
this?  please advise.

cheers,
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Struggling with unittest discovery - how to structure my project test suite

2013-12-20 Thread Paul Moore
On Friday, 20 December 2013 17:41:40 UTC, Serhiy Storchaka  wrote:
> On 20.12.13 16:47, Paul Moore wrote:
>
> > 1. I can run all the tests easily on demand.
> > 2. I can run just the functional or unit tests when needed.
> 
> python -m unittest discover -s tests/functional
> python -m unittest discover tests/functional

Hmm, I could have sworn I'd tried that. But you're absolutely right.

Thanks, and sorry for the waste of bandwidth...
Paul
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Struggling with unittest discovery - how to structure my project test suite

2013-12-20 Thread Terry Reedy

On 12/20/2013 12:41 PM, Serhiy Storchaka wrote:

On 20.12.13 16:47, Paul Moore wrote:

What's the best way of structuring my projects so that:


It depends on your tradeoff between extra setup in the files and how 
much you type each time you run tests.



1. I can run all the tests easily on demand.


I believe that if you copy Lib/idlelib/idle_test/__init__.py to 
tests/__main__.py and add

  import unittest; unittest.main()
then
  python -m tests
would run all your tests. Lib/idlelib/idle_test/README.txt may help explain.


2. I can run just the functional or unit tests when needed.


python -m unittest discover -s tests/functional
python -m unittest discover tests/functional


Ditto for __main__.py files in each, so
  python -m tests.unit (functional)
will work.


3. I can run individual tests (or maybe just individual test modules,
I don't have so many tests yet that I know how detailed I'll need to
get!) without too much messing (and certainly without changing any
source files!)


python -m unittest discover -s tests/functional -p test_spam.py
python -m unittest discover tests/functional -p test_spam.py
python -m unittest discover tests/functional test_spam.py


'discover' is not needed for single files.  For instance,
  python -m unittest idlelib.idle_test.test_calltips
works for me. One can extend that to test cases and methods.
python -m unittest idlelib.idle_test.test_calltips.Get_entityTest
and
python -m unittest 
idlelib.idle_test.test_calltips.Get_entityTest.test_bad_entity


If you add to each test_xyz.py file
  if __name__ == '__main__':
unittest.main(verbosity=2)  # example of adding fixed option
then
  python -m tests.unit.test_xyz
will run the tests in that file. (So does F5 in an Idle editor, which is 
how I run individual test files while editing. I copy the  boilerplate 
from README.txt or an existing test_xyz.py file.)
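
A minimal sketch of such a tests/__main__.py (one common recipe, using
the load_tests protocol; adjust the start directory and pattern to
taste):

# tests/__main__.py -- makes "python -m tests" run the whole suite.
import os
import unittest

def load_tests(loader, standard_tests, pattern):
    this_dir = os.path.dirname(__file__)
    standard_tests.addTests(loader.discover(
        start_dir=this_dir, pattern=pattern or 'test_*.py'))
    return standard_tests

if __name__ == '__main__':
    unittest.main()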


--
Terry Jan Reedy


--
https://mail.python.org/mailman/listinfo/python-list


Re: Struggling with unittest discovery - how to structure my project test suite

2013-12-20 Thread Serhiy Storchaka

On 20.12.13 16:47, Paul Moore wrote:

What's the best way of structuring my projects so that:

1. I can run all the tests easily on demand.
2. I can run just the functional or unit tests when needed.


python -m unittest discover -s tests/functional
python -m unittest discover tests/functional


3. I can run individual tests (or maybe just individual test modules, I don't 
have so many tests yet that I know how detailed I'll need to get!) without too 
much messing (and certainly without changing any source files!)


python -m unittest discover -s tests/functional -p test_spam.py
python -m unittest discover tests/functional -p test_spam.py
python -m unittest discover tests/functional test_spam.py


--
https://mail.python.org/mailman/listinfo/python-list


Struggling with unittest discovery - how to structure my project test suite

2013-12-20 Thread Paul Moore
I'm trying to write a project using test-first development. I've been basically 
following the process from "Test-Driven Web Development with Python" (excellent 
book, by the way) but I'm writing a command line application rather than a web 
app, so I'm having to modify some bits as I go along. Notably I am *not* using 
the django test runner.

I have my functional tests in a file in my project root at the moment - 
functional_tests.py. But now I need to start writing some unit tests and I need 
to refactor my tests into a more manageable structure.

If I create a "tests" directory with an __init__.py and "unit" and "functional" 
subdirectories, each with __init__.py and test_XXX.py files in them, then 
"python -m unittest" works (as in, it discovers my tests fine). But if I just 
want to run my unit tests, or just my functional tests, I can't seem to get the 
command line right to do that - I either get all the tests run, or none.

What's the best way of structuring my projects so that:

1. I can run all the tests easily on demand.
2. I can run just the functional or unit tests when needed.
3. I can run individual tests (or maybe just individual test modules, I don't 
have so many tests yet that I know how detailed I'll need to get!) without too 
much messing (and certainly without changing any source files!)

I know that tools like py.test or nose can probably do this sort of thing. But 
I don't really want to add a new testing tool to the list of things I have to 
learn for this project (I'm already using it to learn SQLAlchemy and colander, 
as well as test-driven development, so I have enough on my plate already!)

I've looked around on the web for information - there's a lot available on 
writing the tests themselves, but surprisingly little on how to structure a 
project for easy testing (unless I've just failed miserably to find the right 
search terms :-))

Thanks for any help,
Paul
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to make a unittest decorator to rename a method from "x" to "testx?"

2013-08-11 Thread adam . preble
On Friday, August 9, 2013 1:31:43 AM UTC-5, Peter Otten wrote:
> I see I have to fix it myself then...

Sorry man, I think in my excitement of seeing the first of your examples to 
work, that I missed the second example, only seeing your comments about it at 
the end of the post.  I didn't expect such a good response.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How many times does unittest run each test?

2013-08-10 Thread Chris Angelico
On Sun, Aug 11, 2013 at 1:52 AM, Josh English
 wrote:
> I'm using logging for debugging, because it is pretty straightforward and can 
> be activated for a small section of the module. My modules run long (3,000 
> lines or so) and finding all those dastardly print statements is a pain, as 
> is littering my code with "if debug: print message" clauses. Logging just makes 
> it simple.


So logging might be the right tool for your job. Tip: Sometimes it
helps, when trying to pin down an issue, to use an additional
debugging aid. You're already using logging? Add print calls. Already
got a heartbeat function? Run it through a single-stepping debugger as
well. Usually that sort of thing just gives you multiple probes at the
actual problem, but occasionally you'll get an issue like this, and
suddenly it's really obvious because one probe behaves completely
differently from the other.

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How many times does unittest run each test?

2013-08-10 Thread Josh English
On Saturday, August 10, 2013 4:21:35 PM UTC-7, Chris Angelico wrote:
> On Sun, Aug 11, 2013 at 12:14 AM, Roy Smith <> wrote:
> 
> > Maybe you've got two different handlers which are both getting the same
> > loggingvents and somehow they both end up in your stderr stream.
> > Likely?  Maybe not, but if you don't have any logging code in the test
> > at all, it becomes impossible.  You can't have a bug in a line of code
> > that doesn't exist (yeah, I know, that's a bit of a handwave).
> 
> Likely? Very much so, to the extent that it is, if not a FAQ,
> certainly a Not All That Uncommonly Asked Question. So many times
> someone places logging code in something that gets called twice, and
> ends up with two handlers. Personally, I much prefer to debug with
> straight-up 'print' - much less hassle. I'd turn to the logging module
> only if I actually need its functionality (logging to some place other
> than the console, or leaving the log statements in and {en|dis}abling
> them at run-time).

Yes, I definitely need the NUATAQ sheet for Python.

I'm using logging for debugging, because it is pretty straightforward and can 
be activated for a small section of the module. My modules run long (3,000 
lines or so) and finding all those dastardly print statements is a pain, as is 
littering my code with "if debug: print message" clauses. Logging just makes it 
simple.

Josh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How many times does unittest run each test?

2013-08-10 Thread Josh English
On Saturday, August 10, 2013 4:14:09 PM UTC-7, Roy Smith wrote:

> 
> 
> I don't understand the whole SimpleChecker class.  You've created a 
> class, and defined your own __call__(), just so you can check if a 
> string is in a list?  Couldn't this be done much simpler with a plain 
> old function:
> 

> def checker(thing):
> print "calling %s" % thing
> return thing in ['a','b','c']

SimpleCheck is an extremely watered down version of my XML checker 
(https://pypi.python.org/pypi/XMLCheck/0.6.7). I'm working a feature that 
allows the checker to call a function to get acceptable values, instead of 
defining them at the start of the program. I included all of that because it's 
the shape of the script I'm working with.

The real problem was setting additional handlers where they shouldn't be.

Josh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How many times does unittest run each test?

2013-08-10 Thread Ned Batchelder

On 8/10/13 4:40 PM, Roy Smith wrote:

In article ,
  Josh English  wrote:


I am working on a library, and adding one feature broke a seemingly unrelated
feature. As I already had Test Cases written, I decided to try to incorporate
the logging module into my class, and turn on debugging at the logger before
the newly-broken test.

Here is an example script:

[followed by 60 lines of code]

The first thing to do is get this down to some minimal amount of code
that demonstrates the problem.

For example, you drag in the logging module, and do some semi-complex
configuration.  Are you SURE your tests are getting run multiple times,
or maybe it's just that they're getting LOGGED multiple times.  Tear out
all the logging stuff.  Just use a plain print statement.

Roy is right: the problem isn't the tests, it's the logging.  You are 
calling .addHandler in the SimpleChecker.__init__, then you are 
constructing two SimpleCheckers, each of which adds a handler.  In the 
LoaderTC test, you've only constructed one, adding only one handler, so 
the "calling q" line only appears once.  Then the NameSpaceTC tests 
runs, constructs another SimplerChecker, which adds another handler, so 
now there are two.  That's why the "calling a" and "calling f" lines 
appear twice.


Move your logging configuration to a place that executes only once.

Also, btw, you don't need the "del self.checker" in your tearDown 
methods: the test object is destroyed after each test, so any objects it 
holds will be released after each test with no special action needed on 
your part.
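
One way to make the configuration run only once, sketched with the
thread's SimpleChecker: guard the handler setup, since getLogger()
returns the same logger object on every call:

import logging

class SimpleChecker(object):
    def __init__(self):
        self.logger = logging.getLogger(self.__class__.__name__)
        # Only attach a handler the first time this named logger is
        # configured, no matter how many instances the tests construct.
        if not self.logger.handlers:
            h = logging.StreamHandler()
            h.setFormatter(logging.Formatter(
                "%(name)s - %(levelname)s - %(message)s"))
            self.logger.addHandler(h)

    def __call__(self, thing):
        self.logger.debug('calling %s' % thing)
        return thing in ['a', 'b', 'c']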


--Ned.
--
http://mail.python.org/mailman/listinfo/python-list


Re: How many times does unittest run each test?

2013-08-10 Thread Chris Angelico
On Sun, Aug 11, 2013 at 12:14 AM, Roy Smith  wrote:
> Maybe you've got two different handlers which are both getting the same
> logging events and somehow they both end up in your stderr stream.
> Likely?  Maybe not, but if you don't have any logging code in the test
> at all, it becomes impossible.  You can't have a bug in a line of code
> that doesn't exist (yeah, I know, that's a bit of a handwave).

Likely? Very much so, to the extent that it is, if not a FAQ,
certainly a Not All That Uncommonly Asked Question. So many times
someone places logging code in something that gets called twice, and
ends up with two handlers. Personally, I much prefer to debug with
straight-up 'print' - much less hassle. I'd turn to the logging module
only if I actually need its functionality (logging to some place other
than the console, or leaving the log statements in and {en|dis}abling
them at run-time).

ChrisA
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How many times does unittest run each test?

2013-08-10 Thread Roy Smith
On Saturday, August 10, 2013 1:40:43 PM UTC-7, Roy Smith wrote:
> > For example, you drag in the logging module, and do some semi-complex 
> > configuration.  Are you SURE your tests are getting run multiple times, 
> > or maybe it's just that they're getting LOGGED multiple times.  Tear out 
> > all the logging stuff.  Just use a plain print statement.

In article <35d12db6-c367-4a45-a68e-8ed4c0ae1...@googlegroups.com>,
 Josh English  wrote:

> Ok, then why would things get logged multiple times?

Maybe you've got two different handlers which are both getting the same 
logging events and somehow they both end up in your stderr stream.  
Likely?  Maybe not, but if you don't have any logging code in the test 
at all, it becomes impossible.  You can't have a bug in a line of code 
that doesn't exist (yeah, I know, that's a bit of a handwave).

When a test (or any other code) is doing something you don't understand, 
the best way to start understanding it is to create a minimal test case; 
the absolute smallest amount of code that demonstrates the problem.

I don't understand the whole SimpleChecker class.  You've created a 
class, and defined your own __call__(), just so you can check if a 
string is in a list?  Couldn't this be done much simpler with a plain 
old function:

def checker(thing):
print "calling %s" % thing
return thing in ['a','b','c']
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How many times does unittest run each test?

2013-08-10 Thread Josh English
Aha. Thanks, Ned. This is the answer I was looking for.

I use logging in the real classes, and thought that setting
the level to logging.DEBUG once was easier than hunting down four
score of print statements.

Josh

On Sat, Aug 10, 2013 at 3:52 PM, Ned Batchelder  wrote:
> On 8/10/13 4:40 PM, Roy Smith wrote:
>>
>> In article ,
>>   Josh English  wrote:
>>
>>> I am working on a library, and adding one feature broke a seemingly
>>> unrelated
>>> feature. As I already had Test Cases written, I decided to try to
>>> incorporate
>>> the logging module into my class, and turn on debugging at the logger
>>> before
>>> the newly-broken test.
>>>
>>> Here is an example script:
>>
>> [followed by 60 lines of code]
>>
>> The first thing to do is get this down to some minimal amount of code
>> that demonstrates the problem.
>>
>> For example, you drag in the logging module, and do some semi-complex
>> configuration.  Are you SURE your tests are getting run multiple times,
>> or maybe it's just that they're getting LOGGED multiple times.  Tear out
>> all the logging stuff.  Just use a plain print statement.
>
> Roy is right: the problem isn't the tests, it's the logging.  You are
> calling .addHandler in the SimpleChecker.__init__, then you are constructing
> two SimpleCheckers, each of which adds a handler.  In the LoaderTC test,
> you've only constructed one, adding only one handler, so the "calling q"
> line only appears once.  Then the NameSpaceTC tests runs, constructs another
> SimplerChecker, which adds another handler, so now there are two.  That's
> why the "calling a" and "calling f" lines appear twice.
>
> Move your logging configuration to a place that executes only once.
>
> Also, btw, you don't need the "del self.checker" in your tearDown methods:
> the test object is destroyed after each test, so any objects it holds will
> be released after each test with no special action needed on your part.
>
> --Ned.



-- 
Josh English
joshua.r.engl...@gmail.com
http://www.joshuarenglish.com
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How many times does unittest run each test?

2013-08-10 Thread Josh English
On Saturday, August 10, 2013 1:40:43 PM UTC-7, Roy Smith wrote:
> In article ,
> 
>  Josh English  wrote:
> The first thing to do is get this down to some minimal amount of code 
> that demonstrates the problem.
> 
> 
> 
> For example, you drag in the logging module, and do some semi-complex 
> configuration.  Are you SURE your tests are getting run multiple times, 
> or maybe it's just that they're getting LOGGED multiple times.  Tear out 
> all the logging stuff.  Just use a plain print statement.
> 
> You've got two different TestCases here.  Does the problem happen with 
> just LoaderTC, or with just NameSpaceTC?
> 


Ok, then why would things get logged multiple times? The two test cases 
actually test a loader function that I could strip out, because it wasn't 
relevant. In these cases, the loader was being called with two different 
configurations in the individual setUp methods.

I left them there to show that in LoaderTC, there is one debug log coming from 
SimpleChecker, but in the NameSpaceTC, each debug message is printed twice. If 
I put print statements in each test_ method, they are called once. 

As far as stripping down the code, I suppose 15 lines can be culled:

#-
import logging

class SimpleChecker(object):
    def __init__(self):
        self.logger = logging.getLogger(self.__class__.__name__)
        h = logging.StreamHandler()
        f = logging.Formatter("%(name)s - %(levelname)s - %(message)s")
        h.setFormatter(f)
        self.logger.addHandler(h)

    def __call__(self, thing):
        self.logger.debug('calling %s' % thing)
        return thing in ['a', 'b', 'c']

import unittest

class LoaderTC(unittest.TestCase):
    def setUp(self):
        self.checker = SimpleChecker()

    def tearDown(self):
        del self.checker

    def test_callable(self):
        self.checker.logger.setLevel(logging.DEBUG)
        self.assertTrue(self.checker('q') is False,
                        "checker accepted bad input")

class NameSpaceTC(unittest.TestCase):
    def setUp(self):
        self.checker = SimpleChecker()

    def tearDown(self):
        del self.checker

    def test_callable(self):
        print "testing callable"
        self.checker.logger.setLevel(logging.DEBUG)
        self.assertTrue(self.checker('a'),
                        "checker did not accept good value")
        self.assertFalse(self.checker('f'),
                         "checker accepted bad value")

if __name__=='__main__':
    unittest.main(verbosity=0)

#---
The output:

SimpleChecker - DEBUG - calling q
setting up NameSpace
testing callable
SimpleChecker - DEBUG - calling a
SimpleChecker - DEBUG - calling a
SimpleChecker - DEBUG - calling f
SimpleChecker - DEBUG - calling f
--
Ran 2 tests in 0.014s

OK
Exit code:  False

Josh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How many times does unittest run each test?

2013-08-10 Thread Roy Smith
In article ,
 Josh English  wrote:

> I am working on a library, and adding one feature broke a seemingly unrelated 
> feature. As I already had Test Cases written, I decided to try to incorporate 
> the logging module into my class, and turn on debugging at the logger before 
> the newly-broken test.
> 
> Here is an example script:
[followed by 60 lines of code]

The first thing to do is get this down to some minimal amount of code 
that demonstrates the problem.

For example, you drag in the logging module, and do some semi-complex 
configuration.  Are you SURE your tests are getting run multiple times, 
or maybe it's just that they're getting LOGGED multiple times.  Tear out 
all the logging stuff.  Just use a plain print statement.

You've got two different TestCases here.  Does the problem happen with 
just LoaderTC, or with just NameSpaceTC?

Keep tearing out code until you can no longer demonstrate the problem.  
Keep at it until there is not a single line of code remaining which 
isn't required to demonstrate.  Then come back and ask your question 
again.
-- 
http://mail.python.org/mailman/listinfo/python-list


How many times does unittest run each test?

2013-08-10 Thread Josh English
I am working on a library, and adding one feature broke a seemingly unrelated 
feature. As I already had Test Cases written, I decided to try to incorporate 
the logging module into my class, and turn on debugging at the logger before 
the newly-broken test.

Here is an example script:
# -
#!/usr/bin/env python

import logging

def get_vals():
    return ['a','b','c']

class SimpleChecker(object):
    def __init__(self, callback=None):
        self.callback = callback
        self.logger = logging.getLogger(self.__class__.__name__)
        h = logging.StreamHandler()
        f = logging.Formatter("%(name)s - %(levelname)s - %(message)s")
        h.setFormatter(f)
        self.logger.addHandler(h)

    def __call__(self, thing):
        self.logger.debug('calling %s' % thing)
        vals = self.callback()
        return thing in vals

import unittest

class LoaderTC(unittest.TestCase):
    def setUp(self):
        self.checker = SimpleChecker(get_vals)

    def tearDown(self):
        del self.checker

    def test_callable(self):
        self.assertTrue(callable(self.checker),
                        'loader did not create callable object')
        self.assertTrue(callable(self.checker.callback),
                        'loader did not create callable callback')
        self.checker.logger.setLevel(logging.DEBUG)
        self.assertTrue(self.checker('q') is False,
                        "checker accepted bad input")

class NameSpaceTC(unittest.TestCase):
    def setUp(self):
        self.checker = SimpleChecker(get_vals)

    def tearDown(self):
        del self.checker

    def test_callable(self):
        self.assertTrue(callable(self.checker),
                        'loader did not create callable object')
        self.assertTrue(callable(self.checker.callback),
                        'loader did not create callable callback')
        self.checker.logger.setLevel(logging.DEBUG)
        self.assertTrue(self.checker('a'),
                        "checker did not accept good value")
        self.assertFalse(self.checker('f'),
                         "checker accepted bad value")

if __name__=='__main__':
    unittest.main(verbosity=0)

# ---

When I run this, I get:
SimpleChecker - DEBUG - calling q
SimpleChecker - DEBUG - calling a
SimpleChecker - DEBUG - calling a
SimpleChecker - DEBUG - calling f
SimpleChecker - DEBUG - calling f
--
Ran 2 tests in 0.013s

OK
Exit code:  False

Why am I seeing those extra debugging lines? In the script I'm really trying to 
debug, I see 12-13 debug messages repeated, making actual debugging difficult.

Josh English
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to make a unittest decorator to rename a method from "x" to "testx?"

2013-08-08 Thread Peter Otten
adam.pre...@gmail.com wrote:

> On Thursday, August 8, 2013 3:50:47 AM UTC-5, Peter Otten wrote:
>> Peter Otten wrote:
>> Oops, that's an odd class name. Fixing the name clash in Types.__new__()
>> is
>> 
>> left as an exercise...
> 
> Interesting, I got __main__.T, even though I pretty much just tried your
> code wholesale.  

I see I have to fix it myself then...

> For what it's worth, I'm using Python 2.7.  I'm glad to
> see that code since I learned a lot of tricks from it.

[My buggy code]

> class Type(type):
>     def __new__(class_, name, bases, classdict):

Here 'name' is the class name

>         newclassdict = {}
>         for name, attr in classdict.items():
>             if getattr(attr, "test", False):
>                 assert not name.startswith(PREFIX)
>                 name = PREFIX + name
>             assert name not in newclassdict
>             newclassdict[name] = attr

Here 'name' is the last key of classdict which is passed to type.__new__ 
instead of the actual class name.

>         return type.__new__(class_, name, bases, newclassdict)

[Fixed version]

class Type(type):
    def __new__(class_, classname, bases, classdict):
        newclassdict = {}
        for name, attr in classdict.items():
            if getattr(attr, "test", False):
                assert not name.startswith(PREFIX)
                name = PREFIX + name
            assert name not in newclassdict
            newclassdict[name] = attr
        return type.__new__(class_, classname, bases, newclassdict)
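
For context, a sketch of how the fixed metaclass would be used -- the
PREFIX constant and the @test marker decorator are reconstructed from
the thread, not quoted from it (Python 2 syntax, since the OP is on 2.7):

import unittest

PREFIX = "test_"

def test(f):
    # Mark the method so the metaclass renames it to PREFIX + name.
    f.test = True
    return f

class MyTests(unittest.TestCase):
    __metaclass__ = Type

    @test
    def addition(self):   # collected by unittest as "test_addition"
        self.assertEqual(1 + 1, 2)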


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to make a unittest decorator to rename a method from "x" to "testx?"

2013-08-08 Thread adam . preble
On Thursday, August 8, 2013 3:50:47 AM UTC-5, Peter Otten wrote:
> Peter Otten wrote:
> Oops, that's an odd class name. Fixing the name clash in Types.__new__() is 
> 
> left as an exercise...

Interesting, I got __main__.T, even though I pretty much just tried your code 
wholesale.  For what it's worth, I'm using Python 2.7.  I'm glad to see that 
code since I learned a lot of tricks from it.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to make a unittest decorator to rename a method from "x" to "testx?"

2013-08-08 Thread Terry Reedy

On 8/8/2013 12:20 PM, adam.pre...@gmail.com wrote:

On Thursday, August 8, 2013 3:04:30 AM UTC-5, Terry Reedy wrote:



def test(f):
    f.__class__.__dict__['test_'+f.__name__] = f


Sorry, f.__class__ is 'function', not the enclosing class. A decorator 
for a method could not get the enclosing class name until 3.3, when it 
would be part of f.__qualname__.


Use one of the other suggestions.


Just for giggles I can mess around with those exact lines, but I did get 
spanked trying to do something similar.  I couldn't reference __class__ for 
some reason (Python 2.7 problem?).


In 2.x, old-style classes and instances thereof do not have .__class__. 
All other objects do, as far as I know.


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to make a unittest decorator to rename a method from "x" to "testx?"

2013-08-08 Thread Ned Batchelder

On 8/8/13 12:17 PM, adam.pre...@gmail.com wrote:

On Thursday, August 8, 2013 3:50:47 AM UTC-5, Peter Otten wrote:

Peter Otten wrote:
Oops, that's an odd class name. Fixing the name clash in Types.__new__() is

left as an exercise...

I will do some experiments with a custom test loader since I wasn't aware of 
that as a viable alternative.  I am grateful for the responses.
If you can use another test runner, they often have more flexible and 
powerful ways to do everything.  nosetests will let you use a __test__ 
attribute, for example, to mark tests.  Your decorator could simply 
assign that attribute on the test methods.
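
A sketch of such a decorator (for what it's worth, nose.tools also ships
ready-made istest/nottest decorators that set the same attribute):

def test(method):
    method.__test__ = True   # the marker attribute nose looks for
    return method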


You'd still write your tests using the unittest base classes, but run 
them with nose.


--Ned.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to make a unittest decorator to rename a method from "x" to "testx?"

2013-08-08 Thread adam . preble
On Thursday, August 8, 2013 3:04:30 AM UTC-5, Terry Reedy wrote:
> I cannot help but note that this is *more* typing. But anyhow, something 

It wasn't so much the typing as having "test" in front of
everything.  It's a problem particular to me since I'm writing code that, well, 
runs experiments.  So the word "test" is already all over the place.  I would 
even prefer if I could do away with assuming everything starting with "test" is 
a unittest, but I didn't think I could; it looks like Peter Otten got me in the 
right direction.

> like this might work.
> 

> def test(f):
>     f.__class__.__dict__['test_'+f.__name__] = f
>
> might work. Or maybe for the body just
>
>     setattr(f.__class__, 'test_'+f.__name__, f)
> 

Just for giggles I can mess around with those exact lines, but I did get 
spanked trying to do something similar.  I couldn't reference __class__ for 
some reason (Python 2.7 problem?).
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to make a unittest decorator to rename a method from "x" to "testx?"

2013-08-08 Thread adam . preble
On Thursday, August 8, 2013 3:50:47 AM UTC-5, Peter Otten wrote:
> Peter Otten wrote:
> Oops, that's an odd class name. Fixing the name clash in Types.__new__() is 
> 
> left as an exercise...

I will do some experiments with a custom test loader since I wasn't aware of 
that as a viable alternative.  I am grateful for the responses.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to make a unittest decorator to rename a method from "x" to "testx?"

2013-08-08 Thread Peter Otten
Peter Otten wrote:

> $ python3 mytestcase_demo.py -v
> test_one (__main__.test_two) ... ok
> test_two (__main__.test_two) ... ok
> 
> --
> Ran 2 tests in 0.000s

Oops, that's an odd class name. Fixing the name clash in Types.__new__() is 
left as an exercise...

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to make a unittest decorator to rename a method from "x" to "testx?"

2013-08-08 Thread Peter Otten
adam.pre...@gmail.com wrote:

> We were coming into Python's unittest module from backgrounds in nunit,
> where they use a decorator to identify tests.  So I was hoping to avoid the
> convention of prepending "test" to the TestClass methods that are to be
> actually run.  I'm sure this comes up all the time, but I mean not to have
> to do:
> 
> class Test(unittest.TestCase):
>     def testBlablabla(self):
>         self.assertEqual(True, True)
> 
> But instead:
> class Test(unittest.TestCase):
>     @test
>     def Blablabla(self):
>         self.assertEqual(True, True)
> 
> This is admittedly a petty thing.  I have just about given up trying to
> actually deploy a decorator, but I haven't necessarily given up on trying
> to do it for the sake of knowing if it's possible.
> 
> Superficially, you'd think changing a function's __name__ should do the
> trick, but it looks like test discovery happens without looking at the
> transformed function.  I tried a decorator like this:
> 
> def prepend_test(func):
>     print "running prepend_test"
>     func.__name__ = "test" + func.__name__
>
>     def decorator(*args, **kwargs):
>         return func(*args, **kwargs)
>
>     return decorator
> 
> When running unit tests, I'll see "running prepend_test" show up, but a
> dir on the class being tested doesn't show a renamed function.  I assume
> it only works with instances.  Are there any other tricks I could
> consider?

I think you are misunderstanding what a decorator does. You can think of

def f(...): ...

as syntactic sugar for an assignment

f = make_function(...)

A decorator intercepts that

f = decorator(make_function(...))

and therefore can modify or replace the function object, but has no 
influence on the name binding.
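
A small demonstration of that distinction (names here are illustrative):

def rename(func):
    func.__name__ = "test_" + func.__name__   # modifies the function object
    return func

class T:
    @rename
    def ping(self):
        pass

print(T.ping.__name__)                                  # test_ping
print('ping' in T.__dict__, 'test_ping' in T.__dict__)  # True False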

For unittest to allow methods bound to a name not starting with "test" you 
have to write a custom test loader.

import functools
import unittest.loader
import unittest

def test(method):
    method.unittest_method = True
    return method

class MyLoader(unittest.TestLoader):

    def getTestCaseNames(self, testCaseClass):
        def isTestMethod(attrname, testCaseClass=testCaseClass,
                         prefix=self.testMethodPrefix):
            attr = getattr(testCaseClass, attrname)
            if getattr(attr, "unittest_method", False):
                return True
            return attrname.startswith(prefix) and callable(attr)

        testFnNames = list(filter(isTestMethod, dir(testCaseClass)))
        if self.sortTestMethodsUsing:
            testFnNames.sort(
                key=functools.cmp_to_key(self.sortTestMethodsUsing))
        return testFnNames

class A(unittest.TestCase):
    def test_one(self):
        pass
    @test
    def two(self):
        pass

if __name__ == "__main__":
    unittest.main(testLoader=MyLoader())

Alternatively you can write a metaclass that *can* intercept the name 
binding process:

$ cat mytestcase.py 
import unittest

__UNITTEST = True

PREFIX = "test_"

class Type(type):
    def __new__(class_, name, bases, classdict):
        newclassdict = {}
        for name, attr in classdict.items():
            if getattr(attr, "test", False):
                assert not name.startswith(PREFIX)
                name = PREFIX + name
            assert name not in newclassdict
            newclassdict[name] = attr
        return type.__new__(class_, name, bases, newclassdict)

class MyTestCase(unittest.TestCase, metaclass=Type):
    pass

def test(method):
    method.test = True
    return method
$ cat mytestcase_demo.py
import unittest
from mytestcase import MyTestCase, test

class T(MyTestCase):
    def test_one(self):
        pass
    @test
    def two(self):
        pass

if __name__ == "__main__":
    unittest.main()
$ python3 mytestcase_demo.py -v
test_one (__main__.test_two) ... ok
test_two (__main__.test_two) ... ok

--
Ran 2 tests in 0.000s

OK


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Is it possible to make a unittest decorator to rename a method from "x" to "testx?"

2013-08-08 Thread Terry Reedy

On 8/8/2013 2:32 AM, adam.pre...@gmail.com wrote:

We were coming into Python's unittest module from backgrounds in nunit, where they use a 
decorator to identify tests.  So I was hoping to avoid the convention of prepending 
"test" to the TestClass methods that are to be actually run.  I'm sure this 
comes up all the time, but I mean not to have to do:

class Test(unittest.TestCase):
    def testBlablabla(self):
        self.assertEqual(True, True)

But instead:
class Test(unittest.TestCase):
    @test
    def Blablabla(self):
        self.assertEqual(True, True)


I cannot help but note that this is *more* typing. But anyhow, something 
like this might work.


def test(f):
    f.__class__.__dict__['test_'+f.__name__] = f

might work. Or maybe for the body just

    setattr(f.__class__, 'test_'+f.__name__, f)



Superficially, you'd think changing a function's __name__ should do the trick, 
but it looks like test discovery happens without looking at the transformed 
function.


I am guessing that unittest discovery for each class is something like

if issubclass(cls, unittest.TestCase):
    for name, f in cls.__dict__.items():
        if name.startswith('test'):
            yield f

You were thinking it would be

...
    for f in cls.__dict__.values():
        if f.__name__.startswith('test'):
            yield f

Not ridiculous, but you seem to have disproven it. I believe you can 
take 'name' in the docs to be bound or namespace name rather than 
definition or attribute name.
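
One way to see which names the loader actually uses is the public
TestLoader.getTestCaseNames() method, which filters dir(testCaseClass) by
attribute name:

import unittest

class T(unittest.TestCase):
    def test_x(self): pass
    def y(self): pass

print(unittest.TestLoader().getTestCaseNames(T))   # ['test_x']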


--
Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Is it possible to make a unittest decorator to rename a method from "x" to "testx?"

2013-08-07 Thread adam . preble
We were coming into Python's unittest module from backgrounds in nunit, where 
they use a decorator to identify tests.  So I was hoping to avoid the convention 
of prepending "test" to the TestClass methods that are to be actually run.  I'm 
sure this comes up all the time, but I mean not to have to do:

class Test(unittest.TestCase):
    def testBlablabla(self):
        self.assertEqual(True, True)

But instead:

class Test(unittest.TestCase):
    @test
    def Blablabla(self):
        self.assertEqual(True, True)

This is admittedly a petty thing.  I have just about given up trying to 
actually deploy a decorator, but I haven't necessarily given up on trying to do 
it for the sake of knowing if it's possible.

Superficially, you'd think changing a function's __name__ should do the trick, 
but it looks like test discovery happens without looking at the transformed 
function.  I tried a decorator like this:

def prepend_test(func):
    print "running prepend_test"
    func.__name__ = "test" + func.__name__

    def decorator(*args, **kwargs):
        return func(*args, **kwargs)

    return decorator

When running unit tests, I'll see "running prepend_test" show up, but a dir on 
the class being tested doesn't show a renamed function.  I assume it only works 
with instances.  Are there any other tricks I could consider?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest fails to import module

2013-06-29 Thread Martin Schöön
On 2013-06-29, Steven D'Aprano  wrote:
> On Sat, 29 Jun 2013 19:13:47 +, Martin Schöön wrote:
>
>> $PYTHONPATH points at both the code and the test directories.
>> 
>> When I run blablabla_test.py it fails to import blablabla.py
>
> What error message do you get?
>
>  
>> I have messed around for oven an hour and get nowhere. I have done
>> unittesting like this with success in the past and I have revisited one
>> of those projects and it still works there.
> [...]
>> Any leads?
>
> The first step is to confirm that your path is setup correctly. At the 
> very top of blablabla_test, put this code:
>
> import os, sys
> print(os.getenv('PYTHONPATH'))
> print(sys.path)
>
Yes, right, I had not managed to make my change to PYTHONPATH stick.
I said the explanation would be trivial, didn't I?

Thanks for the quick replies. I am back in business now.

No, neither English nor Python are native languages of mine but I
enjoy (ab)using both :-)

/Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest fails to import module

2013-06-29 Thread Steven D'Aprano
On Sat, 29 Jun 2013 19:13:47 +, Martin Schöön wrote:

> $PYTHONPATH points at both the code and the test directories.
> 
> When I run blablabla_test.py it fails to import blablabla.py

What error message do you get?

 
> I have messed around for oven an hour and get nowhere. I have done
> unittesting like this with success in the past and I have revisited one
> of those projects and it still works there.
[...]
> Any leads?

The first step is to confirm that your path is setup correctly. At the 
very top of blablabla_test, put this code:

import os, sys
print(os.getenv('PYTHONPATH'))
print(sys.path)


What do they say? What should they say?


The second step is to confirm that you can import the blablabla.py 
module. From the command line, cd into the code directory and start up a 
Python interactive session, then run "import blablabla" and see what it 
does.


-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unittest fails to import module

2013-06-29 Thread Roy Smith
In article ,
 Martin Schöön  wrote:

> I know the answer to this must be trivial but I am stuck...
> 
> I am starting on a not too complex Python project. Right now the
> project file structure contains three subdirectories and two
> files with Python code:
> 
> code
>blablabla.py
> test
>blablabla_test.py
> doc
>(empty for now)
> 
> blablabla_test.py contains "import unittest" and "import blablabla"
> 
> $PYTHONPATH points at both the code and the test directories.

A couple of generic debugging suggestions.  First, are you SURE the path 
is set to what you think?  In your unit test, do:

import sys
print sys.path

and make sure it's what you expect it to be.

> When I run blablabla_test.py it fails to import blablabla.py

Get unittest out of the picture.  Run an interactive python and type 
"import blablabla" at it.  What happens?

One trick I like is to strace (aka truss, dtrace, etc on various 
operating systems) the python process and watch all the open() system 
calls.  See what paths it attempts to open when searching for blablabla.  
Sometimes that gives you insight into what's going wrong.
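
You can get a similar peek from inside Python itself; a quick sketch using
the old imp module (Python 2 style, to match the above):

import imp
print imp.find_module("blablabla")   # raises ImportError if not on the path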

> I have messed around for oven an hour and get nowhere.

What temperature was the oven set at?
-- 
http://mail.python.org/mailman/listinfo/python-list


Unittest fails to import module

2013-06-29 Thread Martin Schöön
I know the answer to this must be trivial but I am stuck...

I am starting on a not too complex Python project. Right now the
project file structure contains three subdirectories and two
files with Python code:

code
   blablabla.py
test
   blablabla_test.py
doc
   (empty for now)

blablabla_test.py contains "import unittest" and "import blablabla"

$PYTHONPATH points at both the code and the test directories.

When I run blablabla_test.py it fails to import blablabla.py

I have messed around for oven an hour and get nowhere. I have
done unittesting like this with success in the past and I have
revisited one of those projects and it still works there.

The older project has a slightly flatter structure as it lacks
a separate code subdirectory:

something.py
test
   something_test.py

I have temporarily tried this on the new project but to no avail.

Any leads?

TIA

/Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: subclassing from unittest

2013-05-23 Thread Terry Jan Reedy

On 5/23/2013 2:58 AM, Ulrich Eckhardt wrote:


Well, per PEP 8, classes use CamelCaps, so your naming might break
automatic test discovery. Then, there might be another thing that could
cause this, and that is that if you have an intermediate class derived
from unittest.TestCase, that class on its own will be considered as test
case! If this is not what you want but you still want common
functionality in a baseclass, create a mixin and then derive from both
the mixin and unittest.TestCase for the actual test cases.


This is now standard practice, gradually being implemented everywhere in 
the CPython test suite, for testing C and Py versions of a module.


class TestXyz():
    mod = None
    # shared test methods using self.mod go here

class TestXyz_C(TestXyz, TestCase):  # Test C version
    mod = support.import_fresh_module('_xyz')  # approximately right

class TestXyz_Py(TestXyz, TestCase):  # Test Python version
    mod = support.import_fresh_module('xyz')

This minimizes duplication and ensures that both implementations get 
exactly the same tests.
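
A self-contained sketch of the same pattern, with two stand-in functions in
place of a C/Python module pair (all names here are illustrative). Because
the mixin does not inherit from TestCase, discovery never runs it on its own:

import unittest

def add_c(a, b):       # stand-in for the C implementation
    return a + b

def add_py(a, b):      # stand-in for the Python implementation
    return sum((a, b))

class AddTests(object):     # shared tests; not a TestCase
    add = None
    def test_basic(self):
        self.assertEqual(self.add(2, 3), 5)

class AddTests_C(AddTests, unittest.TestCase):
    add = staticmethod(add_c)

class AddTests_Py(AddTests, unittest.TestCase):
    add = staticmethod(add_py)

if __name__ == '__main__':
    unittest.main()    # runs test_basic twice, once per implementation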


tjr


--
http://mail.python.org/mailman/listinfo/python-list


Re: subclassing from unittest

2013-05-23 Thread Roy Smith
In article ,
 Ulrich Eckhardt  wrote:

> if you have an intermediate class derived 
> from unittest.TestCase, that class on its own will be considered as test 
> case! If this is not what you want but you still want common 
> functionality in a baseclass, create a mixin and then derive from both 
> the mixin and unittest.TestCase for the actual test cases.

Or, try another trick I picked up somewhere.  When you're done defining 
your test classes, delete the intermediate base class, so it won't be 
autodiscovered!

class MyBaseTestClass(unittest.TestCase):
    pass

class MyRealTest1(MyBaseTestClass):
    pass

class MyRealTest2(MyBaseTestClass):
    pass

del MyBaseTestClass
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: subclassing from unittest

2013-05-23 Thread Ulrich Eckhardt

Am 22.05.2013 17:32, schrieb Charles Smith:

I'd like to subclass from unittest.TestCase.  I observed something
interesting and wonder if anyone can explain what's going on... some
subclasses create  null tests.


I can perhaps guess what's going on, though Terry is right: Your 
question isn't very helpful and informative.




I can create this subclass and the test works:

   class StdTestCase (unittest.TestCase):
       blahblah

and I can create this subsubclass and the test works:

   class aaaTestCase (StdTestCase):
       moreblahblah

but if I create this subsubclass (or any where the first letter is
capital):

   class AaaTestCase (StdTestCase):
       differentblahblah

the test completes immediately without any work being done.


Well, per PEP 8, classes use CamelCaps, so your naming might break 
automatic test discovery. Then, there might be another thing that could 
cause this, and that is that if you have an intermediate class derived 
from unittest.TestCase, that class on its own will be considered as test 
case! If this is not what you want but you still want common 
functionality in a baseclass, create a mixin and then derive from both 
the mixin and unittest.TestCase for the actual test cases.


Good luck!

Uli

--
http://mail.python.org/mailman/listinfo/python-list


Re: subclassing from unittest

2013-05-22 Thread Terry Jan Reedy

On 5/22/2013 11:32 AM, Charles Smith wrote:

Have you red this? I will suggest some specifics.
http://www.catb.org/esr/faqs/smart-questions.html


I'd like to subclass from unittest.TestCase.


What version of Python.

> I observed something interesting and wonder if anyone can explain 
what's going on... some

subclasses create  null tests.

I can create this subclass and the test works:


What does 'works' mean?


   class StdTestCase (unittest.TestCase):
       blahblah


I bet that this (and the rest of your 'code' is not what you actually 
ran. Unless blahblah is bound (to what?), this fails with NameError.

Give us what you ran so we can run it too, and modify it.


and I can create this subsubclass and the test works:

   class aaaTestCase (StdTestCase):
       moreblahblah

but if I create this subsubclass (or any where the first letter is
capital):

   class AaaTestCase (StdTestCase):
       differentblahblah

the test completes immediately without any work being done.


What does this mean? I see no difference with the following

import unittest
class StdTestCase (unittest.TestCase): pass
class lowerSub(StdTestCase): pass
class UpperSub(StdTestCase): pass

unittest.main(verbosity=2, exit=False)

# prints (3.3)
--
Ran 0 tests in 0.000s

OK

Same as before the subclasses were added.

--
Terry Jan Reedy


--
http://mail.python.org/mailman/listinfo/python-list


Re: subclassing from unittest

2013-05-22 Thread Charles Smith
On 22 Mai, 17:32, Charles Smith  wrote:
> Hi,
>
> I'd like to subclass from unittest.TestCase.  I observed something
> interesting and wonder if anyone can explain what's going on... some
> subclasses create  null tests.
>
> I can create this subclass and the test works:
>
>   class StdTestCase (unittest.TestCase):
>       blahblah
>
> and I can create this subsubclass and the test works:
>
>   class aaaTestCase (StdTestCase):
>       moreblahblah
>
> but if I create this subsubclass (or any where the first letter is
> capital):
>
>   class AaaTestCase (StdTestCase):
>       differentblahblah
>
> the test completes immediately without any work being done.
>
> I suspect that the answer is in the prefix printed out by the test.  I
> have diffed both the long output (tests works, on the left) and the
> short output (null test, on the right):
>
> [The side-by-side diff came through garbled; the recoverable gist: the
> working run (left) lists test (TC_02.TestCase_F__ULLA05__AM_Tx) inside
> nested, populated test suites and prints "-->  test_api_socket: the
> address specified is: 127.0.0.1", while the null run (right) shows an
> empty test suite (tests=[]) followed by "Ran 0 tests in 0.000s" and "OK".]
>
> I see an empty test somehow gets sorted to the beginning of the list.
> How could that be a result of whether the first letter of the class is
> capitalized or not?
>
> Thanks in advance...
> cts
>
> http://www.creative-telcom-solutions.de


Unfortunately, the side-by-side diff didn't come out so well  ...

---
http://www.creative-telcom-solutions.de
-- 
http://mail.python.org/mailman/listinfo/python-list

