[issue24932] Migrate _testembed to a C unit testing library

2016-12-30 Thread Steve Dower

Steve Dower added the comment:

The only real advantage of adding a native unit testing framework here is to 
avoid having to start/destroy _testembed[.exe] multiple times during the test 
run. But given how environment-dependent these tests are, I don't think 
we can reasonably avoid that. I'm also highly doubtful that any framework is 
going to actually reduce the work we'd need to do to mock out initialization 
steps.

I've attached a patch that takes the first step towards making _testembed more 
usable, by changing main() to look up a test from a list and execute it, then 
return the exit code. The support in test_capi.EmbeddingTests is neat, so 
adding more tests here will be fairly straightforward.
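To make the shape concrete, the Python side of one of those tests comes down 
to something like this (a rough sketch only, not the attached patch; the test 
name and the helper are placeholders):

import subprocess
import unittest

class EmbeddingTests(unittest.TestCase):
    def run_embedded_interpreter(self, testname):
        # _testembed takes the name of a C test to run; it looks the name
        # up in its test table, runs it, and returns that test's exit code.
        proc = subprocess.Popen(["./_testembed", testname],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        self.assertEqual(proc.returncode, 0, err)
        return out

    def test_forced_io_encoding(self):
        # "forced_io_encoding" is a placeholder test name.
        self.run_embedded_interpreter("forced_io_encoding")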

--
keywords: +patch
nosy: +steve.dower
versions: +Python 3.7
Added file: http://bugs.python.org/file46093/24932_1.patch




Help: Python unit testing

2016-07-05 Thread Harsh Gupta
Hello All,

I have been trying to write a test framework for a pretty simple command server 
application in Python. I have not been able to figure out how to test the 
socket server.
I would really appreciate it if you could help me out in testing this application 
using unittest. I'm new to this.
Please find the commandserver.py file below.

'''
import socket
import time

supported_commands = {'loopback', 'print', 'close'}

class CommandServer:

    def __init__(self, ip='localhost', port=5000):
        self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.server.bind((ip, port))

        print "Server running"
        self.running = True
        self.is_connected = False

    def start_listening(self, num_connections=2):
        self.server.listen(num_connections)
        self._accept()

    def is_running(self):
        return self.running

    def _accept(self):
        self.client, self.remote_addr = self.server.accept()

        print "Received connection from %s" % (self.remote_addr,)
        self.is_connected = True

    def _get_data(self, bufsize=32):
        try:
            return self.client.recv(bufsize)
        except KeyboardInterrupt:
            self.close()

    def wait_for_commands(self):
        while self.is_connected:
            cmd = self._get_data()
            if cmd:
                if cmd not in supported_commands:
                    print "Received unsupported command"
                    continue

                if cmd in ("loopback", "print"):
                    payload = self._get_data(512)

                    if cmd == "loopback":
                        self.client.sendall(payload)

                    elif cmd == "print":
                        print payload

                elif cmd == "close":
                    self.close()

    def close(self):
        self.client.close()
        self.server.close()
        self.is_connected = False
        print "Connection closed"

if __name__ == '__main__':

    cmd_server = CommandServer()
    cmd_server.start_listening()
    cmd_server.wait_for_commands()
'''

The tests cases I came up with are:
1. Start the server
2. Establish connection with client
3. Test the server by sending 3 commands
   - loopback command should send the payload back.
   - print command should print the payload.
   - close command should close the connection between server and client.

4. Raise an error if any command other than the 3 listed is sent.

Other nominal cases
1. Check if both the server and client are closed. It might happen that only 
one is closed.
2. Go to sleep/Cancel the connection if no command is received after a certain 
period of time.
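As a very rough sketch of the direction I'm thinking in, the loopback test
could run the server in a background thread and talk to it over a plain
socket (the port number and the sleep-based synchronisation are just my
guesses):

import socket
import threading
import time
import unittest

from commandserver import CommandServer

class CommandServerTest(unittest.TestCase):
    def setUp(self):
        # Run the server in a background thread so the test can be the client.
        self.server = CommandServer(port=5001)
        t = threading.Thread(target=self.run_server)
        t.daemon = True
        t.start()
        time.sleep(0.2)  # crude: give the server time to start listening
        self.client = socket.create_connection(('localhost', 5001))

    def run_server(self):
        self.server.start_listening(num_connections=1)
        self.server.wait_for_commands()

    def test_loopback(self):
        self.client.sendall('loopback')
        time.sleep(0.2)  # crude: the protocol has no framing, so keep the
                         # command and the payload in separate TCP segments
        self.client.sendall('hello')
        self.assertEqual(self.client.recv(32), 'hello')

    def tearDown(self):
        self.client.close()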

Thank You

Harsh


[issue24932] Migrate _testembed to a C unit testing library

2015-08-25 Thread Brett Cannon

Brett Cannon added the comment:

Someone is going to think of googletest 
(https://github.com/google/googletest) and then realize that it is a C++ test 
suite and thus won't work unless you explicitly compile Python as C++.

--
nosy: +brett.cannon




[issue24932] Migrate _testembed to a C unit testing library

2015-08-24 Thread Nick Coghlan

New submission from Nick Coghlan:

Programs/_testembed (invoked by test_capi to test CPython's embedding support) 
is currently a very simple application with only two different embedding tests: 
https://hg.python.org/cpython/file/tip/Programs/_testembed.c

In light of proposals like PEP 432 to change the interpreter startup sequence 
and make it more configurable, it seems desirable to be better able to test 
more configuration options directly, without relying on the abstraction layer 
provided by the main CPython executable.

The specific unit testing library that prompted this idea was cmocka, which is 
used by libssh, sssd and cwrap: https://cmocka.org/

cwrap in turn is used by the Samba team to mock out network interfaces and 
other operations using LD_PRELOAD: https://cwrap.org/

We don't necessarily have to use those particular libraries; I'm just filing 
this issue to capture the idea of improving our C-level unit testing 
capabilities, and then updating regrtest to better capture and report those 
results (currently there are just a couple of tests in test_capi that call the 
_testembed executable).

--
components: Tests
messages: 249106
nosy: encukou, eric.snow, ncoghlan
priority: normal
severity: normal
status: open
title: Migrate _testembed to a C unit testing library
type: enhancement




Score one for unit testing.

2014-01-02 Thread Roy Smith
We've got a test that's been running fine ever since it was written a 
month or so ago.  Now, it's failing intermittently on our CI (continuous 
integration) box, so I took a look.

It turns out it's a stupid test because it depends on pre-existing data 
in the database.  But, the win is that while looking at the code to 
figure out why this was failing, I noticed a completely unrelated bug in 
the production code.

See, unit testing helps find bugs :-)


Re: Score one for unit testing.

2014-01-02 Thread Chris Angelico
On Fri, Jan 3, 2014 at 9:53 AM, Roy Smith r...@panix.com wrote:
 We've got a test that's been running fine ever since it was written a
 month or so ago.  Now, it's failing intermittently on our CI (continuous
 integration) box, so I took a look.

I recommend you solve these problems the way these folks did:

http://thedailywtf.com/Articles/Productive-Testing.aspx

ChrisA


Unit testing asynchronous processes

2013-12-10 Thread Tim Chase
I've got some code that kicks off a background request to a remote
server over an SSL connection using client-side certificates.  Since
the request is made from a separate thread, I'm having trouble testing
that everything is working without spinning up an out-of-band
mock server and actually making all the requests.  Are there some best
practices for testing/mocking when things are asynchronous and
involve SSL connections to servers?

I'm currently just ignoring the SSL/client-cert thing and trusting my
calls work. But I'd like to at least have some tests that cover these
aspects so that things don't fall through the cracks.

Thanks,

-tkc




Re: Unit testing asynchronous processes

2013-12-10 Thread Terry Reedy

On 12/10/2013 9:24 PM, Tim Chase wrote:

I've got some code that kicks off a background request to a remote
server over an SSL connection using client-side certificates.  Since
the request is made from a separate thread, I'm having trouble testing
that everything is working without spinning up an out-of-band
mock server and actually making all the requests.  Are there some best
practices for testing/mocking when things are asynchronous and
involve SSL connections to servers?

I'm currently just ignoring the SSL/client-cert thing and trusting my
calls work. But I'd like to at least have some tests that cover these
aspects so that things don't fall through the cracks.


Take a look in the Python test suite (Lib/test/test_xyz)

--
Terry Jan Reedy



Re: unit testing class hierarchies

2012-10-04 Thread Terry Reedy

On 10/3/2012 5:33 AM, Oscar Benjamin wrote:

On 3 October 2012 02:20, Steven D'Aprano
steve+comp.lang.pyt...@pearwood.info wrote:


But surely, regardless of where that functionality is defined, you still
need to test that both D1 and D2 exhibit the correct behaviour? Otherwise
D2 (say) may break that functionality and your tests won't notice.

Given a class hierarchy like this:

class AbstractBaseClass:
    spam = "spam"

class D1(AbstractBaseClass): pass
class D2(D1): pass


I write tests like this:

class TestD1CommonBehaviour(unittest.TestCase):
    cls = D1
    def testSpam(self):
        self.assertTrue(self.cls.spam == "spam")
    def testHam(self):
        self.assertFalse(hasattr(self.cls, 'ham'))

class TestD2CommonBehaviour(TestD1CommonBehaviour):
    cls = D2


That's an excellent idea. I wanted a convenient way to run the same
tests on two classes in order to test both a pure python and a
cython-accelerator module implementation of the same class.


Python itself has the same issue with testing Python- and C-coded modules. It 
has the additional issue that the Python module by default imports the C 
version, so additional work is needed to avoid that and actually test 
the Python code.


For instance, Lib/test/test_heapq.py has

...
py_heapq = support.import_fresh_module('heapq', blocked=['_heapq'])
c_heapq = support.import_fresh_module('heapq', fresh=['_heapq'])
...
class TestHeap(TestCase):
    module = None
    ... multiple test methods for functions module.xxx

class TestHeapPython(TestHeap):
    module = py_heapq

@skipUnless(c_heapq, 'requires _heapq')
class TestHeapC(TestHeap):
    module = c_heapq
...
def test_main(verbose=None):
    test_classes = [TestModules, TestHeapPython, TestHeapC,
                    ...

# TestHeap is omitted from the list and not run directly

--
Terry Jan Reedy



Re: unit testing class hierarchies

2012-10-03 Thread Oscar Benjamin
On 3 October 2012 02:20, Steven D'Aprano
steve+comp.lang.pyt...@pearwood.info wrote:

 But surely, regardless of where that functionality is defined, you still
 need to test that both D1 and D2 exhibit the correct behaviour? Otherwise
 D2 (say) may break that functionality and your tests won't notice.

 Given a class hierarchy like this:

 class AbstractBaseClass:
     spam = "spam"

 class D1(AbstractBaseClass): pass
 class D2(D1): pass


 I write tests like this:

 class TestD1CommonBehaviour(unittest.TestCase):
     cls = D1
     def testSpam(self):
         self.assertTrue(self.cls.spam == "spam")
     def testHam(self):
         self.assertFalse(hasattr(self.cls, 'ham'))

 class TestD2CommonBehaviour(TestD1CommonBehaviour):
     cls = D2

That's an excellent idea. I wanted a convenient way to run the same
tests on two classes in order to test both a pure python and a
cython-accelerator module implementation of the same class. I find it
difficult to work out how to do such simple things with unittest
because of its Java-like insistence on organising all tests into
classes. I can't immediately remember what solution I came up with but
yours is definitely better.


Oscar


unit testing class hierarchies

2012-10-02 Thread Ulrich Eckhardt

Greetings!

I'm trying to unittest a class hierarchy using Python 2.7. I have a 
common baseclass Base and derived classes D1 and D2 that I want to test. 
The baseclass is not instantiable on its own. Now, the first approach 
is to have test cases TestD1 and TestD2, both derived from class TestCase:


class TestD1(unittest.TestCase):
    def test_base(self):
        ...
    def test_r(self):
        ...
    def test_s(self):
        ...

class TestD2(unittest.TestCase):
    def test_base(self):
        # same as above
        ...
    def test_x(self):
        ...
    def test_y(self):
        ...

As you see, the code for test_base() is redundant, so the idea is to 
move it to a baseclass:


class TestBase(unittest.TestCase):
    def test_base(self):
        ...

class TestD1(TestBase):
    def test_r(self):
        ...
    def test_s(self):
        ...

class TestD2(TestBase):
    def test_x(self):
        ...
    def test_y(self):
        ...

The problem here is that TestBase is not a complete test case (just as 
class Base is not complete), but the unittest framework will still try 
to run it on its own. One way around this is to not derive class 
TestBase from unittest.TestCase but instead use multiple inheritance in 
the derived classes [1]. Maybe it's just my personal gut feeling, but I 
don't like that solution, because it is not obvious that this class 
actually needs to be combined with a TestCase class in order to 
function. I would rather tell the unittest framework directly that it's 
not supposed to consider this intermediate class as a test case, but 
couldn't find a way to express that clearly.


How would you do this?

Uli


[1] in C++ I would call that a mixin



Re: unit testing class hierarchies

2012-10-02 Thread Demian Brecht

[1] in C++ I would call that a mixin


Mixins are perfectly valid Python constructs as well and are (imho) a good 
fit for this use case.


On a side note, I usually append a Mixin suffix to my mixin classes in 
order to make it obvious to the reader.




--
Demian Brecht
@demianbrecht
http://demianbrecht.github.com


Re: unit testing class hierarchies

2012-10-02 Thread Thomas Bach
On Tue, Oct 02, 2012 at 02:27:11PM +0200, Ulrich Eckhardt wrote:
 As you see, the code for test_base() is redundant, so the idea is to
 move it to a baseclass:
 
 class TestBase(unittest.TestCase):
     def test_base(self):
         ...
 
 class TestD1(TestBase):
     def test_r(self):
         ...
     def test_s(self):
         ...
 
 class TestD2(TestBase):
     def test_x(self):
         ...
     def test_y(self):
         ...

Could you provide more background? How do you avoid that test_base()
runs in TestD1 or TestD2?

To me it sounds like test_base() is not actually a test. Hence, I would
rather give it a catchy name like _build_base_cls().  If a method name
does not start with 'test' it is not considered a test to run
automatically.

Does this help?

Regards,
Thomas Bach.


Re: unit testing class hierarchies

2012-10-02 Thread Peter Otten
Ulrich Eckhardt wrote:

 As you see, the code for test_base() is redundant, so the idea is to
 move it to a baseclass:
 
 class TestBase(unittest.TestCase):
     def test_base(self):
         ...
 
 class TestD1(TestBase):
     def test_r(self):
         ...
     def test_s(self):
         ...
 
 class TestD2(TestBase):
     def test_x(self):
         ...
     def test_y(self):
         ...
 
 The problem here is that TestBase is not a complete test case (just as
 class Base is not complete), but the unittest framework will still try
 to run it on its own. One way around this is to not derive class
 TestBase from unittest.

Another is to remove it from the global namespace with 

del TestBase





Re: unit testing class hierarchies

2012-10-02 Thread Fayaz Yusuf Khan
Peter Otten wrote:

 Ulrich Eckhardt wrote:
 The problem here is that TestBase is not a complete test case (just as
 class Base is not complete), but the unittest framework will still try
 to run it on its own.
How exactly are you invoking the test runner? unittest? nose? You can 
tell the test discoverer which classes you want it to run and which 
ones you don't. For the unittest library, I use my own custom 
load_tests method:
def load_tests(loader, tests, pattern):
    testcases = [TestD1, TestD2]
    return TestSuite([loader.loadTestsFromTestCase(testcase)
                      for testcase in testcases])
http://docs.python.org/library/unittest.html#load-tests-protocol

 One way around this is to not derive class
 TestBase from unittest.
 
 Another is to remove it from the global namespace with
 
 del TestBase
Removing the class from namespace may or may not help. Consider a 
scenario where someone decided to be creative with the cls.__bases__ 
attribute.

-- 
Fayaz Yusuf Khan
Cloud architect, Dexetra SS, India
fayaz.yusuf.khan_AT_gmail_DOT_com, fayaz_AT_dexetra_DOT_com
+91-9746-830-823



Re: unit testing class hierarchies

2012-10-02 Thread Ulrich Eckhardt

Am 02.10.2012 16:06, schrieb Thomas Bach:

On Tue, Oct 02, 2012 at 02:27:11PM +0200, Ulrich Eckhardt wrote:

As you see, the code for test_base() is redundant, so the idea is to
move it to a baseclass:

class TestBase(unittest.TestCase):
    def test_base(self):
        ...

class TestD1(TestBase):
    def test_r(self):
        ...
    def test_s(self):
        ...

class TestD2(TestBase):
    def test_x(self):
        ...
    def test_y(self):
        ...


Could you provide more background? How do you avoid that test_base()
runs in TestD1 or TestD2?


Sorry, there's a misunderstanding: I want test_base() to be run as part 
of both TestD1 and TestD2, because it tests basic functions provided by 
both class D1 and D2.


Uli



Re: unit testing class hierarchies

2012-10-02 Thread Ulrich Eckhardt

Am 02.10.2012 16:06, schrieb Thomas Bach:

On Tue, Oct 02, 2012 at 02:27:11PM +0200, Ulrich Eckhardt wrote:

As you see, the code for test_base() is redundant, so the idea is to
move it to a baseclass:

class TestBase(unittest.TestCase):
    def test_base(self):
        ...

class TestD1(TestBase):
    def test_r(self):
        ...
    def test_s(self):
        ...

class TestD2(TestBase):
    def test_x(self):
        ...
    def test_y(self):
        ...


Could you provide more background? How do you avoid that test_base()
runs in TestD1 or TestD2?


Sorry, there's a misunderstanding: I want test_base() to be run as part 
of both TestD1 and TestD2, because it tests basic functions provided by 
both classes D1 and D2. The instances of D1 and D2 are created in 
TestD1.setUp and TestD2.setUp and then used by all tests. There is no 
possible implementation creating such an instance for TestBase, since 
the baseclass is abstract.


Last edit for today, I hope that makes my intentions clear...

;)

Uli



Re: unit testing class hierarchies

2012-10-02 Thread Peter Otten
Ulrich Eckhardt wrote:

 Am 02.10.2012 16:06, schrieb Thomas Bach:
 On Tue, Oct 02, 2012 at 02:27:11PM +0200, Ulrich Eckhardt wrote:
 As you see, the code for test_base() is redundant, so the idea is to
 move it to a baseclass:

 class TestBase(unittest.TestCase):
     def test_base(self):
         ...
 
 class TestD1(TestBase):
     def test_r(self):
         ...
     def test_s(self):
         ...
 
 class TestD2(TestBase):
     def test_x(self):
         ...
     def test_y(self):
         ...

 Could you provide more background? How do you avoid that test_base()
 runs in TestD1 or TestD2?
 
 Sorry, there's a misunderstanding: I want test_base() to be run as part
 of both TestD1 and TestD2, because it tests basic functions provided by
 both classes D1 and D2. The instances of D1 and D2 are created in
 TestD1.setUp and TestD2.setUp and then used by all tests. There is no
 possible implementation creating such an instance for TestBase, since
 the baseclass is abstract.
 
 Last edit for today, I hope that makes my intentions clear...
 
 ;)

Ceterum censeo baseclassinem esse delendam ;)

$ cat test_shared.py
import unittest

class Shared(unittest.TestCase):
    def test_shared(self):
        pass

class D1(Shared):
    def test_d1_only(self):
        pass

class D2(Shared):
    def test_d2_only(self):
        pass

del Shared

unittest.main()

$ python test_shared.py -v
test_d1_only (__main__.D1) ... ok
test_shared (__main__.D1) ... ok
test_d2_only (__main__.D2) ... ok
test_shared (__main__.D2) ... ok

--
Ran 4 tests in 0.000s

OK
$ 




Re: unit testing class hierarchies

2012-10-02 Thread Peter Otten
Fayaz Yusuf Khan wrote:

 Peter Otten wrote:
 
 Ulrich Eckhardt wrote:
 The problem here is that TestBase is not a complete test case (just as
 class Base is not complete), but the unittest framework will still try
 to run it on its own.
 How exactly are you invoking the test runner? unittest? nose? You can
 tell the test discoverer which classes you want it to run and which
 ones you don't. For the unittest library, I use my own custom
 load_tests methods:
 def load_tests(loader, tests, pattern):
     testcases = [TestD1, TestD2]
     return TestSuite([loader.loadTestsFromTestCase(testcase)
                       for testcase in testcases])
 http://docs.python.org/library/unittest.html#load-tests-protocol
 
 One way around this is to not derive class
 TestBase from unittest.
 
 Another is to remove it from the global namespace with
 
 del TestBase
 Removing the class from namespace may or may not help. Consider a
 scenario where someone decided to be creative with the cls.__bases__
 attribute.

Isn't that a bit far-fetched? I'd rather start simple and fix problems as 
they arise... 




Re: unit testing class hierarchies

2012-10-02 Thread Demian Brecht
Am I missing something? Is there something that wasn't answered by my reply
about using mixins?

from unittest import TestCase

class SharedTestMixin(object):
    def test_shared(self):
        self.assertNotEquals('foo', 'bar')

class TestA(TestCase, SharedTestMixin):
    def test_a(self):
        self.assertEquals('a', 'a')

class TestB(TestCase, SharedTestMixin):
    def test_b(self):
        self.assertEquals('b', 'b')

$ nosetests test.py -v
test_a (test.TestA) ... ok
test_shared (test.TestA) ... ok
test_b (test.TestB) ... ok
test_shared (test.TestB) ... ok

--
Ran 4 tests in 0.001s

OK

This seems to be a clear answer to the problem that solves the original
requirements without introducing error-prone, non-obvious solutions.



 Sorry, there's a misunderstanding: I want test_base() to be run as part of
 both TestD1 and TestD2, because it tests basic functions provided by both
 classes D1 and D2. The instances of D1 and D2 are created in TestD1.setUp
 and TestD2.setUp and then used by all tests. There is no possible
 implementation creating such an instance for TestBase, since the baseclass
 is abstract.

 Last edit for today, I hope that makes my intentions clear...

 ;)

 Uli






Re: unit testing class hierarchies

2012-10-02 Thread Mark Lawrence

On 02/10/2012 19:06, Demian Brecht wrote:

Am I missing something? Is there something that wasn't answered by my reply
about using mixins?

from unittest import TestCase

class SharedTestMixin(object):
    def test_shared(self):
        self.assertNotEquals('foo', 'bar')

class TestA(TestCase, SharedTestMixin):
    def test_a(self):
        self.assertEquals('a', 'a')

class TestB(TestCase, SharedTestMixin):
    def test_b(self):
        self.assertEquals('b', 'b')

$ nosetests test.py -v
test_a (test.TestA) ... ok
test_shared (test.TestA) ... ok
test_b (test.TestB) ... ok
test_shared (test.TestB) ... ok

--
Ran 4 tests in 0.001s

OK

This seems to be a clear answer to the problem that solves the original
requirements without introducing error-prone, non-obvious solutions.





Peter Otten's response is obviously vastly superior to yours, 4 tests in 
0.000s compared to your highly inefficient 4 tests in 0.001s :)


--
Cheers.

Mark Lawrence.



Re: unit testing class hierarchies

2012-10-02 Thread Ben Finney
Ulrich Eckhardt ulrich.eckha...@dominolaser.com writes:

 I want test_base() to be run as part of both TestD1 and TestD2,
 because it tests basic functions provided by both classes D1 and D2.

It sounds, from your description so far, that you have identified a
design flaw in D1 and D2.

The common functionality should be moved to a common code point (maybe a
base class of D1 and D2; maybe a function without need of a class). Then
you'll have only one occurrence of that functionality to test, which is
good design as well as easier test code :-)

-- 
 \  “When I was a little kid we had a sand box. It was a quicksand |
  `\   box. I was an only child... eventually.” —Steven Wright |
_o__)  |
Ben Finney



Re: unit testing class hierarchies

2012-10-02 Thread Roy Smith
In article mailman.1734.1349199947.27098.python-l...@python.org,
 Peter Otten __pete...@web.de wrote:

  Another is to remove it from the global namespace with
  
  del TestBase

When I had this problem, that's the solution I used.


Re: unit testing class hierarchies

2012-10-02 Thread Steven D'Aprano
On Wed, 03 Oct 2012 08:30:19 +1000, Ben Finney wrote:

 Ulrich Eckhardt ulrich.eckha...@dominolaser.com writes:
 
 I want test_base() to be run as part of both TestD1 and TestD2, because
 it tests basic functions provided by both classes D1 and D2.
 
 It sounds, from your description so far, that you have identified a
 design flaw in D1 and D2.
 
 The common functionality should be moved to a common code point (maybe a
 base class of D1 and D2; maybe a function without need of a class). Then
 you'll have only one occurrence of that functionality to test, which is
 good design as well as easier test code :-)

But surely, regardless of where that functionality is defined, you still 
need to test that both D1 and D2 exhibit the correct behaviour? Otherwise 
D2 (say) may break that functionality and your tests won't notice.

Given a class hierarchy like this:

class AbstractBaseClass:
    spam = "spam"

class D1(AbstractBaseClass): pass
class D2(D1): pass


I write tests like this:

class TestD1CommonBehaviour(unittest.TestCase):
    cls = D1
    def testSpam(self):
        self.assertTrue(self.cls.spam == "spam")
    def testHam(self):
        self.assertFalse(hasattr(self.cls, 'ham'))

class TestD2CommonBehaviour(TestD1CommonBehaviour):
    cls = D2

class TestD1SpecialBehaviour(unittest.TestCase):
    # D1 specific tests here
    ...

class TestD2SpecialBehaviour(unittest.TestCase):
    # D2 specific tests here
    ...



If D2 doesn't inherit from D1, but both from AbstractBaseClass, I need to 
do a little more work. First, in the test suite I create a subclass 
specifically for testing the common behaviour, write tests for that, then 
subclass from that:


class MyD(AbstractBaseClass):
    # Defeat the prohibition on instantiating the base class
    pass

class TestCommonBehaviour(unittest.TestCase):
    cls = MyD
    def testSpam(self):
        self.assertTrue(self.cls.spam == "spam")
    def testHam(self):
        self.assertFalse(hasattr(self.cls, 'ham'))

class TestD1CommonBehaviour(TestCommonBehaviour):
    cls = D1

class TestD2CommonBehaviour(TestCommonBehaviour):
    cls = D2

D1 and D2 specific tests remain the same.


Either way, each class gets tested for the full set of expected 
functionality.



-- 
Steven


Re: unit-profiling, similar to unit-testing

2011-11-17 Thread Ulrich Eckhardt

Am 16.11.2011 15:36, schrieb Roy Smith:

It's really, really, really hard to either control for, or accurately
measure, things like CPU or network load.  There's so much stuff you
can't even begin to see.  The state of your main memory cache.  Disk
fragmentation.  What I/O is happening directly out of kernel buffers vs
having to do a physical disk read.  How slow your DNS server is today.


Fortunately, I am in a position where I'm running tests on one system 
(generic desktop PC) while the system to test is another one, and there 
both hardware and software is under my control. Since this is rather 
smallish and embedded, the power and load of the desktop don't play a 
significant role, the other side is usually the bottleneck. ;)




What I suggest is instrumenting your unit test suite to record not just
the pass/fail status of every test, but also the test duration.  Stick
these into a database as the tests run.  Over time, you will accumulate
a whole lot of performance data, which you can then start to mine.


I'm not sure. I see unit tests as something that makes sure things run 
correctly. For performance testing, I have functions to set up and tear 
down the environment. Then, I found it useful to have separate code to 
prime a cache, which is something done before each test run, but which 
is not part of the test run itself. I'm repeating each test run N times, 
recording the times and calculating maximum, minimum, average and 
standard deviation. Some of this is similar to unit testing (code to set 
up/tear down), but other things are too different. Also, sometimes I can 
vary tests with a factor F, then I would also want to capture the 
influence of this factor. I would even wonder if you can't verify the 
behaviour agains an expected Big-O complexity somehow.


All of this is rather general, not specific to my use case, hence my 
question if there are existing frameworks to facilitate this task. Maybe 
it's time to create one...




While you're running the tests, gather as much system performance data
as you can (output of top, vmstat, etc) and stick that into your
database too.  You never know when you'll want to refer to the data, so
just collect it all and save it forever.


Yes, this is surely something that is necessary, in particular since 
there are no clear success/failure outputs like for unit tests and they 
require a human to interpret them.



Cheers!

Uli



Re: unit-profiling, similar to unit-testing

2011-11-17 Thread Roy Smith
In article kkuep8-nqd@satorlaser.homedns.org,
 Ulrich Eckhardt ulrich.eckha...@dominolaser.com wrote:

 Yes, this is surely something that is necessary, in particular since 
 there are no clear success/failure outputs like for unit tests and they 
 require a human to interpret them.

As much as possible, you want to automate things so no human 
intervention is required.

For example, let's say you have a test which calls foo() and times how 
long it takes.  You've already mentioned that you run it N times and 
compute some basic (min, max, avg, sd) stats.  So far, so good.

The next step is to do some kind of regression against past results.  
Once you've got a bunch of historical data, it should be possible to 
look at today's numbers and detect any significant change in performance.

Much as I loathe the bureaucracy and religious fervor which has grown up 
around Six Sigma, it does have some good tools.  You might want to look 
into control charts (http://en.wikipedia.org/wiki/Control_chart).  You 
think you've got the test environment under control, do you?  Try 
plotting a month's worth of run times for a particular test on a control 
chart and see what it shows.

Assuming your process really is under control, I would write scripts 
that did the following kinds of analysis:

1) For a given test, do a linear regression of run time vs date.  If the 
line has any significant positive slope, you want to investigate why.

2) You already mentioned, "I would even wonder if you can't verify the 
behaviour against an expected Big-O complexity somehow."  Of course you 
can.  Run your test a bunch of times with different input sizes.  I 
would try something like a 1-2-5 progression over several decades (i.e. 
input sizes of 10, 20, 50, 100, 200, 500, 1000, etc)  You will have to 
figure out what an appropriate range is, and how to generate useful 
input sets.  Now, curve fit your performance numbers to various shape 
curves and see what correlation coefficient you get.

All that being said, in my experience, nothing beats plotting your data 
and looking at it.
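
For the simple end of that, something like the following is enough to get 
started (a sketch only; feed it whatever history you've already stored):

import time

def time_call(func, n=20):
    # Run func n times and return (min, max, mean, standard deviation).
    times = []
    for _ in range(n):
        start = time.time()
        func()
        times.append(time.time() - start)
    mean = sum(times) / n
    sd = (sum((t - mean) ** 2 for t in times) / n) ** 0.5
    return min(times), max(times), mean, sd

def slope(xs, ys):
    # Least-squares slope of ys (run times) against xs (dates as numbers).
    # A significant positive slope is what you want to investigate.
    n = float(len(xs))
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den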


Re: unit-profiling, similar to unit-testing

2011-11-17 Thread Tycho Andersen
On Wed, Nov 16, 2011 at 09:36:40AM -0500, Roy Smith wrote:
 In article 95bcp8-bft@satorlaser.homedns.org,
  Ulrich Eckhardt ulrich.eckha...@dominolaser.com wrote:
 
  Hi!
  
  I'm currently trying to establish a few tests here that evaluate certain 
  performance characteristics of our systems. As part of this, I found 
  that these tests are rather similar to unit-tests, only that they are 
  much more fuzzy and obviously dependent on the systems involved, CPU 
  load, network load, day of the week (Tuesday is virus scan day) etc.
  
  What I'd just like to ask is how you do such things. Are there tools 
  available that help? I was considering using the unit testing framework, 
  but the problem with that is that the results are too hard to interpret 
  programmatically and too easy to misinterpret manually. Any suggestions?
 
 It's really, really, really hard to either control for, or accurately 
 measure, things like CPU or network load.  There's so much stuff you 
 can't even begin to see.  The state of your main memory cache.  Disk 
 fragmentation.  What I/O is happening directly out of kernel buffers vs 
 having to do a physical disk read.  How slow your DNS server is today.

While I agree there's a lot of things you can't control for, you can
get a more accurate picture by using CPU time instead of wall time
(e.g. the clock() system call). If what you care about is mostly CPU
time, you can control for the "your disk is fragmented", "your DNS
server died", or "my cow-orker was banging on the test machine" cases
this way.
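
i.e. something like this (a sketch; note that on Windows clock() measures
wall time rather than CPU time):

import time

start_wall = time.time()
start_cpu = time.clock()
sum(i * i for i in range(10 ** 6))  # stand-in workload
wall = time.time() - start_wall     # elapsed time, includes everything
cpu = time.clock() - start_cpu      # CPU time on Unix
print cpu, wall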

\t


Re: unit-profiling, similar to unit-testing

2011-11-17 Thread spartan.the
On Nov 17, 4:03 pm, Roy Smith r...@panix.com wrote:
 In article kkuep8-nqd@satorlaser.homedns.org,
  Ulrich Eckhardt ulrich.eckha...@dominolaser.com wrote:

  Yes, this is surely something that is necessary, in particular since
  there are no clear success/failure outputs like for unit tests and they
  require a human to interpret them.

 As much as possible, you want to automate things so no human
 intervention is required.

 For example, let's say you have a test which calls foo() and times how
 long it takes.  You've already mentioned that you run it N times and
 compute some basic (min, max, avg, sd) stats.  So far, so good.

 The next step is to do some kind of regression against past results.
 Once you've got a bunch of historical data, it should be possible to
 look at today's numbers and detect any significant change in performance.

 Much as I loathe the bureaucracy and religious fervor which has grown up
 around Six Sigma, it does have some good tools.  You might want to look
 into control charts (http://en.wikipedia.org/wiki/Control_chart).  You
 think you've got the test environment under control, do you?  Try
 plotting a month's worth of run times for a particular test on a control
 chart and see what it shows.

 Assuming your process really is under control, I would write scripts
 that did the following kinds of analysis:

 1) For a given test, do a linear regression of run time vs date.  If the
 line has any significant positive slope, you want to investigate why.

 2) You already mentioned, "I would even wonder if you can't verify the
 behaviour against an expected Big-O complexity somehow."  Of course you
 can.  Run your test a bunch of times with different input sizes.  I
 would try something like a 1-2-5 progression over several decades (i.e.
 input sizes of 10, 20, 50, 100, 200, 500, 1000, etc)  You will have to
 figure out what an appropriate range is, and how to generate useful
 input sets.  Now, curve fit your performance numbers to various shape
 curves and see what correlation coefficient you get.

 All that being said, in my experience, nothing beats plotting your data
 and looking at it.

I strongly agree with Roy, here.

Ulrich, I recommend exploring how Google measures App Engine's
health here: http://code.google.com/status/appengine.

Unit tests are inappropriate here; any single unit test can answer
PASS or FAIL, YES or NO. It can't answer the question "how much".
Unless you just want to use unit tests anyway; in that case any
arguments here just don't make sense.

I suggest:

1. Decide what you want to measure. Each measure must be a number in a
fixed range (0..100, -5..5), so you can plot it.
2. Write no-UI programs to get each number (measure) and write it to
CSV. Run each of them several times, take away the 1 worst and 1 best
results, and average the rest (see the sketch below).
3. Collect the data for some period of time.
4. Plot those averages over a time axis (it's easy with CSV
format).
5. Make sure you automate this process (batch files or so) so the plot
is generated automatically each hour or each day.

And then after a month you can decide if you want to divide your
number ranges into green-yellow-red zones. More often than not you may
find that your measures are so inaccurate and random that you can't
trust them. Then you'll either forget about it or dive into math
(statistics). You have about a 5% chance of succeeding ;)


Re: unit-profiling, similar to unit-testing

2011-11-17 Thread Roy Smith
In article mailman.2810.1321562763.27778.python-l...@python.org,
 Tycho Andersen ty...@tycho.ws wrote:

 While I agree there's a lot of things you can't control for, you can
 get a more accurate picture by using CPU time instead of wall time
 (e.g. the clock() system call). If what you care about is mostly CPU
 time [...]

That's a big if.  In some cases, CPU time is important, but more often, 
wall-clock time is more critical.  Let's say I've got two versions of a 
program.  Here's some results for my test run:

Version   CPU Time    Wall-Clock Time
   1      2.0 hours   2.5 hours
   2      1.5 hours   5.0 hours

Between versions, I reduced the CPU time to complete the given task, but 
increased the wall clock time.  Perhaps I doubled the size of some hash 
table.  Now I get a lot fewer hash collisions (so I spend less CPU time 
re-hashing), but my memory usage went up so I'm paging a lot and my 
locality of reference went down so my main memory cache hit rate is 
worse.

Which is better?  I think most people would say version 1 is better.

CPU time is only important in a situation where the system is CPU bound.  
In many real-life cases, that's not at all true.  Things can be memory 
bound.  Or I/O bound (which, when you consider paging, is often the same 
thing as memory bound).  Or lock-contention bound.

Before you starting measuring things, it's usually a good idea to know 
what you want to measure, and why :-)


unit-profiling, similar to unit-testing

2011-11-16 Thread Ulrich Eckhardt

Hi!

I'm currently trying to establish a few tests here that evaluate certain 
performance characteristics of our systems. As part of this, I found 
that these tests are rather similar to unit-tests, only that they are 
much more fuzzy and obviously dependent on the systems involved, CPU 
load, network load, day of the week (Tuesday is virus scan day) etc.


What I'd just like to ask is how you do such things. Are there tools 
available that help? I was considering using the unit testing framework, 
but the problem with that is that the results are too hard to interpret 
programmatically and too easy to misinterpret manually. Any suggestions?


Cheers!

Uli



Re: unit-profiling, similar to unit-testing

2011-11-16 Thread Roy Smith
In article 95bcp8-bft@satorlaser.homedns.org,
 Ulrich Eckhardt ulrich.eckha...@dominolaser.com wrote:

 Hi!
 
 I'm currently trying to establish a few tests here that evaluate certain 
 performance characteristics of our systems. As part of this, I found 
 that these tests are rather similar to unit-tests, only that they are 
 much more fuzzy and obviously dependent on the systems involved, CPU 
 load, network load, day of the week (Tuesday is virus scan day) etc.
 
 What I'd just like to ask is how you do such things. Are there tools 
 available that help? I was considering using the unit testing framework, 
 but the problem with that is that the results are too hard to interpret 
 programmatically and too easy to misinterpret manually. Any suggestions?

It's really, really, really hard to either control for, or accurately 
measure, things like CPU or network load.  There's so much stuff you 
can't even begin to see.  The state of your main memory cache.  Disk 
fragmentation.  What I/O is happening directly out of kernel buffers vs 
having to do a physical disk read.  How slow your DNS server is today.

What I suggest is instrumenting your unit test suite to record not just 
the pass/fail status of every test, but also the test duration.  Stick 
these into a database as the tests run.  Over time, you will accumulate 
a whole lot of performance data, which you can then start to mine.

While you're running the tests, gather as much system performance data 
as you can (output of top, vmstat, etc) and stick that into your 
database too.  You never know when you'll want to refer to the data, so 
just collect it all and save it forever.
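
With Python 2.7's unittest you can capture the durations without touching
the tests themselves, roughly like so (a sketch; the SQLite schema is made
up):

import sqlite3
import time
import unittest

class TimingResult(unittest.TextTestResult):
    # Record every test's duration into a database as the suite runs.
    def startTest(self, test):
        self._started = time.time()
        super(TimingResult, self).startTest(test)

    def stopTest(self, test):
        duration = time.time() - self._started
        conn = sqlite3.connect('timings.db')
        conn.execute('CREATE TABLE IF NOT EXISTS timings '
                     '(test TEXT, duration REAL, run_at REAL)')
        conn.execute('INSERT INTO timings VALUES (?, ?, ?)',
                     (str(test), duration, time.time()))
        conn.commit()
        conn.close()
        super(TimingResult, self).stopTest(test)

runner = unittest.TextTestRunner(resultclass=TimingResult)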


Unit testing beginner question

2011-05-23 Thread Andrius
Hello,

I would be grateful for an explanation.

I did a simple test case:

def setUp(self):
    self.testListNone = None

def testListSlicing(self):
    self.assertRaises(TypeError, self.testListNone[:1])

and I am expecting the test to pass, but I am getting an exception:
Traceback (most recent call last):
    self.assertRaises(TypeError, self.testListNone[:1])
TypeError: 'NoneType' object is unsubscriptable

I thought that assertRaises would pass since a TypeError exception will
be raised?

Ta,
Andrius





Re: Unit testing beginner question

2011-05-23 Thread Andrius A
That was quick! Thanks Ian


On 23 May 2011 23:46, Ian Kelly ian.g.ke...@gmail.com wrote:

 On Mon, May 23, 2011 at 4:30 PM, Andrius andriu...@gmail.com wrote:
  and I am expecting test to pass, but I am getting exception:
  Traceback (most recent call last):
 self.assertRaises(TypeError, self.testListNone[:1])
  TypeError: 'NoneType' object is unsubscriptable
 
  I thought that assertRaises will pass since TypeError exception will
  be raised?

 The second argument to assertRaises must be a function that
 assertRaises will call.  assertRaises can't catch the error above
 because it is raised when the argument is evaluated, before
 assertRaises has even been called.

 This would work:

 self.assertRaises(TypeError, lambda: self.testListNone[:1])

 Cheers,
 Ian



Re: Unit testing beginner question

2011-05-23 Thread Ian Kelly
On Mon, May 23, 2011 at 4:30 PM, Andrius andriu...@gmail.com wrote:
 and I am expecting test to pass, but I am getting exception:
 Traceback (most recent call last):
    self.assertRaises(TypeError, self.testListNone[:1])
 TypeError: 'NoneType' object is unsubscriptable

 I thought that assertRaises will pass since TypeError exception will
 be raised?

The second argument to assertRaises must be a function that
assertRaises will call.  assertRaises can't catch the error above
because it is raised when the argument is evaluated, before
assertRaises has even been called.

This would work:

self.assertRaises(TypeError, lambda: self.testListNone[:1])

Cheers,
Ian


Re: Unit testing beginner question

2011-05-23 Thread Roy Smith
In article mailman.1991.1306191316.9059.python-l...@python.org,
 Ian Kelly ian.g.ke...@gmail.com wrote:

 This would work:
 
 self.assertRaises(TypeError, lambda: self.testListNone[:1])

If you're using the version of unittest from python 2.7, there's an even 
nicer way to write this:

with self.assertRaises(TypeError):
self.testListNone[:1]


Re: Unit testing multiprocessing code on Windows

2011-02-18 Thread Matt Chaput

On 18/02/2011 2:54 AM, Terry Reedy wrote:

On 2/17/2011 6:31 PM, Matt Chaput wrote:

Does anyone know the right way to write a unit test for code that uses
multiprocessing on Windows?


I would start with Lib/test/test_multiprocessing.


Good idea, but on the one hand it doesn't seem to be doing anything 
special, and on the other hand it seems to do its own things, like not 
having its test cases inherit from unittest.TestCase. I also don't know 
if the Python devs start it with distutils or nosetests, which are the 
ones I'm having a problem with. For example, starting my test suite 
inside PyDev doesn't show the bug.


My test code isn't doing anything unusual... this is pretty much all I 
do to trigger the bug. (None of the imported code has anything to do 
with processes.)



from __future__ import with_statement
import unittest
import random

from whoosh import fields, query
from whoosh.support.testing import TempIndex

try:
    import multiprocessing
except ImportError:
    multiprocessing = None


if multiprocessing:
    class MPFCTask(multiprocessing.Process):
        def __init__(self, storage, indexname):
            multiprocessing.Process.__init__(self)
            self.storage = storage
            self.indexname = indexname

        def run(self):
            ix = self.storage.open_index(self.indexname)
            with ix.searcher() as s:
                r = s.search(query.Every(), sortedby="key", limit=None)
                result = "".join([h["key"] for h in r])
                assert result == "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz", result


class TestSorting(unittest.TestCase):
    def test_mp_fieldcache(self):
        if not multiprocessing:
            return

        schema = fields.Schema(key=fields.KEYWORD(stored=True))
        with TempIndex(schema, "mpfieldcache") as ix:
            domain = list(u"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
            random.shuffle(domain)
            w = ix.writer()
            for char in domain:
                w.add_document(key=char)
            w.commit()

            tasks = [MPFCTask(ix.storage, ix.indexname) for _ in xrange(4)]
            for task in tasks:
                task.start()
            for task in tasks:
                task.join()


if __name__ == '__main__':
    unittest.main()




Re: Unit testing multiprocessing code on Windows

2011-02-18 Thread Matt Chaput

On 17/02/2011 8:22 PM, phi...@semanchuk.com wrote:


Hi Matt,
I assume you're aware of this documentation, especially the item
entitled Safe importing of main module?

http://docs.python.org/release/2.6.6/library/multiprocessing.html#windows


Yes, but the thing is my code isn't __main__, my unittest classes are
being loaded by "setup.py test" or nosetests. And while I'm assured
multiprocessing doesn't duplicate the original command line, what I get
sure looks like it, because if I use "python setup.py test" that command 
seems to be re-run for every Process that starts, but if I use
nosetests then *that* seems to be re-run for every Process.

Matt


Re: Unit testing multiprocessing code on Windows

2011-02-18 Thread David
On Thu, 17 Feb 2011 18:31:59 -0500, Matt Chaput wrote:

 
 The problem is that with both python setup.py tests and nosetests, 

 Maybe multiprocessing is starting new Windows processes by copying the 
 command line of the current process? But if the command line is 
 nosetests, it's a one way ticket to an infinite explosion of processes.

You can adapt this code to inhibit test execution if another process is
already running the tests.
http://code.activestate.com/recipes/474070-creating-a-single-instance-application/



Unit testing multiprocessing code on Windows

2011-02-17 Thread Matt Chaput
Does anyone know the right way to write a unit test for code that uses 
multiprocessing on Windows?


The problem is that with both "python setup.py tests" and nosetests, 
when they get to testing any code that starts Processes they spawn 
multiple copies of the testing suite (i.e. the new processes start 
running tests as if they were started with "python setup.py 
tests"/nosetests). The test runner in PyDev works properly.


Maybe multiprocessing is starting new Windows processes by copying the 
command line of the current process? But if the command line is 
nosetests, it's a one way ticket to an infinite explosion of processes.


Any thoughts?

Thanks,

Matt


Unit testing multiprocessing code on Windows

2011-02-17 Thread Matt Chaput
Does anyone know the right way to write a unit test for code that uses 
multiprocessing on Windows?


The problem is that with both "python setup.py tests" and nosetests, 
when they get to a multiprocessing test they spawn multiple copies of 
the testing suite. The test runner in PyDev works properly.


Maybe multiprocessing is starting new Windows processes by copying the 
command line of the current process, but if the command line is 
nosetests, it's a one way ticket to an infinite explosion of processes.


Any thoughts?

Thanks,

Matt


Re: Unit testing multiprocessing code on Windows

2011-02-17 Thread philip

Quoting Matt Chaput m...@whoosh.ca:

Does anyone know the right way to write a unit test for code that
uses multiprocessing on Windows?


The problem is that with both "python setup.py tests" and
nosetests, when they get to testing any code that starts Processes
they spawn multiple copies of the testing suite (i.e. the new
processes start running tests as if they were started with "python
setup.py tests"/nosetests). The test runner in PyDev works properly.


Maybe multiprocessing is starting new Windows processes by copying
the command line of the current process? But if the command line is
nosetests, it's a one way ticket to an infinite explosion of
processes.




Hi Matt,
I assume you're aware of this documentation, especially the item
entitled "Safe importing of main module"?


http://docs.python.org/release/2.6.6/library/multiprocessing.html#windows
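
The key point being that anything which starts processes or tests has to
live under the __main__ guard, so the module can be re-imported safely,
e.g. (a minimal sketch):

import multiprocessing
import unittest

def worker(q):
    q.put(42)

class SpawnTest(unittest.TestCase):
    def test_worker(self):
        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=worker, args=(q,))
        p.start()
        p.join()
        self.assertEqual(q.get(), 42)

if __name__ == '__main__':
    # On Windows each child process re-imports this module; without
    # this guard unittest.main() would run again in every child.
    unittest.main()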


HTH
P



Re: Unit testing multiprocessing code on Windows

2011-02-17 Thread Terry Reedy

On 2/17/2011 6:31 PM, Matt Chaput wrote:

Does anyone know the right way to write a unit test for code that uses
multiprocessing on Windows?


I would start with Lib/test/test_multiprocessing.


--
Terry Jan Reedy



Attest 0.4 released: Modern, Pythonic unit testing

2011-01-09 Thread dag.odenh...@gmail.com
Hello fellow Pythonista,

I just released version 0.4 of Attest, a modern framework for unit testing.

Website and documentation: http://packages.python.org/Attest/
Source code: https://github.com/dag/attest
Issues: https://github.com/dag/attest/issues
PyPI: http://pypi.python.org/pypi/Attest/0.4

The package is a few months old but it is fully tested itself on
Python 2.5-2.7, 3.1 and on PyPy.

Please review and send me feedback!

Dag


Strategies for unit testing an HTTP server.

2010-11-29 Thread Alice Bevan–McGregor

Hello!

Two things are missing from the web server I've been developing before 
I can release 1.0: unit tests and documentation.  Documentation being 
entirely my problem, I've run into a bit of a snag with unit testing; 
just how would you go about it?


Specifically, I need to test things like HTTP/1.0 request/response 
cycles, HTTP/1.0 keep-alives, HTTP/1.1 pipelined request/response 
cycles, HTTP/1.1 "Connection: close", HTTP/1.1 chunked 
requests/responses, etc.


What is the recommended / best way to test a daemon in Python?  (Note 
that some of the tests need to keep the client socket open.)  Is 
there an easy way to run a daemon in another thread, then kill it after 
running some tests across it?


Even better, are there any dedicated (Python or non-Python) HTTP/1.1 
compliance testing suites?
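
For reference, the sort of harness I've been sketching runs the daemon in a
daemonized thread and speaks raw HTTP over a socket; make_server() below is
a stand-in for however the daemon actually gets constructed, and
serve_forever()/shutdown() for whatever its start/stop API turns out to be:

import socket
import threading
import unittest

class HTTP10Tests(unittest.TestCase):
    def setUp(self):
        # make_server() is a placeholder for the real server factory.
        self.httpd = make_server('127.0.0.1', 8080)
        self.thread = threading.Thread(target=self.httpd.serve_forever)
        self.thread.daemon = True
        self.thread.start()

    def tearDown(self):
        self.httpd.shutdown()
        self.thread.join()

    def test_request_response_cycle(self):
        sock = socket.create_connection(('127.0.0.1', 8080), timeout=5)
        sock.sendall('GET / HTTP/1.0\r\n\r\n')
        response = sock.recv(65536)
        self.assertTrue(response.startswith('HTTP/1.'))
        sock.close()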


Thank you for any assistance,

— Alice.




Re: Unit testing errors (testing the platform module)

2010-04-14 Thread Gabriel Genellina
On Tue, 13 Apr 2010 11:01:19 -0300, John Maclean jaye...@gmail.com
wrote:



Is there an error in my syntax? Why is my test failing? Line 16.

==
FAIL: platform.__builtins__.blah
--
Traceback (most recent call last):
  File "stfu/testing/test_pyfactor.py", line 16, in testplatformbuiltins
    self.assertEquals(platform.__builtins__.__class__, "<type 'dict'>")
AssertionError: <type 'dict'> != "<type 'dict'>"

--


To express the condition SOMEOBJECT must be a dictionary, use:

isinstance(SOMEOBJECT, dict)

SOMEOBJECT might actually be an instance of any derived class and still  
pass the test; that's usually the desired behavior.


In the rare cases where only a very specific type is allowed, use this  
form instead:


type(SOMEOBJECT) is dict


The test case above should read then:

self.assert_(isinstance(platform.__builtins__, dict),
 type(platform.__builtins__))

--
Gabriel Genellina



unit testing, setUp and scoping

2010-04-14 Thread john maclean
Can one use the setUp block to store variables so that they can be
used elsewhere in unit tests? I'm thinking that it's better to have
variables created in another script and have them imported from within
the unit test.

#!/usr/bin/env python
'''create knowledge base of strings by unit testing'''
import unittest

class TestPythonStringsTestCase(unittest.TestCase):
    def setUp(self):
        print '''setting up stuff for ''', __name__
        s1 = 'single string'
        print dir(str)

    def testclass(self):
        '''test strings are of class str'''
        self.assertEqual(s1.__class__, str)

if __name__ == "__main__":
    unittest.main()



-- 
John Maclean
07739 171 531
MSc (DIC)

Enterprise Linux Systems Engineer


Re: Unit testing errors (testing the platform module)

2010-04-14 Thread john maclean
On 14 April 2010 09:09, Gabriel Genellina gagsl-...@yahoo.com.ar wrote:
 On Tue, 13 Apr 2010 11:01:19 -0300, John Maclean jaye...@gmail.com
 wrote:

 Is there an error in my syntax? Why is my test failing? Line 16.

 ==
 FAIL: platform.__builtins__.blah
 --
 Traceback (most recent call last):
  File "stfu/testing/test_pyfactor.py", line 16, in testplatformbuiltins
    self.assertEquals(platform.__builtins__.__class__, "<type 'dict'>")
 AssertionError: <type 'dict'> != "<type 'dict'>"

 --

 To express the condition SOMEOBJECT must be a dictionary, use:

    isinstance(SOMEOBJECT, dict)

 SOMEOBJECT might actually be an instance of any derived class and still pass
 the test; that's usually the desired behavior.

 In the rare cases where only a very specific type is allowed, use this form
 instead:

    type(SOMEOBJECT) is dict


 The test case above should read then:

    self.assert_(isinstance(platform.__builtins__, dict),
                 type(platform.__builtins__))

 --
 Gabriel Genellina



This is cool. Thanks for your replies.

self.assertEqual(platform.__builtins__.__class__, dict,
                 "platform.__class__ supposed to be dict")
self.assertEqual(platform.__name__, 'platform')


-- 
John Maclean
07739 171 531
MSc (DIC)

Enterprise Linux Systems Engineer


Re: unit testing, setUp and scoping

2010-04-14 Thread Bruno Desthuilliers

john maclean wrote:

Can one use the setUp block to store variables so that they can be
used elsewhere in unit tests? I'm thinking that it's better to have
variables created in another script and have it imported from within
the unit test


???



#!/usr/bin/env python
'''create knowledge base of strings by unit testing'''
import unittest

class TestPythonStringsTestCase(unittest.TestCase):
    def setUp(self):
        print '''setting up stuff for ''', __name__
        s1 = 'single string'
        print dir(str)

    def testclass(self):
        '''test strings are of class str'''
        self.assertEqual(s1.__class__, str)



Err... What about FIRST doing the FineTutorial ???

class TestPythonStringsTestCase(unittest.TestCase):
    def setUp(self):
        self.s1 = 'single string'

    def testclass(self):
        """test strings are of class str"""
        self.assert_(type(self.s1) is str)


--
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing errors (testing the platform module)

2010-04-14 Thread J. Cliff Dyer
On Wed, 2010-04-14 at 15:51 +0100, john maclean wrote:
 self.assertEqual(platform.__builtins__.__class__, dict,
 "platform.__class__ supposed to be dict")
 self.assertEqual(platform.__name__, 'platform' ) 

The preferred spelling for:

platform.__builtins__.__class__ 

would be

type(platform.__builtins__)

It's shorter and easier to read, but essentially says the same thing.

You can also use it on integer literals, which you can't do with your
syntax:

>>> type(1)
<type 'int'>
>>> 1.__class__
...
SyntaxError: invalid syntax

Admittedly, this is a trivial benefit.  If you're using a literal, you
already know what type you're dealing with.

Cheers,
Cliff

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unit testing, setUp and scoping

2010-04-14 Thread Francisco Souza

 On Wed, Apr 14, 2010 at 11:47 AM, john maclean jaye...@gmail.com wrote:
 Can one use the setUp block to store variables so that they can be
 used elsewhere in unit tests? I'm thinking that it's better to have
 variables created in another script and have it imported from within
 the unit test


Hi John,
each TestCase is a object, and you can store attributes in this objects
normally, using self :)

class TestPythonStringsTestCase(unittest.TestCase):
    def setUp(self):
        print '''setting up stuff for ''', __name__
        self.s1 = 'single string'
        print dir(str)

    def testclass(self):
        '''test strings are of class str'''
        self.assertEqual(self.s1.__class__, str)

if __name__ == '__main__':
    unittest.main()


This works fine and s1 is an internal attribute of your TestCase.

Best regards,
Francisco Souza
Software developer at Giran and also full time
Open source evangelist at full time

http://www.franciscosouza.net
Twitter: @franciscosouza
(27) 8128 0652
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unit testing, setUp and scoping

2010-04-14 Thread john maclean
On 14 April 2010 16:22, Francisco Souza franci...@franciscosouza.net wrote:
 On Wed, Apr 14, 2010 at 11:47 AM, john maclean jaye...@gmail.com wrote:
 Can one use the setUp block to store variables so that they can be
 used elsewhere in unit tests? I'm thinking that it's better to have
 variables created in another script and have it imported from within
 the unit test

 Hi John,
 each TestCase is a object, and you can store attributes in this objects
 normally, using self :)

 class TestPythonStringsTestCase(

 unittest.TestCase):
        def setUp(self):
                print '''setting up stuff for ''', __name__
                self.s1 = 'single string'
                print dir(str)

        def testclass(self):
                '''test strings are of class str'''
                self.assertEqual(self.s1.__class__, str)

 if __name__ == '__main__':
        unittest.main()

 This works fine and s1 is an internal attribute of your TesteCase.

 Best regards,
 Francisco Souza
 Software developer at Giran and also full time
 Open source evangelist at full time

 http://www.franciscosouza.net
 Twitter: @franciscosouza
 (27) 8128 0652

 --
 http://mail.python.org/mailman/listinfo/python-list




Thanks! that worked.


-- 
John Maclean
07739 171 531
MSc (DIC)

Enterprise Linux Systems Engineer
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing errors (testing the platform module)

2010-04-14 Thread Terry Reedy

On 4/14/2010 11:19 AM, J. Cliff Dyer wrote:

On Wed, 2010-04-14 at 15:51 +0100, john maclean wrote:

self.assertEqual(platform.__builtins__.__class__, dict,
platform.__class__ supposed to be dict)
self.assertEqual(platform.__name__, 'platform' )


The preferred spelling for:

 platform.__builtins__.__class__

would be

 type(platform.__builtins__)


Agreed


It's shorter and easier to read, but essentially says the same thing.

You can also use it on integer literals, which you can't do with your
syntax:

   >>> type(1)
 <type 'int'>
   >>> 1.__class__
 ...
 SyntaxError: invalid syntax


Add the needed space and it works fine.

>>> 1 .__class__
<class 'int'>

A possible use of literal int attributes is for bound methods:

>>> inc = 1 .__add__
>>> inc(3)
4
>>> inc(3.0)
NotImplemented

Whereas "def inc(n): return n+1" is generic and would return 4.0.
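A small sketch contrasting the two (my example, not Terry's):

inc = 1 .__add__                    # bound method of the int literal 1
assert inc(3) == 4                  # int + int works
assert inc(3.0) is NotImplemented   # int.__add__ declines floats

def inc2(n):                        # the generic version
    return n + 1

assert inc2(3.0) == 4.0             # works for floats too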

Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Unit testing errors (testing the platform module)

2010-04-13 Thread John Maclean
I normally use  languages unit testing framework to get a better
understanding of how a language works. Right now I want to grok the
platform module;


 1 #!/usr/bin/env python
 2 '''a pythonic factor'''
 3 import unittest
 4 import platform
 5
 6 class TestPyfactorTestCase(unittest.TestCase):
 7     def setUp(self):
 8         '''setting up stuff'''
13
14     def testplatformbuiltins(self):
15         '''platform.__builtins__.blah '''
16         self.assertEquals(platform.__builtins__.__class__, "<type 'dict'>")
17
18
19     def tearDown(self):
20         print 'cleaning stuff up'
21
22 if __name__ == '__main__':
23     unittest.main()


Is there an error in my syntax? Why is my test failing? Line 16.


python stfu/testing/test_pyfactor.py
Fcleaning stuff up

==
FAIL: platform.__builtins__.blah
--
Traceback (most recent call last):
  File "stfu/testing/test_pyfactor.py", line 16, in testplatformbuiltins
    self.assertEquals(platform.__builtins__.__class__, "<type 'dict'>")
AssertionError: <type 'dict'> != "<type 'dict'>"

--
Ran 1 test in 0.000s

FAILED (failures=1)

-- 
John Maclean MSc. (DIC) Bsc. (Hons),Core Linux Systems Engineering,07739
171 531
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing errors (testing the platform module)

2010-04-13 Thread Benjamin Kaplan
On Tue, Apr 13, 2010 at 10:01 AM, John Maclean jaye...@gmail.com wrote:

 I normally use  languages unit testing framework to get a better
 understanding of how a language works. Right now I want to grok the
 platform module;


  1 #!/usr/bin/env python
  2 '''a pythonic factor'''
  3 import unittest
  4 import platform
  5
  6 class TestPyfactorTestCase(unittest.TestCase):
  7     def setUp(self):
  8         '''setting up stuff'''
 13
 14     def testplatformbuiltins(self):
 15         '''platform.__builtins__.blah '''
 16         self.assertEquals(platform.__builtins__.__class__, "<type 'dict'>")
 17
 18
 19     def tearDown(self):
 20         print 'cleaning stuff up'
 21
 22 if __name__ == '__main__':
 23     unittest.main()


 Is there an error in my syntax? Why is my test failing? Line 16.


Because you are checking if the type object dict is equal to the str object
"<type 'dict'>". A type object will never compare equal to a str object,
even though the string representation of them is the same.

>>> type({}) == "<type 'dict'>"
False
>>> type({})
<type 'dict'>
>>> str(type({})) == "<type 'dict'>"
True
>>> type({}) == dict
True
>>> platform.__builtins__.__class__ == dict
True





 python stfu/testing/test_pyfactor.py
 Fcleaning stuff up

 ==
 FAIL: platform.__builtins__.blah
 --
 Traceback (most recent call last):
  File "stfu/testing/test_pyfactor.py", line 16, in testplatformbuiltins
     self.assertEquals(platform.__builtins__.__class__, "<type 'dict'>")
 AssertionError: <type 'dict'> != "<type 'dict'>"

 --
 Ran 1 test in 0.000s

 FAILED (failures=1)

 --
 John Maclean MSc. (DIC) Bsc. (Hons),Core Linux Systems Engineering,07739
 171 531
 --
 http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing errors (testing the platform module)

2010-04-13 Thread Martin P. Hellwig

On 04/13/10 15:01, John Maclean wrote:

I normally use  languages unit testing framework to get a better
understanding of how a language works. Right now I want to grok the
platform module;


  1 #!/usr/bin/env python
  2 '''a pythonic factor'''
  3 import unittest
  4 import platform
  5
  6 class TestPyfactorTestCase(unittest.TestCase):
  7     def setUp(self):
  8         '''setting up stuff'''
 13
 14     def testplatformbuiltins(self):
 15         '''platform.__builtins__.blah '''
 16         self.assertEquals(platform.__builtins__.__class__, "<type 'dict'>")
 17
 18
 19     def tearDown(self):
 20         print 'cleaning stuff up'
 21
 22 if __name__ == '__main__':
 23     unittest.main()


Is there an error in my syntax? Why is my test failing? Line 16.


python stfu/testing/test_pyfactor.py
Fcleaning stuff up

==
FAIL: platform.__builtins__.blah
--
Traceback (most recent call last):
   File "stfu/testing/test_pyfactor.py", line 16, in testplatformbuiltins
     self.assertEquals(platform.__builtins__.__class__, "<type 'dict'>")
AssertionError: <type 'dict'> != "<type 'dict'>"

--
Ran 1 test in 0.000s

FAILED (failures=1)



What happens if you change this line:
self.assertEquals(platform.__builtins__.__class__, "<type 'dict'>")

To something like:
self.assertEquals(platform.__builtins__.__class__, type(dict()))

or
self.assertEquals(str(platform.__builtins__.__class__), "<type 'dict'>")

--
mph

--
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing errors (testing the platform module)

2010-04-13 Thread MRAB

John Maclean wrote:

I normally use  languages unit testing framework to get a better
understanding of how a language works. Right now I want to grok the
platform module;


 1 #!/usr/bin/env python
 2 '''a pythonic factor'''
 3 import unittest
 4 import platform
 5
 6 class TestPyfactorTestCase(unittest.TestCase):
 7     def setUp(self):
 8         '''setting up stuff'''
13
14     def testplatformbuiltins(self):
15         '''platform.__builtins__.blah '''
16         self.assertEquals(platform.__builtins__.__class__, "<type 'dict'>")
17
18
19     def tearDown(self):
20         print 'cleaning stuff up'
21
22 if __name__ == '__main__':
23     unittest.main()


Is there an error in my syntax? Why is my test failing? Line 16.


python stfu/testing/test_pyfactor.py
Fcleaning stuff up

==
FAIL: platform.__builtins__.blah
--
Traceback (most recent call last):
  File "stfu/testing/test_pyfactor.py", line 16, in testplatformbuiltins
    self.assertEquals(platform.__builtins__.__class__, "<type 'dict'>")
AssertionError: <type 'dict'> != "<type 'dict'>"

--
Ran 1 test in 0.000s

FAILED (failures=1)


platform.__builtins__.__class__ returns the class dict, which is not the
same as "<type 'dict'>", a string.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing errors (testing the platform module)

2010-04-13 Thread J. Cliff Dyer
The problem is that the class of platform.__builtins__ is dict, not a
string containing the text "<type 'dict'>".

Try replacing line 16 with this:

self.assertEqual(type(platform.__builtins__), dict)

Cheers,
Cliff


On Tue, 2010-04-13 at 15:01 +0100, John Maclean wrote:
 I normally use  languages unit testing framework to get a better
 understanding of how a language works. Right now I want to grok the
 platform module;
 
 
  1 #!/usr/bin/env python
  2 '''a pythonic factor'''
  3 import unittest
  4 import platform
  5
  6 class TestPyfactorTestCase(unittest.TestCase):
  7     def setUp(self):
  8         '''setting up stuff'''
 13
 14     def testplatformbuiltins(self):
 15         '''platform.__builtins__.blah '''
 16         self.assertEquals(platform.__builtins__.__class__, "<type 'dict'>")
 17
 18
 19     def tearDown(self):
 20         print 'cleaning stuff up'
 21
 22 if __name__ == '__main__':
 23     unittest.main()
 
 
 Is there an error in my syntax? Why is my test failing? Line 16.
 
 
 python stfu/testing/test_pyfactor.py
 Fcleaning stuff up
 
 ==
 FAIL: platform.__builtins__.blah
 --
 Traceback (most recent call last):
   File "stfu/testing/test_pyfactor.py", line 16, in testplatformbuiltins
     self.assertEquals(platform.__builtins__.__class__, "<type 'dict'>")
  AssertionError: <type 'dict'> != "<type 'dict'>"
 
 --
 Ran 1 test in 0.000s
 
 FAILED (failures=1)
 
 -- 
 John Maclean MSc. (DIC) Bsc. (Hons),Core Linux Systems Engineering,07739
 171 531


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unit testing a routine that sends mail

2010-02-19 Thread Bruno Desthuilliers

commander_coder a écrit :

Hello,

I have a routine that sends an email (this is how a Django view
notifies me that an event has happened).  I want to unit test that
routine.  


http://docs.djangoproject.com/en/dev/topics/email/#e-mail-backends

Or if you're stuck with 1.x  1.2a, you could just mock the send_mail 
function to test that your app does send the appropriate mail - which is 
what you really want to know.
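A minimal sketch of that mocking approach, assuming the view module binds
send_mail at module level (myapp.views and notify_event are hypothetical
names, not from the original post):

import unittest
from myapp import views  # hypothetical module under test

class NotifyTest(unittest.TestCase):
    def test_event_sends_mail(self):
        sent = []
        def fake_send_mail(*args, **kwargs):
            sent.append((args, kwargs))  # record instead of sending
        original = views.send_mail
        views.send_mail = fake_send_mail  # patch the name the view looks up
        try:
            views.notify_event('something happened')  # hypothetical entry point
        finally:
            views.send_mail = original  # always restore the real function
        self.assertEqual(len(sent), 1)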


My 2 cents...
--
http://mail.python.org/mailman/listinfo/python-list


unit testing a routine that sends mail

2010-02-18 Thread commander_coder
Hello,

I have a routine that sends an email (this is how a Django view
notifies me that an event has happened).  I want to unit test that
routine.  So I gave each mail a unique subject line and I want to use
python's mailbox package to look for that subject.  But sometimes the
mail gets delivered and sometimes it does not (I use sendmail to send
it but I believe that it is always sent since I can see an entry in
the mail log).

I thought that the mail delivery system was occasionally hitting my
mailbox lock, and that I needed to first sleep for a while.  So I
wrote a test that sleeps, then grabs the mailbox and looks through it,
and if the mail is not there then it sleeps again, etc., for up to ten
tries.  It is below.

However, I find that if the mail is not delivered on the first try
then it is never delivered, no matter how long I wait.  So I think I
am doing mailbox wrong but I don't see how.

The real puzzler for me is that the test reliably fails every third
time.  For instance, if I try it six times then it succeeds the first,
second, fourth, and fifth times.  I have to say that I cannot
understand this at all but it certainly makes the unit test useless.

I'm using Python 2.6 on an Ubuntu system.  If anyone could give me a
pointer to why the mail is not delivered, I sure could use it.

Thanks,
Jim

...

class sendEmail_test(unittest.TestCase):
    """Test sendEmail()"""

    config = ConfigParser.ConfigParser()
    config.read(CONFIG_FN)
    mailbox_fn = config.get('testing', 'mailbox')

    def look_in_mailbox(self, uniquifier='123456789'):
        """If the mailbox has a message whose subject line contains the
        given uniquifier then it returns that message and deletes it.
        Otherwise, returns None."""
        sleep_time = 10.0  # wait for message to be delivered?
        message_found = None
        i = 0
        while i < 10 and not(message_found):
            time.sleep(sleep_time)
            m = mailbox.mbox(self.mailbox_fn)
            try:
                m.lock()
            except Exception, err:
                print "trouble locking the mailbox: " + str(err)
            try:
                for key, message in m.items():
                    subject = message['Subject'] or ''
                    print "subject is ", subject
                    if subject.find(uniquifier) > -1:
                        print "+++found the message+++ i=", str(i)
                        message_found = message
                        m.remove(key)
                        break
                m.flush()
            except Exception, err:
                print "trouble reading from the mailbox: " + str(err)
                m.unlock()
            try:
                m.unlock()
            except Exception, err:
                print "trouble unlocking the mailbox: " + str(err)
            try:
                m.close()
            except Exception, err:
                print "trouble closing the mailbox: " + str(err)
            del m
            i += 1
        return message_found

    def test_mailbox(self):
        random.seed()
        uniquifier = str(int(random.getrandbits(20)))
        print "uniquifier is ", uniquifier  # looks different every time to me
        rc = az_view.sendEmail(uniquifier=uniquifier)
        if rc:
            self.fail('rc is not None: ' + str(rc))
        found = self.look_in_mailbox(uniquifier)
        if not(found):
            self.fail('message not found')
        print "done"
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unit testing a routine that sends mail

2010-02-18 Thread commander_coder
On Feb 18, 9:55 am, Roy Smith r...@panix.com wrote:
 Just a wild guess here, but maybe there's some DNS server which
 round-robins three address records for some hostname you're using, one of
 which is bogus.

 I've seen that before, and this smells like the right symptoms.

Everything happens on my laptop, localhost, where I'm developing.

I'm sorry; I wasn't sure how much information was too much (or too
little) but I should have said that.

Thank you.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unit testing a routine that sends mail

2010-02-18 Thread commander_coder
On Feb 18, 10:27 am, Bruno Desthuilliers bruno.
42.desthuilli...@websiteburo.invalid wrote:

 you could just mock the send_mail
 function to test that your app does send the appropriate mail - which is
 what you really want to know.

That's essentially what I think I am doing.

I need to send a relatively complex email, multipart, with both a
plain text and html versions of a message, and some attachments.  I am
worried that the email would fail to get out for some reason (say,
attaching the html message fails), so I want to test it.  I started
out with a simple email, thinking to get unit tests working and then
as I add stuff to the email I'd run the tests to see if it broke
anything.

I could mock the send_mail function by having it print to the screen
"mail sent", or I could have a unit test that looked for the mail.  I
did the first, and it worked fine.  I thought to do the second,
starting with a unit test checking simply that the message got to the
mailbox.  But I am unable to check that, as described in the original
message.

Jim
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unit testing a routine that sends mail

2010-02-18 Thread commander_coder
Bruno,  I talked to someone who explained to me how what you said
gives a way around my difficulty.  Please ignore the other reply.
I'll do what you said.  Thank you; I appreciate your help.

Jim
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unit testing a routine that sends mail

2010-02-18 Thread Phlip
commander_coder wrote:

 I have a routine that sends an email (this is how a Django view
 notifies me that an event has happened).  I want to unit test that
 routine.

Are you opening SMTP and POP3 sockets??

If you are not developing that layer itself, just use Django's built-
in mock system. Here's my favorite assertion for it:

def assert_mail(self, funk):
    from django.core import mail
    previous_mails = len(mail.outbox)
    funk()
    mails = mail.outbox[previous_mails:]
    assert [] != mails, 'the called block should produce emails'
    if len(mails) == 1:  return mails[0]
    return mails

You call it a little bit like this:

missive = self.assert_mail(lambda: mark_order_as_shipped(order))

Then you can use assert_contains on the missive.body, to see what mail
got generated. That's what you are actually developing, so your tests
should skip any irrelevant layers, and only test the layers where you
yourself might add bugs.

--
  Phlip
  http://penbird.tumblr.com/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Replacing module with a stub for unit testing

2009-05-29 Thread s4g
Try

import sys
import ExpensiveModuleStub

sys.modules['ExpensiveModule'] = ExpensiveModuleStub
sys.modules['ExpensiveModule'].__name__ = 'ExpensiveModule'

Should do the trick
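One caveat: sys.modules is only consulted at import time, so the stub has
to be installed before anything imports the real module. A sketch, using
the module names from the thread:

# test_MyModule.py
import sys
import ExpensiveModuleStub

# Install the stub BEFORE MyModule (and hence IntermediateModule) is
# imported, so their "import ExpensiveModule" hits the cached stub.
sys.modules['ExpensiveModule'] = ExpensiveModuleStub

import MyModule  # IntermediateModule now sees the stub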


--
Vyacheslav
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Replacing module with a stub for unit testing

2009-05-24 Thread A. Cavallo
how about the old and simple:

import ExpensiveModuleStub as ExpensiveModule

On a different league you could make use of decorator and creating caching
objects but that depends entirely on the requirements (how strict your test 
must be, test data sizes involved and more, much more details).

Regards,
Antonio



On Saturday 23 May 2009 14:00:15 pigmart...@gmail.com wrote:
 Hi,

 I'm working on a unit test framework for a module.  The module I'm
 testing indirectly calls another module which is expensive to access
 --- CDLLs whose functions access a database.

 test_MyModule ---> MyModule ---> IntermediateModule --->
 ExpensiveModule

 I want to create a stub of ExpensiveModule and have that be accessed
 by IntermediateModule instead of the real version

 test_MyModule ---> MyModule ---> IntermediateModule --->
 ExpensiveModuleStub

 I tried the following in my unittest:

 import ExpensiveModuleStub
 sys.modules['ExpensiveModule'] = ExpensiveModuleStub # Doesn't
 work

 But, import statements in the IntermediateModule still access the real
 ExpensiveModule, not the stub.

 The examples I can find of creating and using Mock or Stub objects
 seem to all follow a pattern where the fake objects are passed in as
 arguments to the code being tested.  For example, see the Example
 Usage section here: http://python-mock.sourceforge.net.  But that
 doesn't work in my case as the module I'm testing doesn't directly use
 the module that I want to replace.

 Can anybody suggest something?

 Thanks,

 Scott

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Replacing module with a stub for unit testing

2009-05-24 Thread Steven D'Aprano
On Sun, 24 May 2009 13:14:30 +0100, A. Cavallo wrote:

 how about the old and simple:
 
 import ExpensiveModuleStub as ExpensiveModule

No, that won't do, because for it to have the desired effect, it needs to 
be inside the IntermediateModule, not the Test_Module. That means that 
IntermediateModule needs to know if it is running in test mode or real 
mode, and that's (1) possibly impractical, and (2) not great design.



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Replacing module with a stub for unit testing

2009-05-23 Thread pigmartian
Hi,

I'm working on a unit test framework for a module.  The module I'm
testing indirectly calls another module which is expensive to access
--- CDLLs whose functions access a database.

test_MyModule ---> MyModule ---> IntermediateModule --->
ExpensiveModule

I want to create a stub of ExpensiveModule and have that be accessed
by IntermediateModule instead of the real version

test_MyModule ---> MyModule ---> IntermediateModule --->
ExpensiveModuleStub

I tried the following in my unittest:

import ExpensiveModuleStub
sys.modules['ExpensiveModule'] = ExpensiveModuleStub # Doesn't
work

But, import statements in the IntermediateModule still access the real
ExpensiveModule, not the stub.

The examples I can find of creating and using Mock or Stub objects
seem to all follow a pattern where the fake objects are passed in as
arguments to the code being tested.  For example, see the Example
Usage section here: http://python-mock.sourceforge.net.  But that
doesn't work in my case as the module I'm testing doesn't directly use
the module that I want to replace.

Can anybody suggest something?

Thanks,

Scott
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Replacing module with a stub for unit testing

2009-05-23 Thread Steven D'Aprano
On Sat, 23 May 2009 06:00:15 -0700, pigmartian wrote:

 Hi,
 
 I'm working on a unit test framework for a module.  The module I'm
 testing indirectly calls another module which is expensive to access ---
 CDLLs whose functions access a database.
...
 The examples I can find of creating and using Mock or Stub objects seem
 to all follow a pattern where the fake objects are passed in as
 arguments to the code being tested.  For example, see the Example
 Usage section here: http://python-mock.sourceforge.net.  But that
 doesn't work in my case as the module I'm testing doesn't directly use
 the module that I want to replace.
 
 Can anybody suggest something?

Sounds like a job for monkey-patching!

Currently, you have this:

# inside test_MyModule:
import MyModule
test_code()


# inside MyModule:
import IntermediateModule


# inside IntermediateModule:
import ExpensiveModule


You want to leave MyModule and IntermediateModule as they are, but 
replace ExpensiveModule with MockExpensiveModule. Try this:

# inside test_MyModule:
import MyModule
import MockExpensiveModule
MyModule.IntermediateModule.ExpensiveModule = MockExpensiveModule
test_code()



That should work, unless IntermediateModule uses "from ExpensiveModule 
import functions" instead of "import ExpensiveModule". In that case, you 
will have to monkey-patch each individual object rather than the entire 
module:

MyModule.IntermediateModule.afunc = MockExpensiveModule.afunc
MyModule.IntermediateModule.bfunc = MockExpensiveModule.bfunc
MyModule.IntermediateModule.cfunc = MockExpensiveModule.cfunc
# etc...



-- 
Steven
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Replacing module with a stub for unit testing

2009-05-23 Thread Ben Finney
pigmart...@gmail.com writes:

 
 import ExpensiveModuleStub
 sys.modules['ExpensiveModule'] = ExpensiveModuleStub # Doesn't
 work
 
 But, import statements in the IntermediateModule still access the real
 ExpensiveModule, not the stub.
 
 The examples I can find of creating and using Mock or Stub objects
 seem to all follow a pattern where the fake objects are passed in as
 arguments to the code being tested. For example, see the Example
 Usage section here: http://python-mock.sourceforge.net. But that
 doesn't work in my case as the module I'm testing doesn't directly use
 the module that I want to replace.
 
 Can anybody suggest something?

The ‘MiniMock’ library <URL:http://pypi.python.org/pypi/MiniMock>
addresses this by mocking objects via namespace.

If you know the code under test will import ‘spam.eggs.beans’, import
that yourself in your unit test module, then mock the object with
‘minimock.mock('spam.eggs.beans')’. Whatever object was at that name
will be mocked (until ‘minimock.restore()’), and other code referring to
that name will get the mocked object.
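For illustration, a doctest-style sketch of that flow (smtplib is my
example target, not Ben's; the printed line is MiniMock's call trace):

>>> import smtplib
>>> from minimock import mock, restore
>>> mock('smtplib.SMTP')        # replace the name with a tracing Mock
>>> smtplib.SMTP('localhost')
Called smtplib.SMTP('localhost')
>>> restore()                   # put the real object back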

-- 
 \ “When I turned two I was really anxious, because I'd doubled my |
  `\   age in a year. I thought, if this keeps up, by the time I'm six |
_o__)  I'll be ninety.” —Steven Wright |
Ben Finney
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing frameworks

2009-03-30 Thread Alexander Draeger

Hi,

I'm working on a testing framework for Python. Until now I have
implemented the main features of PyUnit and JUnit 4.x. I like the
annotation syntax of JUnit 4.x, and its theory concept is great,
so you can imagine what my framework will be like.

I plan a lot of additional features which are neither part of JUnit
4.5 nor PyUnit. Finding test cases automatically is a good idea.

Alex
--
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing frameworks

2009-03-25 Thread Fabio Zadrozny
Hi Andew,

 not exactly a framework, but useful while working on small projects - you
 can run tests from inside eclipse (using the pydev plugin for python).
 it's easy to run all tests or some small subset (although it is a bit
 buggy for 3.0).

What exactly is not working with 3.0? (couldn't find any related bug
report on that).

Cheers,

Fabio
--
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing frameworks

2009-03-25 Thread andrew cooke
Fabio Zadrozny wrote:
 not exactly a framework, but useful while working on small projects -
 you
 can run tests from inside eclipse (using the pydev plugin for python).
 it's easy to run all tests or some small subset (although it is a bit
 buggy for 3.0).

 What exactly is not working with 3.0? (couldn't find any related bug
 report on that).


i don't have the project open at the moment, but iirc you cannot run all
modules in a package by clicking only on the package.  actually, that may
be 2.6.  let me check...


ok, so i have a project that works with 2.6 or 3.0, and i can swap between
those by selecting the appropriate interpreter in pydev's preferences (i
describe this just in case it is relevant).

in 3.0, i can run an individual test module (single file) by right
clicking and selecting "run as..." and "python unit test".  i can select
all modules in a package by shift-clicking (ie selecting all modules) and
doing the same.

but in 3.0 i cannot click on a package (directory), run it as a test, and
so run all test modules inside the package.  not even the top src
directory.  the error is:

Traceback (most recent call last):
  File
/home/andrew/pkg/eclipse/plugins/org.python.pydev.debug_1.4.0/pysrc/runfiles.py,
line 253, in <module>
PydevTestRunner(dirs, test_filter, verbosity).run_tests()
  File
/home/andrew/pkg/eclipse/plugins/org.python.pydev.debug_1.4.0/pysrc/runfiles.py,
line 234, in run_tests
files = self.find_import_files()
  File
/home/andrew/pkg/eclipse/plugins/org.python.pydev.debug_1.4.0/pysrc/runfiles.py,
line 148, in find_import_files
Finding files...
os.path.walk(base_dir, self.__add_files, pyfiles)
AttributeError: 'module' object has no attribute 'walk'


now, in contrast, in 2.6, i can do all the above *except* intermediate
packages (which may be just the way modules work in python - looks like it
cannot import from unselected modules).

this is with a homebuilt 3.0 - Python 3.0 (r30:67503, Jan 16 2009,
06:50:19) and opensuse's default 2.6 - Python 3.0 (r30:67503, Jan 16 2009,
06:50:19) - on Eclipse 3.3.2 with pydev 1.4.0

sorry for not reporting a bug - i assumed you'd know (and the workarounds
described above meant i wasn't stalled).

i also have eclipse 3.4.2 with pydev 1.4.4.2636 on a separate machine (ie
new versions), and i can try there if you want (it will take a while to
get the source there, but is not a problem).

andrew


--
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing frameworks

2009-03-25 Thread andrew cooke

copy+paste error; the correct Python2.6 details are:

Python 2.6 (r26:66714, Feb  3 2009, 20:49:49)


andrew cooke wrote:
 this is with a homebuilt 3.0 - Python 3.0 (r30:67503, Jan 16 2009,
 06:50:19) and opensuse's default 2.6 - Python 3.0 (r30:67503, Jan 16 2009,
 06:50:19) - on Eclipse 3.3.2 with pydev 1.4.0


--
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing frameworks

2009-03-25 Thread grkuntzmd
In unittest, has anyone used the *NIX command find to automatically
build a test suite file of all tests under a specified directory?

I generally name my tests as _Test_ORIGINAL_MODULE_NAME.py where
ORIGINAL_MODULE_NAME is the obvious value. This way, I can include/
exclude them from deployments, etc. in my Makefile based on filename
patterns. I was thinking of  doing something with find to get a list
of test file names and then run them through a Python script to
produce a top-level suite file, probably as the first step in my
Makefile test target.
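A rough pure-Python sketch of the same idea (os.walk standing in for
find; only the _Test_ naming scheme comes from the description above):

import os
import sys
import unittest

def build_suite(top):
    '''Collect every _Test_*.py module under top into one suite.'''
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for dirpath, dirnames, filenames in os.walk(top):
        for fn in sorted(filenames):
            if fn.startswith('_Test_') and fn.endswith('.py'):
                if dirpath not in sys.path:
                    sys.path.insert(0, dirpath)  # make the module importable
                module = __import__(fn[:-3])
                suite.addTests(loader.loadTestsFromModule(module))
    return suite

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(build_suite('.'))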

Any thoughts?
--
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing frameworks

2009-03-25 Thread Fabio Zadrozny
 sorry for not reporting a bug - i assumed you'd know (and the workarounds
 described above meant i wasn't stalled).

 i also have eclipse 3.4.2 with pydev 1.4.4.2636 on a separate machine (ie
 new versions), and i can try there if you want (it will take a while to
 get the source there, but is not a problem).

No need... I can probably reproduce it easily here. I've added the bug
report (should be fixed for the next release:
https://sourceforge.net/tracker/?func=detail&aid=2713178&group_id=85796&atid=577329
)

Cheers,

Fabio
--
http://mail.python.org/mailman/listinfo/python-list


Unit testing frameworks

2009-03-24 Thread grkuntzmd
I am looking for a unit testing framework for Python. I am aware of
nose, but was wondering if there are any others that will
automatically find and run all tests under a directory hierarchy.

Thanks, Ralph
--
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing frameworks

2009-03-24 Thread Jean-Paul Calderone

On Tue, 24 Mar 2009 05:06:47 -0700 (PDT), grkunt...@gmail.com wrote:

I am looking for a unit testing framework for Python. I am aware of
nose, but was wondering if there are any others that will
automatically find and run all tests under a directory hierarchy.


One such tool is trial, http://twistedmatrix.com/trac/wiki/TwistedTrial

Jean-Paul
--
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing frameworks

2009-03-24 Thread andrew cooke
grkunt...@gmail.com wrote:
 I am looking for a unit testing framework for Python. I am aware of
 nose, but was wondering if there are any others that will
 automatically find and run all tests under a directory hierarchy.

not exactly a framework, but useful while working on small projects - you
can run tests from inside eclipse (using the pydev plugin for python). 
it's easy to run all tests or some small subset (although it is a bit
buggy for 3.0).

andrew


--
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing frameworks

2009-03-24 Thread Maxim Khitrov
On Tue, Mar 24, 2009 at 8:06 AM,  grkunt...@gmail.com wrote:
 I am looking for a unit testing framework for Python. I am aware of
 nose, but was wondering if there are any others that will
 automatically find and run all tests under a directory hierarchy.

Have you already looked at the unittest module? Below is the code I
use for one of my current projects to load all test cases in package.
This code is sitting in __init__.py, and the test cases are in
separate files (util.py, util_threading.py, etc.). Those files can
contain as many TestCase classes as needed, all are loaded with
loadTestsFromModule. You could easily modify this code to
automatically generate the modules list if you want to.

# repo/pypaq/test/__init__.py
from unittest import TestSuite, defaultTestLoader

import logging
import sys

__all__ = ['all_tests']
modules = ['util', 'util_buffer', 'util_event', 'util_threading']

if not __debug__:
    raise RuntimeError('test suite must be executed in debug mode')

all_tests = []

for name in modules:
    module = __import__('pypaq.test', globals(), locals(), [name], 0)
    tests  = defaultTestLoader.loadTestsFromModule(getattr(module, name))

    __all__.append(name)
    all_tests.append(tests)
    setattr(sys.modules[__name__], name, tests)

logging.getLogger().setLevel(logging.INFO)
all_tests = TestSuite(all_tests)

I then have test_pypaq.py file under repo/, with which I can execute
all_tests or only the tests from a specific module:

# repo/test_pypaq.py
from unittest import TextTestRunner
from pypaq.test import *

TextTestRunner(verbosity=2).run(all_tests)

- Max
--
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing frameworks

2009-03-24 Thread pruebauno
On Mar 24, 8:06 am, grkunt...@gmail.com wrote:
 I am looking for a unit testing framework for Python. I am aware of
 nose, but was wondering if there are any others that will
 automatically find and run all tests under a directory hierarchy.

 Thanks, Ralph

*Nose
*Trial
*py.test
--
http://mail.python.org/mailman/listinfo/python-list


Re: Unit testing frameworks

2009-03-24 Thread Gabriel Genellina

On Tue, 24 Mar 2009 09:06:47 -0300, grkunt...@gmail.com wrote:


I am looking for a unit testing framework for Python. I am aware of
nose, but was wondering if there are any others that will
automatically find and run all tests under a directory hierarchy.


All known testing tools (and some unknown too):

http://pycheesecake.org/wiki/PythonTestingToolsTaxonomy

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-30 Thread Roel Schroeven

Steven D'Aprano schreef:

[..]


Thank you for the elaborate answer, Steven. I think I'm really starting to 
get it now.


--
The saddest aspect of life right now is that science gathers knowledge
faster than society gathers wisdom.
  -- Isaac Asimov

Roel Schroeven
--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-30 Thread James Harris
On 27 Nov, 16:32, Emanuele D'Arrigo [EMAIL PROTECTED] wrote:
 On Nov 27, 5:00 am, Steven D'Aprano

 [EMAIL PROTECTED] wrote:
  Refactor until your code is simple enough to unit-test effectively, then
  unit-test effectively.

 Ok, I've taken this wise suggestion on board and of course I found
 immediately ways to improve the method. -However- this generates
 another issue. I can fragment the code of the original method into one
 public method and a few private support methods. But this doesn't
 reduce the complexity of the testing because the number and complexity
 of the possible path stays more or less the same. The solution to this
 would be to test the individual methods separately, but is the only
 way to test private methods in python to make them (temporarily) non
 private? I guess ultimately this would only require the removal of the
 appropriate double-underscores followed by method testing and then
 adding the double-underscores back in place. There is no cleaner
 way, is there?

Difficult to say without seeing the code. You could post it, perhaps.
On the other hand a general recommendation from Programming Pearls
(Jon Bentley) is to convert code to data structures. Maybe you could
convert some of the code to decision tables or similar.
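For what it's worth, a toy sketch of the decision-table idea (the
conditions and handlers are made up):

# Map a tuple of boolean conditions to a handler, instead of nested ifs;
# each table entry can then be tested on its own.
def handle_negative(x): return 0
def handle_large(x):    return x - 1
def handle_small(x):    return x + 1

DECISIONS = {
    # (is_negative, is_large): handler
    (True,  False): handle_negative,
    (True,  True):  handle_negative,
    (False, True):  handle_large,
    (False, False): handle_small,
}

def process(x):
    return DECISIONS[(x < 0, x > 100)](x)

assert process(-5) == 0
assert process(200) == 199
assert process(7) == 8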

James
--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-29 Thread Roel Schroeven

Fuzzyman schreef:

By the way, to reduce the number of independent code paths you need to
test you can use mocking. You only need to test the logic inside the
methods you create (testing behaviour), and not every possible
combination of paths.


I don't understand that. This is part of something I've never understood 
about unit testing, and each time I try to apply unit testing I bump up 
against, and don't know how to resolve. I find it also difficult to 
explain exactly what I mean.


Suppose I need to write method spam() that turns out to be somewhat 
complex, like the class method Emanuele was talking about. When I try to 
write test_spam() before the method, I have no way to know that I'm 
going to need so many code paths, and that I'm going to split the code 
out into a number of other functions spam_ham(), spam_eggs(), etc.


So whatever happens, I still have to test spam(), however many codepaths 
it contains? Even if it only contains a few lines with fors and ifs and 
calls to the other functions, it still needs to be tested? Or not? From 
a number of postings in this thread a get the impression (though that 
might be an incorrect interpretation) that many people are content to 
only test the various helper functions, and not the spam() itself. You 
say you don't have to test every possible combination of paths, but how 
thorough is your test suite if you have untested code paths?


A related matter (at least in my mind) is this: after I've written 
test_spam() but before spam() is correctly working, I find out that I 
need to write spam_ham() and spam_eggs(), so I need test_spam_ham() and 
test_spam_eggs(). That means that I can never have a green light while 
coding test_spam_ham() and test_stam_eggs(), since test_spam() will 
fail. That feels wrong. And this is a simple case; I've seen cases where 
I've created various new classes in order to write one new function. 
Maybe I shouldn't care so much about the failing unit test? Or perhaps I 
should turn test_spam() of while testing test_spam_ham() and 
test_spam_eggs().


I've read test-driven development by David Astels, but somehow it 
seems that the issues I encounter in practice don't come up in his examples.


--
The saddest aspect of life right now is that science gathers knowledge
faster than society gathers wisdom.
  -- Isaac Asimov

Roel Schroeven
--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-29 Thread Steven D'Aprano
On Sat, 29 Nov 2008 11:36:56 +0100, Roel Schroeven wrote:

 Fuzzyman schreef:
 By the way, to reduce the number of independent code paths you need to
 test you can use mocking. You only need to test the logic inside the
 methods you create (testing behaviour), and not every possible
 combination of paths.
 
 I don't understand that. This is part of something I've never understood
 about unit testing, and each time I try to apply unit testing I bump up
 against, and don't know how to resolve. I find it also difficult to
 explain exactly what I mean.
 
 Suppose I need to write method spam() that turns out to be somewhat
 complex, like the class method Emanuele was talking about. When I try to
 write test_spam() before the method, I have no way to know that I'm
 going to need so many code paths, and that I'm going to split the code
 out into a number of other functions spam_ham(), spam_eggs(), etc.
 
 So whatever happens, I still have to test spam(), however many codepaths
 it contains? Even if it only contains a few lines with fors and ifs and
 calls to the other functions, it still needs to be tested? Or not? 

The first thing to remember is that it is impractical for unit tests to 
be exhaustive. Consider the following trivial function:

def add(a, b):  # a and b ints only
    return a+b+1

Clearly you're not expected to test *every imaginable* path through this 
function (ignoring unit tests for error handling and bad input):

assert add(0, 0) == 1
assert add(1, 0) == 2
assert add(2, 0) == 3
assert add(3, 0) == 4
...
assert add(99736263, 8264891001) == 8364627265
...

Instead, your tests for add() can rely on the + operator being 
sufficiently tested that you can trust it, and so you only need to test 
the logic of your function. To do that, it would be sufficient to test a 
relatively small representative sample of data. One test would probably 
be sufficient:

assert add(1, 3) == 5

That test would detect almost all bugs in the function, although of 
course it won't detect every imaginable bug. A second test will make the 
chances of such false negatives virtually disappear.

Now imagine a more complicated function:

def spam(a, b):
    return spam_eggs(a, b) + spam_ham(a) - 2*spam_tomato(b)

Suppose spam_eggs has four paths that need testing (paths A, B, C, D), 
spam_ham and spam_tomato have two each (E F and G H), and let's assume 
that they are all independent. Then your spam unit tests need to test 
every path:

A E G
A E H
A F G
A F H
B E G
B E H
...
D F H

for a total of 4*2*2=16 paths, in the spam unit tests.

But suppose that we have tested spam_eggs independently. It has four 
paths, so we need four tests to cover them all. Now our spam testing can 
assume that spam_eggs is correct, in the same way that we earlier assumed 
that the plus operator was correct, and reduce the number of tests to a 
small set of representative data.

No matter which path through spam_eggs we take, we can trust the result, 
because we have unit tests that will fail if spam_eggs has a bug. So 
instead of testing every path, I choose a much more limited set:

A E G
A E H
A F G
A F H

I arbitrarily choose path A alone, confident that paths B C and D are 
correct, but of course I could make other choices. There's no need to 
test paths B C and D *within spam's unit tests*, because they are already 
tested elsewhere. To test them again within spam doesn't gain me anything.

Consequently, we reduce our total number of tests from 16 to 8 (four 
tests for spam, four for spam_eggs).


 From
 a number of postings in this thread a get the impression (though that
 might be an incorrect interpretation) that many people are content to
 only test the various helper functions, and not the spam() itself. You
 say you don't have to test every possible combination of paths, but how
 thorough is your test suite if you have untested code paths?

The success of this tactic assumes that you can identify code paths and 
make them independent. If they are dependent, then you can't be sure that 
path E G after A is the same as E G after D.

Real world example: compare driving your car from home to the mall to the 
park, compared to driving from work to the mall to the park. The journey 
from the mall to the park is the same, no matter how you got to the mall. 
If you can drive from home to the mall and then to the park, and you can 
drive from work to the mall, then you can be sure that you can drive from 
work to the mall to the park even though you've never done it before.

But if you can't be sure the paths are independent, then you can't make 
that simplifying assumption, and you do have to test more paths in more 
places.


 A related matter (at least in my mind) is this: after I've written
 test_spam() but before spam() is correctly working, I find out that I
 need to write spam_ham() and spam_eggs(), so I need test_spam_ham() and
 test_spam_eggs(). That means that I can never have a green light while
 coding test_spam_ham

Re: Exhaustive Unit Testing

2008-11-29 Thread Fuzzyman
On Nov 29, 3:33 am, Emanuele D'Arrigo [EMAIL PROTECTED] wrote:
 On Nov 29, 12:35 am, Fuzzyman [EMAIL PROTECTED] wrote:

  Your experiences are one of the reasons that writing the tests *first*
  can be so helpful. You think about the *behaviour* you want from your
  units and you test for that behaviour - *then* you write the code
  until the tests pass.

 Thank you Michael, you are perfectly right in reminding me this. At
 this particular point in time I'm not yet architecturally foresighted
 enough to be able to do that. While I was writing the design documents
 I did write the list of methods each object would have needed and from
 that description theoretically I could have made the tests first. In
 practice, while eventually writing the code for those methods, I've
 come to realize that there was a large amount of variance between what
 I -thought- I needed and what I -actually- needed. So, had I written
 the test before, I would have had to rewrite them again. That being
 said, I agree that writing the tests before must be my goal. I hope
 that as my experience increases I'll be able to know beforehand the
 behaviors I need from each method/object/module of my applications.
 One step at the time I'll get there... =)



Personally I find writing the tests an invaluable part of the design
process. It works best if you do it 'top-down'. i.e. You have a
written feature specification (a user story) - you turn this into an
executable specification in the form of a functional test.

Next to mid level unit tests and downwards - so your tests become your
design documents (and the way you think about design), but better than
a document they are executable. So just like code conveys intent so do
the tests (it is important that tests are readable).

For the situation where you don't really know what the API should look
like, Extreme Programming (of which TDD is part) includes a practise
called spiking. I wrote a bit about that here:

http://www.voidspace.org.uk/python/weblog/arch_d7_2007_11_03.shtml#e867

Mocking can help reduce the number of code paths you need to test for
necessarily complex code. Say you have a method that looks something
like:

def method(self):
    if conditional:
        # do stuff
    else:
        # do other stuff
    # then do more stuff

You may be able to refactor this to look more like the following

def method(self):
    if conditional:
        self.method2()
    else:
        self.method3()
    self.method4()

You can then write unit tests that *only* tests methods 2 - 4 on their
own. That code is then tested. You can then test method by mocking out
methods 2 - 4 on the instance and only need to test that they are
called in the right conditions and with the right arguments (and you
can mock out the return values to test that method handles them
correctly).
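A minimal unittest sketch of that, with hand-rolled stubs instead of a
mock library (the Widget class and method names are invented):

import unittest

class Widget(object):
    def method(self, conditional):
        if conditional:
            self.method2()
        else:
            self.method3()
        self.method4()
    def method2(self): pass   # each tested on its own elsewhere
    def method3(self): pass
    def method4(self): pass

class MethodTest(unittest.TestCase):
    def test_true_branch_calls_method2_then_method4(self):
        w = Widget()
        calls = []
        # mock out methods 2-4 on the instance, recording the call order
        w.method2 = lambda: calls.append('method2')
        w.method3 = lambda: calls.append('method3')
        w.method4 = lambda: calls.append('method4')
        w.method(True)
        self.assertEqual(calls, ['method2', 'method4'])

if __name__ == '__main__':
    unittest.main()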

Mocking in Python is very easy, but there are plenty of mock libraries
to make it even easier. My personal favourite (naturally) is my own:

http://www.voidspace.org.uk/python/mock.html

All the best,

Michael
--
http://www.ironpythoninaction.com/



 Manu

--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-29 Thread Roel Schroeven
Thanks for your answer. I still don't understand completely though. I 
suppose it's me, but I've been trying to understand some of this for 
quite some time and somehow I can't seem to wrap my head around it.


Steven D'Aprano schreef:

On Sat, 29 Nov 2008 11:36:56 +0100, Roel Schroeven wrote:

The first thing to remember is that it is impractical for unit tests to 
be exhaustive. Consider the following trivial function:


def add(a, b):  # a and b ints only
    return a+b+1

Clearly you're not expected to test *every imaginable* path through this 
function (ignoring unit tests for error handling and bad input):


assert add(0, 0) == 1
assert add(1, 0) == 2
assert add(2, 0) == 3
assert add(3, 0) == 4
...
assert add(99736263, 8264891001) == 8364627265
...


OK

 ...

I arbitrarily choose path A alone, confident that paths B C and D are 
correct, but of course I could make other choices. There's no need to 
test paths B C and D *within spam's unit tests*, because they are already 
tested elsewhere. 


Except that I'm always told that the goal of unit tests, at least 
partly, is to protect us agains mistakes when we make changes to the 
tested functions. They should tell me wether I can still trust spam() 
after refactoring it. Doesn't that mean that the unit test should see 
spam() as a black box, providing a certain (but probably not 100%) 
guarantee that the unit test is still a good test even if I change the 
implementation of spam()?


And I don't understand how that works in test-driven development; I 
can't possibly adapt the tests to the code paths in my code, because the 
code doesn't exist yet when I write the test.


 To test them again within spam doesn't gain me anything.

I would think it gains you the freedom of changing spam's implementation 
while still being able to rely on the unit tests. Or maybe I'm thinking 
too far?


The success of this tactic assumes that you can identify code paths and 
make them independent. If they are dependent, then you can't be sure that 
path E G after A is the same as E G after D.


Real world example: compare driving your car from home to the mall to the 
park, compared to driving from work to the mall to the park. The journey 
from the mall to the park is the same, no matter how you got to the mall. 
If you can drive from home to the mall and then to the park, and you can 
drive from work to the mall, then you can be sure that you can drive from 
work to the mall to the park even though you've never done it before.


But if you can't be sure the paths are independent, then you can't make 
that simplifying assumption, and you do have to test more paths in more 
places.


OK, but that only works if I know the code paths, meaning I've already 
written the code. Wasn't the whole point of TDD that you write the tests 
before the code?



A related matter (at least in my mind) is this: after I've written
test_spam() but before spam() is correctly working, I find out that I
need to write spam_ham() and spam_eggs(), so I need test_spam_ham() and
test_spam_eggs(). That means that I can never have a green light while
coding test_spam_ham() and test_stam_eggs(), since test_spam() will
fail. That feels wrong. 


I would say that means you're letting your tests get too far ahead of 
your code. In theory, you should never have more than one failing test at 
a time: the last test you just wrote. If you have to refactor code so 
much that a bunch of tests start failing, then you need to take those 
tests out, and re-introduce them one at a time. 


I still fail to see how that works. I know I must be wrong since so many 
people successfully apply TDD, but I don't see what I'm missing.


Let's take a more-or-less realistic example: I want/need a function to 
calculate the least common multiple of two numbers. First I write some 
tests:


assert(lcm(1, 1) == 1)
assert(lcm(2, 5) == 10)
assert(lcm(2, 4) == 4)

Then I start to write the lcm() function. I do some research and I find 
out that I can calculate the lcm from the gcd, so I write:


def lcm(a, b):
    return a / gcd(a, b) * b

But gcd() doesn't exist yet, so I write some tests for gcd(a, b) and 
start writing the gcd function. But all the time while writing that, the 
lcm tests will fail.


I don't see how I can avoid that, unless I create gcd() before I create 
lcm(), but that only works if I know that I'm going to need it. In a 
simple case like this I could know, but in many cases I don't know it 
beforehand.


--
The saddest aspect of life right now is that science gathers knowledge
faster than society gathers wisdom.
  -- Isaac Asimov

Roel Schroeven
--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-29 Thread Steven D'Aprano
On Sat, 29 Nov 2008 17:13:00 +0100, Roel Schroeven wrote:

 Except that I'm always told that the goal of unit tests, at least
 partly, is to protect us agains mistakes when we make changes to the
 tested functions. They should tell me wether I can still trust spam()
 after refactoring it. Doesn't that mean that the unit test should see
 spam() as a black box, providing a certain (but probably not 100%)
 guarantee that the unit test is still a good test even if I change the
 implementation of spam()?

Yes, but you get to choose how strong that guarantee is. If you want to 
test the same thing in multiple places in your code, you're free to do 
so. Refactoring merely reduces the minimum number of tests you need for 
complete code coverage, not the maximum.

The aim here isn't to cut the number of unit tests down to the absolute 
minimum number required to cover all paths through your code, but to 
reduce that minimum number to something tractable: O(N) or O(N**2) 
instead of O(2**N), where N = some appropriate measure of code complexity.

It is desirable to have some redundant tests, because they reduce the 
chances of a freakish bug just happening to give the correct result for 
the test but wrong results for everything else. (Assuming of course that 
the redundant tests aren't identical -- you gain nothing by running the 
exact same test twice.) They also give you extra confidence that you can 
refactor the code without introducing such freakish bugs. But if you find 
yourself making such sweeping changes to your code base that you no 
longer have such confidence, then by all means add more tests!


 
 And I don't understand how that works in test-driven development; I
 can't possibly adapt the tests to the code paths in my code, because the
 code doesn't exist yet when I write the test.

That's where you should be using mocks and stubs to ease the pain.

http://en.wikipedia.org/wiki/Mock_object
http://en.wikipedia.org/wiki/Method_stub


 
   To test them again within spam doesn't gain me anything.
 
 I would think it gains you the freedom of changing spam's implementation
 while still being able to rely on the unit tests. Or maybe I'm thinking
 too far?

No, you are right, and I over-stated the case.


[snip]
 I would say that means you're letting your tests get too far ahead of
 your code. In theory, you should never have more than one failing test
 at a time: the last test you just wrote. If you have to refactor code
 so much that a bunch of tests start failing, then you need to take
 those tests out, and re-introduce them one at a time.
 
 I still fail to see how that works. I know I must be wrong since so many
 people successfully apply TDD, but I don't see what I'm missing.
 
 Let's take a more-or-less realistic example: I want/need a function to
 calculate the least common multiple of two numbers. First I write some
 tests:
 
 assert(lcm(1, 1) == 1)
 assert(lcm(2, 5) == 10)
 assert(lcm(2, 4) == 4)

(Aside: assert is not a function, you don't need the parentheses.)

Arguably, that's too many tests. Start with one.

assert lcm(1, 1) == 1

And now write lcm:

def lcm(a, b):
    return 1

That's a stub, and our test passes. So add another test:

assert lcm(2, 5) == 10

and the test fails. So let's fix the function by using gcd.

def lcm(a, b):
    return a/gcd(a, b)*b

(By the way: there's a subtle bug in lcm() that will hit you in Python 3. 
Can you spot it? Here's a hint: your unit tests should also assert that 
the result of lcm is always an int.)

Now that we've introduced a new function, we need a stub and a test for 
it:

def gcd(a, b):
    return 1

Why does the stub return 1? So it will make the lcm test pass. If we had 
more lcm tests, it would be harder to write a gcd stub, hence the 
insistence of only adding a single test at a time.

assert gcd(1, 1) == 1

Now all the tests work and we get a nice green light. Let's add 
another test. We need to add it to the gcd test suite, because it's the 
latest, least working function. If you add a test to the lcm test suite, 
and it fails, you don't know if it failed because of an error in lcm() or 
because of an error in gcd(). So leave lcm alone until gcd is working:

assert gcd(2, 4) == 2

Now go and fix gcd. At some time you have to decide to stop using a stub 
for gcd, and write the function properly. For a function that simple, 
now is that time, but just for the exercise let me write a slightly 
more complicated stub. This is (probably) the next simplest stub which 
allows all the tests to pass while still being wrong:

def gcd(a, b):
    if a == b:
        return 1
    else:
        return 2

When you're convinced gcd() is working, you can go back and add 
additional tests to lcm.

In practice, of course, you can skip a few steps. It's hard to be 
disciplined enough to program in such tiny little steps. But the cost of 
being less disciplined is that it takes longer to have all the tests pass.
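
For completeness, a sketch (not part of the original exercise) of where
the progression ends once the gcd stub is finally retired, using
Euclid's algorithm:

def gcd(a, b):
    # Euclid's algorithm: replace (a, b) with (b, a % b) until b is 0.
    while b:
        a, b = b, a % b
    return a

assert gcd(1, 1) == 1
assert gcd(2, 4) == 2
assert gcd(12, 18) == 6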



-- 
Steven
--

Re: Exhaustive Unit Testing

2008-11-29 Thread Steven D'Aprano
On Sun, 30 Nov 2008 03:42:50 +, Steven D'Aprano wrote:

 def lcm(a, b):
     return a/gcd(a, b)*b
 
 (By the way: there's a subtle bug in lcm() that will hit you in Python 
3. Can you spot it?

Er, ignore this. Division in Python 3 only returns a float if the 
remainder is non-zero, and when dividing by the gcd the remainder should 
always be zero.

 Here's a hint: your unit tests should also assert that 
 the result of lcm is always an int.)

Still good advice.



-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-29 Thread Terry Reedy

Steven D'Aprano wrote:

On Sun, 30 Nov 2008 03:42:50 +, Steven D'Aprano wrote:


def lcm(a, b):
    return a/gcd(a, b)*b

(By the way: there's a subtle bug in lcm() that will hit you in Python 

3. Can you spot it?

Er, ignore this. Division in Python 3 only returns a float if the 
remainder is non-zero, and when dividing by the gcd the remainder should 
always be zero.


You were right the first time.
IDLE 3.0rc3
>>> a = 4/2
>>> a
2.0

lcm should return an int, so it should be written a//gcd(a,b)*b
to guarantee that.
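
A minimal sketch of the corrected function (assuming the gcd from
earlier in the thread), with the type assertion suggested above:

def lcm(a, b):
    # Floor division keeps the result an int; the division is exact
    # because gcd(a, b) always divides a.
    return a // gcd(a, b) * b

assert lcm(2, 4) == 4
assert isinstance(lcm(2, 4), int)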

--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-28 Thread bearophileHUGS
Terry Reedy:

The problem is that inner functions do not exist until the outer function is 
called and the inner def is executed.  And they cease to exist when the outer 
function returns unless returned or associated with a global name or 
collection.

OK.


A 'function' only needs to be nested if it is intended to be different 
(different default or closure) for each execution of its def.

Or maybe because you want to denote a logical nesting, or maybe
because you want to keep the outer namespace cleaner, etc etc.

---

Benjamin:

Of course, you could resort to terrible evil like this:

My point was of course to ask about possible changes to CPython, so
you don't need evil hacks anymore.

---

Steven D'Aprano:

For this to change wouldn't be a little change, it would be a large change.

I see, then my proposal has little hope, I presume. I'll have to keep
moving functions outside to test them and move them inside again when
I want to run them.


However you can get the same result (and arguably this is the Right Way to do 
it) with a class:

Of course, that's the Right Way only for languages that support only a
strict form of the Object Oriented paradigm, like for example Java.

Thank you to all the people that have answered.

Bye,
bearophile
--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-28 Thread Bruno Desthuilliers

Steven D'Aprano a écrit :
(snip)
Consequently, I almost always use single-underscore private-by-convention 
names, rather than double-underscore names. The name-mangling 
is, in my experience, just a nuisance.


s/just/more often than not/ and we'll agree on this !-)

--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-28 Thread Nigel Rantor

Roy Smith wrote:


There's a well known theory in studies of the human brain which says people 
are capable of processing about 7 +/- 2 pieces of information at once.  


It's not about processing multiple tasks, it's about the number of things 
that can be held in working memory.


  n

--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-28 Thread Roy Smith
In article [EMAIL PROTECTED],
 Nigel Rantor [EMAIL PROTECTED] wrote:

 Roy Smith wrote:
  
  There's a well known theory in studies of the human brain which says people 
  are capable of processing about 7 +/- 2 pieces of information at once.  
 
 It's not about processing multiple tasks, it's about the number of things 
 that can be held in working memory.

Yes, and that's what I'm talking about here.  If there are N possible code 
paths, can you think about all of them at the same time while you look at 
your code and simultaneously understand them all?  If I write:

if foo:
    do_this()
else:
    do_that()

it's easy to get a good mental picture of all the consequences of foo being 
true or false.  As the number of paths goes up, it becomes harder to think of 
them all as a coherent piece of code and you have to resort to examining 
the paths sequentially.
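
A small sketch (function and flags hypothetical) of how the paths
multiply: two independent branches in sequence already give 2**2 = 4
combinations to hold in your head.

def process(foo, bar):
    first = 'this' if foo else 'that'   # 2 paths...
    mark = '!' if bar else '.'          # ...times 2 = 4 combinations
    return first + mark

assert process(True, True) == 'this!'
assert process(True, False) == 'this.'
assert process(False, True) == 'that!'
assert process(False, False) == 'that.'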
--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-28 Thread Steven D'Aprano
On Fri, 28 Nov 2008 00:06:01 -0800, bearophileHUGS wrote:

For this to change wouldn't be a little change, it would be a large
change.
 
 I see, then my proposal has little hope, I presume. I'll have to keep
 moving functions outside to test them and move them inside again when I
 want to run them.

Not so. You just need to prevent the nested functions from being garbage 
collected.

def foo(x):
    if not hasattr(foo, 'bar'):
        def bar(y):
            return x+y
        def foobar(y):
            return 3*y-y**2
        foo.bar = bar
        foo.foobar = foobar
    return foo.bar(3)+foo.foobar(3)


However, note that there is a problem here: foo.bar will always use the 
value of x as it was when foo was first called, *not* the value 
when you call it subsequently:

>>> foo(2)
5
>>> foo(3000)
5

Because foo.foobar doesn't use x, it's safe to use; but foo.bar is a 
closure and will always use the value of x as it was first used. This 
could be a source of puzzling bugs.

There are many ways of dealing with this. Here is one:

def foo(x):
    def bar(y):
        return x+y
    def foobar(y):
        return 3*y-y**2
    foo.bar = bar
    foo.foobar = foobar
    return bar(3)+foobar(3)

I leave it as an exercise for the reader to determine how the behaviour 
of this is different to the first version.


-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-28 Thread Terry Reedy

[EMAIL PROTECTED] wrote:

Terry Reedy:




A 'function' only needs to be nested if it is intended to be
different (different default or closure) for each execution of its
def.


Or maybe because you want to denote a logical nesting, or maybe 
because you want to keep the outer namespace cleaner, etc etc.


I was already aware of those *wants*, but they are not *needs*, in the 
sense I meant.  A single constant function does not *need* to be nested 
and regenerated with each call.


A standard idiom, I think, is give the foo-associated function a private 
foo-derived name such as _foo or _foo_bar.  This keeps the public 
namespace clean and denotes the logical nesting.  I *really* would not 
move things around after testing.
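
A minimal sketch of that idiom (names hypothetical): the helper is a
module-level private function, directly testable, with a foo-derived
name marking it as foo's implementation detail.

def _foo_bar(x):
    # Private helper: testable as module._foo_bar, but not public API.
    return x * x

def foo(x):
    return _foo_bar(3 * x)

assert foo(2) == 36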


For the attribute approach, you could lobby for the so-far rejected

def foo(x):
  return foo.bar(3*x)

def foo.bar(x):
  return x*x

In the meanwhile...

def foo(x):
  return foo.bar(3*x)

def _(x):
  return x*x
foo.bar = _

Or write a decorator so you can write

@funcattr(foo, 'bar')
def _(x):
    return x*x
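
(funcattr doesn't exist anywhere yet; here is a sketch of how such a
decorator might be written:)

def funcattr(func, name):
    # Bind the decorated function to `func` as attribute `name`, so
    # that @funcattr(foo, 'bar') makes it callable as foo.bar.
    def decorator(f):
        setattr(func, name, f)
        return f
    return decorator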

(I don't see any way a function can delete a name in the namespace that 
is going to become the class namespace, so that

@funcattr(foo)
def bar...
would work.)

Terry Jan Reedy

--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-28 Thread Fuzzyman
On Nov 27, 4:32 pm, Emanuele D'Arrigo [EMAIL PROTECTED] wrote:
 On Nov 27, 5:00 am, Steven D'Aprano

 [EMAIL PROTECTED] wrote:
  Refactor until your code is simple enough to unit-test effectively, then
  unit-test effectively.

 Ok, I've taken this wise suggestion on board and of course I found
 immediately ways to improve the method. -However- this generates
 another issue. I can fragment the code of the original method into one
 public method and a few private support methods. But this doesn't
 reduce the complexity of the testing because the number and complexity
 of the possible paths stays more or less the same. The solution to this
 would be to test the individual methods separately, but is the only
 way to test private methods in python to make them (temporarily) non
 private? I guess ultimately this would only require the removal of the
 appropriate double-underscores followed by method testing and then
 adding the double-underscores back in place. There is no cleaner
 way, is there?

 Manu


Your experiences are one of the reasons that writing the tests *first*
can be so helpful. You think about the *behaviour* you want from your
units and you test for that behaviour - *then* you write the code
until the tests pass.

This means that your code is testable, which usually also means simpler
and better.

By the way, to reduce the number of independent code paths you need to
test, you can use mocking. You only need to test the logic inside the
methods you create (testing behaviour), and not every possible
combination of paths.
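
For example, here is a sketch using the mock library (unittest.mock in
today's standard library) on the lcm/gcd pair from earlier in the
thread: gcd's code paths are cut out entirely, so the test exercises
only lcm's own logic and its call into gcd.

import unittest
from unittest import mock

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    return a // gcd(a, b) * b

class TestLcm(unittest.TestCase):
    def test_lcm_delegates_to_gcd(self):
        # Force gcd to return 2; only lcm's behaviour is under test.
        with mock.patch(__name__ + '.gcd', return_value=2) as fake:
            self.assertEqual(lcm(4, 6), 12)   # 4 // 2 * 6
            fake.assert_called_once_with(4, 6)

if __name__ == '__main__':
    unittest.main()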

Michael Foord
--
http://www.ironpythoninaction.com/
--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-28 Thread Emanuele D'Arrigo
Thank you to everybody who has replied about the original problem. I
eventually refactored the whole (monster) method over various smaller
and simpler ones and I'm now testing each individually. Things have
gotten much more tractable. =)

Thank you for nudging me in the right direction! =)

Manu
--
http://mail.python.org/mailman/listinfo/python-list


Re: Exhaustive Unit Testing

2008-11-28 Thread Emanuele D'Arrigo
On Nov 29, 12:35 am, Fuzzyman [EMAIL PROTECTED] wrote:
 Your experiences are one of the reasons that writing the tests *first*
 can be so helpful. You think about the *behaviour* you want from your
 units and you test for that behaviour - *then* you write the code
 until the tests pass.

Thank you Michael, you are perfectly right in reminding me of this. At
this particular point in time I'm not yet architecturally foresighted
enough to be able to do that. While I was writing the design documents
I did write the list of methods each object would have needed, and from
that description I could theoretically have written the tests first. In
practice, while eventually writing the code for those methods, I've
come to realize that there was a large amount of variance between what
I -thought- I needed and what I -actually- needed. So, had I written
the tests first, I would have had to rewrite them. That being said, I
agree that writing the tests first must be my goal. I hope that as my
experience increases I'll be able to know beforehand the behaviors I
need from each method/object/module of my applications. One step at a
time I'll get there... =)

Manu
--
http://mail.python.org/mailman/listinfo/python-list

