[Python-Dev] SSL module backport package ready for more testing
I've posted an sdist version of the 'ssl' module for Pythons 2.3.5 to 2.5.x, at http://www.parc.com/janssen/transient/ssl-1.3.tar.gz. I think this is 'gold master', but before I upload it to the Cheeseshop, I'd like to get more testing on a broader variety of platforms. The intent of this package is to allow code development with older versions of Python that will continue to work on Python 2.6 and 3.x.

To build:

    python setup.py build

To test:

    python setup.py test

I'd appreciate feedback on testing results; please send to [EMAIL PROTECTED].

Thanks!

Bill

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Building Python with CMake
Hi,

On Thursday 30 August 2007 16:28, Alexander Neundorf wrote:
...
> The cmake files for building python are now in a cvs repository:
> http://www.cmake.org/cgi-bin/viewcvs.cgi/Utilities/CMakeBuildForPython/?root=ParaView3
>
> This is inside the ParaView3 repository:
> http://www.paraview.org/New/download.html
>
> I used them today to build Python from svn trunk.
>
> I'll add some documentation how to use them, how to get them and what works
> and what doesn't work tomorrow.

Ok, it took a bit longer. The wiki page is here:
http://paraview.org/ParaView3/index.php/BuildingPythonWithCMake

With the cmake files from cvs you can build Python svn, which will become Python 2.6. I use it for Linux, IBM BlueGene/L and Cray XT3 (in both cases for the compute nodes, not the front-end nodes). It also works for Windows, but I didn't take the time to check that all the configure checks deliver the correct results, so I just reused the premade pyconfig.h there.

Most modules are built now. For every module you can select whether to build it statically, dynamically, or not at all. Source and binary packages can be created using "make packages".

These files don't conflict with any files in Python svn, so if somebody is interested, adding them to Python svn shouldn't cause any problems.

Bye
Alex

P.S. Due to moving I'll be mainly offline in the next weeks.
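[For reference, the workflow described in the post above would look roughly like the following. The checkout path and the ccmake step are illustrative guesses on the part of the editor; only "make packages" is confirmed by the post, so check the wiki page for the real commands.]

```shell
# Hypothetical out-of-source build of Python svn with the CMake files
# from the ParaView3 cvs repository; the paths here are illustrative.
mkdir python-build
cd python-build
cmake /path/to/CMakeBuildForPython   # configure; runs the checks that
                                     # produce pyconfig.h
ccmake .                             # optional: interactively toggle each
                                     # module static / shared / off
make                                 # build the interpreter and modules
make packages                        # create source and binary packages
```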
Re: [Python-Dev] Building Python with CMake
On 9/20/07, Alexander Neundorf <[EMAIL PROTECTED]> wrote:
> On Thursday 30 August 2007 16:28, Alexander Neundorf wrote:
> ...
> > The cmake files for building python are now in a cvs repository:
> > http://www.cmake.org/cgi-bin/viewcvs.cgi/Utilities/CMakeBuildForPython/?root=ParaView3

Thanks for your work on this! That page seems to require a login. Any chance you could post it to something like:

http://wiki.python.org/moin/BuildingPythonWithCMake

STeVe
--
I'm not *in*-sane. Indeed, I am so far *out* of sane that you appear a tiny blip on the distant coast of sanity. --- Bucky Katt, Get Fuzzy
Re: [Python-Dev] Building Python with CMake
On Thursday 20 September 2007 16:58, Steven Bethard wrote:
> On 9/20/07, Alexander Neundorf <[EMAIL PROTECTED]> wrote:
> > On Thursday 30 August 2007 16:28, Alexander Neundorf wrote:
> > ...
> > > The cmake files for building python are now in a cvs repository:
> > > http://www.cmake.org/cgi-bin/viewcvs.cgi/Utilities/CMakeBuildForPython/?root=ParaView3
>
> Thanks for your work on this! That page seems to require a login.
> Any chance you could post it to something like:
>
> http://wiki.python.org/moin/BuildingPythonWithCMake

I guess I need a login there too, so I put it somewhere where I already have one:
http://www.cmake.org/Wiki/BuildingPythonWithCMake

Alex
[Python-Dev] Better unittest failures
The value of a unittest test is not in how well it passes, but in how well
it fails.
While looking at possibly helping with the str_uni branch when that was
going on, I found that in some cases unittest failure results can take a
little (or a lot of) work to figure out just what was failing, where,
and why.
While helping Eric test the new format function and class, I came up with a
partial solution which may be a basis for further improvements. Eric told
me it helped quite a bit, so I think it's worth looking into.
Since we were running over a hundred different options over several
different implementations to make sure they all passed and failed in the
same way, we were using data-based test cases so we could easily test the
same data with each version. Unfortunately that has the drawback that the
traceback doesn't show what data was used when testing exceptions.
Additionally, when something did fail it was not always obvious what was
failing and why.
One of the conclusions I came to is that it would be better if tests did
not raise standard Python exceptions unless the test itself has a problem.
By having tests raise special *Test_Only* exceptions, the output of a
failing test becomes very much clearer.
Here are the added Test_Only exceptions. These would live only in the
unittest module, to catch the following situations:
Wrong_Result_Returned
Unexpected_Exception_Raised
No_Exception_Raised
Wrong_Exception_Raised
And two new functions that use them.
assertTestReturns(expect, test, message)
assertTestRaises(expect, test, message)
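[A minimal sketch, from the editor, of how the four exceptions and two methods above could be layered on unittest.TestCase. The class and method names come from the post; the AssertionError base class, message formats, and everything else are assumptions, not the code from Ron's attached file, and it is written in modern Python syntax rather than the 2.5-era syntax of the original.]

```python
# Hypothetical sketch of the proposed test-only exceptions and assert
# methods; not the implementation from the attached ut_test.py.
import unittest


class Wrong_Result_Returned(AssertionError):
    pass

class Unexpected_Exception_Raised(AssertionError):
    pass

class No_Exception_Raised(AssertionError):
    pass

class Wrong_Exception_Raised(AssertionError):
    pass


class DataTestCase(unittest.TestCase):

    def assertTestReturns(self, expect, test, message=''):
        # `test` is a zero-argument callable wrapping the code under test.
        try:
            result = test()
        except Exception as exc:
            # Any exception out of the wrapped code is reported under a
            # test-only name, so the traceback says what went wrong.
            raise Unexpected_Exception_Raised(
                '%r raised; %s' % (exc, message)) from None
        if result != expect:
            raise Wrong_Result_Returned(
                'expected %r, got %r; %s' % (expect, result, message))

    def assertTestRaises(self, expect, test, message=''):
        try:
            test()
        except expect:
            return  # the expected exception was raised: test passes
        except Exception as exc:
            raise Wrong_Exception_Raised(
                'expected %s, got %r; %s'
                % (expect.__name__, exc, message)) from None
        raise No_Exception_Raised(
            '%s not raised; %s' % (expect.__name__, message))
```

With something like this, a data-driven failure surfaces as a Wrong_Result_Returned or No_Exception_Raised traceback carrying the message (e.g. repr(data)), instead of a bare KeyError from deep inside the tested code.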
These additions would not affect any existing tests. Using them requires
wrapping the code to be tested in a zero-argument function, and the format
is the same for both assertTestReturns and assertTestRaises:
    for data in testdata:
        expect, a, b, c = data
        def test():
            return foo(a, b, c)
        assertTestReturns(expect, test, repr(data))
Replacing all existing tests with this form isn't reasonable, but adding
it as an option for those who want to use it is very easy to do.
The test file I used to generate the following output is attached.
Cheers,
Ron
###
#
# Test output using standard assertEquals and assertRaises.
#
* The data has the form [(ref#, expect, args, kwds), ...]
* The ref# is there to help find the failing test in situations where
you may have dozens of almost identical data rows. It's not required,
but helpful to have.
* I didn't include actual bad test-case tests in these examples, but if
some of them generated exceptions similar to those of the failing tests,
I think it would add considerably more confusion than the fairly mild
examples here.
$ python ut_test.py
EEFF
==
ERROR: test_A (__main__.test1_normal_failures)
--
Traceback (most recent call last):
  File "ut_test.py", line 100, in test_A
    result = some_function(*args, **kwds)
  File "ut_test.py", line 62, in some_function
    baz = kwds['baz']
KeyError: 'baz'
#
# This fails as a test "error" instead of a test "fail".
# What was args and kwds here?
#
==
ERROR: test_B (__main__.test1_normal_failures)
--
Traceback (most recent call last):
  File "ut_test.py", line 108, in test_B
    self.assertRaises(expect, test, args, kwds)
  File "unittest.py", line 320, in failUnlessRaises
    callableObj(*args, **kwargs)
  File "ut_test.py", line 107, in test
    return some_function(*args, **kwds)
  File "ut_test.py", line 62, in some_function
    baz = kwds['baz']
KeyError: 'baz'
#
# Same as above. Fails as a test "error", with unknown argument
# values for some_function().
#
==
FAIL: test_C (__main__.test1_normal_failures)
--
Traceback (most recent call last):
  File "ut_test.py", line 114, in test_C
    self.assertRaises(expect, test, args, kwds)
AssertionError: KeyError not raised
#
# What were the args and kwds values?
#
==
FAIL: test_D (__main__.test1_normal_failures)
--
Traceback (most recent call last):
  File "ut_test.py", line 120, in test_D
    repr((n, expect, args, kwds)))
AssertionError: (8, ('Total baz:', 4), [1, 2], {'baz': 'Total baz:'})
#
# This one is ok.
#
###
#
# Test output using the added methods and test only exceptions with
# the same test data.
#
* Test errors only occur on actual test "errors".
* The reason for the fail is explained in all cases for test "fails".
* The only t
