[Numpy-discussion] Adding a version system to the deprecation functions in numpy/lib/utils ?
Hi,

I was thinking about adding versioning to the deprecation-related functions. E.g., you could say that one function is deprecated in version 1.2, by adding an in_version argument to deprecate. Does anyone have strong feelings against it? It should be a transparent change. I would also like to add a deprecation function to handle arguments and default values, but this will be more work, obviously.

cheers,

David

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
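A minimal sketch of the in_version idea proposed above. This is hypothetical: in_version is the proposal from the message, not part of the actual numpy.lib.utils.deprecate API, and the decorator below is a plain-Python stand-in rather than numpy's implementation.

```python
import functools
import warnings

def deprecate(func=None, *, in_version=None):
    """Hypothetical version-aware deprecation decorator.

    in_version is the proposed (not existing) argument: the version in
    which the function became deprecated, included in the warning text.
    """
    def decorator(f):
        msg = "%s is deprecated" % f.__name__
        if in_version is not None:
            msg += " as of version %s" % in_version

        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            # Warn at the caller's level, then delegate to the original.
            warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return f(*args, **kwargs)
        return wrapper

    # Support both @deprecate and @deprecate(in_version="1.2") usage.
    return decorator if func is None else decorator(func)

@deprecate(in_version="1.2")
def old_sum(seq):
    return sum(seq)
```

Calling old_sum() still returns the right result; it just emits a DeprecationWarning that names the version.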
Re: [Numpy-discussion] Adding a version system to the deprecation functions in numpy/lib/utils ?
On Fri, Jun 20, 2008 at 01:29, David Cournapeau [EMAIL PROTECTED] wrote:
> Hi, I was thinking about adding versioning to the deprecation-related
> functions. E.g., you could say that one function is deprecated in
> version 1.2, by adding an in_version argument to deprecate. Does anyone
> have strong feelings against it?

As long as we keep the policy that DeprecationWarnings last precisely one minor version, I don't think it really adds any information.

> It should be a transparent change. I would also like to add a
> deprecation function to handle arguments and default values, but this
> will be more work, obviously.

It will be hard to write something generic. The individual functions will still have to write code that handles the old arguments anyway, so all it would do is move the warning message from a warnings.warn() into the decorator. It's not worth it.

-- Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco
Re: [Numpy-discussion] Adding a version system to the deprecation functions in numpy/lib/utils ?
Robert Kern wrote:
> As long as we keep the policy that DeprecationWarnings last precisely
> one minor version, I don't think it really adds any information.

I was not aware of this policy. Obviously, it has no use in that case.

> It will be hard to write something generic. The individual functions
> will still have to write code that handles the old arguments anyway, so
> all it would do is move the warning message from a warnings.warn() into
> the decorator. It's not worth it.

If we change the default values of some functions, don't you think it would be useful to raise a warning if people do not use the argument (that is, if they use the default value)? I remember a long time ago, when some functions got their axis argument changed (pre-1.0 numpy), it took me a long time to realize why my code broke.

By saying the function has to handle the old argument, you are implying that we don't allow API changes, right? If we enforce this, then again, deprecating arguments/default values has no value at all. But up to now, it has happened fairly regularly.

cheers, David
Re: [Numpy-discussion] Adding a version system to the deprecation functions in numpy/lib/utils ?
On Fri, Jun 20, 2008 at 01:54, David Cournapeau [EMAIL PROTECTED] wrote:
> If we change the default values of some functions, don't you think it
> would be useful to raise a warning if people do not use the argument
> (that is, if they use the default value)? I remember a long time ago,
> when some functions got their axis argument changed (pre-1.0 numpy), it
> took me a long time to realize why my code broke.

I don't see a reason to wrap it in a decorator. It saves no significant effort.

> By saying the function has to handle the old argument, you are implying
> that we don't allow API changes, right? If we enforce this, then again,
> deprecating arguments/default values has no value at all. But up to
> now, it has happened fairly regularly.

It should not be happening regularly in the future.

-- Robert Kern
Re: [Numpy-discussion] Adding a version system to the deprecation functions in numpy/lib/utils ?
Robert Kern wrote:
> I don't see a reason to wrap it in a decorator. It saves no significant
> effort.

If the warning is spit out in any case (whether the user uses the default value or not), and if we don't need versioning, I understand it is not that useful. I guess my concern is what we do with deprecated functions: do we keep them, and for how long? There are some deprecations in current numpy which date back to the pre-1.0 release. Maybe that's not really important.

> It should not be happening regularly in the future.

Ok.

David
Re: [Numpy-discussion] Adding a version system to the deprecation functions in numpy/lib/utils ?
On Fri, Jun 20, 2008 at 02:15, David Cournapeau [EMAIL PROTECTED] wrote:
> I guess my concern is what we do with deprecated functions: do we keep
> them, and for how long? There are some deprecations in current numpy
> which date back to the pre-1.0 release. Maybe that's not really
> important.

They should be deprecated for one minor version and then deleted.

-- Robert Kern
Re: [Numpy-discussion] Adding a version system to the deprecation functions in numpy/lib/utils ?
Robert Kern wrote:
> They should be deprecated for one minor version and then deleted.

Ok, thanks for the clarification. Since I was not aware of this and did not see this information on the wiki, I added it at http://scipy.org/scipy/numpy/wiki/ApiDeprecation. Let me know if there is something I misunderstood from your remarks,

cheers, David
Re: [Numpy-discussion] Detecting phase windings
Hi Anne,

Thanks for the approach ideas - I'll take a look at this soon to try to understand it. Currently I'm visiting a LabView-based lab who already have something that works, and works fast, so I'm being encouraged to use LabView, but I'd like to show them more of the joys of Python. The memory requirements aren't yet an issue with the data sets I'm using, but they could be later.

Anne Archibald wrote:
> 2008/6/16 [EMAIL PROTECTED]:
>> I have a speed problem with the approach I'm using to detect phase
>> wrappings in a 3D data set.
> [big snip]
> I'd start by looking at the problem one face at a time:
>
> def find_vortices(X, axis=0):
>     XX = np.rollaxis(X, axis)
>     loop = np.concatenate((XX[np.newaxis,:-1,:-1,...],
>                            XX[np.newaxis,1:,:-1,...],
>                            XX[np.newaxis,1:,1:,...],
>                            XX[np.newaxis,:-1,1:,...],
>                            XX[np.newaxis,:-1,:-1,...]), axis=0)
>     loop = np.unwrap(loop)
>     r = np.abs(loop[0,...] - loop[-1,...]) > np.pi/2
>     return np.rollaxis(r, 0, axis+1)
>
> [snip]
> A standard trick for cutting down on temporary sizes: use a for loop
> along the smallest dimension and a vector along the rest. In fact in
> this case I'd use a 2D vortex finder and iterate along the remaining
> axis; do this three times and you're done. I might also try three 1D
> finders, keeping three temporary boolean result arrays, then logically
> OR them.

thanks, Gary R.
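The loop-unwrap test Anne describes can be illustrated on a single 2x2 plaquette. The values below are toy phases chosen so that walking the loop accumulates a full 2*pi winding; her code does the same thing for every plaquette of a whole face at once.

```python
import numpy as np

# Toy phases around one 2x2 plaquette (assumed values, not real data).
phase = np.array([[0.0, 1.5],
                  [4.5, 3.0]])

# Walk the four corners in order and return to the start.
loop = np.array([phase[0, 0], phase[0, 1], phase[1, 1],
                 phase[1, 0], phase[0, 0]])

# np.unwrap removes 2*pi jumps along the walk; a vortex then shows up
# as a mismatch between the start and end values of the same corner.
unwrapped = np.unwrap(loop)
has_vortex = abs(unwrapped[0] - unwrapped[-1]) > np.pi / 2
```

Here the unwrapped loop ends 2*pi above where it started, so has_vortex is True; for a smooth plaquette the start and end values would agree.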
[Numpy-discussion] Sintax differences
Hi

Is there a reason why the syntax is rand(n,n), but zeros((n,n)), to create an n*n array? If not, perhaps this could be cleaned up?

Regards, Izak Marais
Re: [Numpy-discussion] Sintax differences
On Fri, Jun 20, 2008 at 4:28 AM, izak marais [EMAIL PROTECTED] wrote:
> Is there a reason why the syntax is rand(n,n), but zeros((n,n)), to
> create an n*n array? If not, perhaps this could be cleaned up?

SciPy's rand() is just a convenience wrapper:

>>> help(rand)
Help on built-in function rand:

rand(...)
    Return an array of the given dimensions which is initialized to
    random numbers from a uniform distribution in the range [0,1).

    rand(d0, d1, ..., dn) -> random values

    Note: This is a convenience function. If you want an interface that
    takes a tuple as the first argument use
    numpy.random.random_sample(shape_tuple).

-- Nathan Bell [EMAIL PROTECTED] http://graphics.cs.uiuc.edu/~wnbell/
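The difference the docstring describes can be seen directly: both calls below are the standard numpy API, one taking separate integer arguments and the other a single shape tuple.

```python
import numpy as np

n = 3
a = np.random.rand(n, n)              # convenience form: separate int arguments
b = np.random.random_sample((n, n))   # core form: a single shape tuple

# Both draw uniform samples from [0, 1) and produce the same shape.
assert a.shape == b.shape == (n, n)
```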
Re: [Numpy-discussion] Sintax differences
I apologise for the horrible spelling mistake in the mail title!
[Numpy-discussion] Weird error with nose tests on windows
Hi,

While trying to reproduce some bugs on windows with recent numpy, I got major failures for the whole test suite, with something related to numeric not being found in numpy\core\__init__.py (the __all__ += numeric.__all__ line). Nose crashes at that line, meaning most of the tests fail; note that this is nose-specific, as importing numpy works fine... which is strange. I don't understand how __all__ += numeric.__all__ is supposed to work, since there is no import numeric before it. IOW, I understand why it fails, but I don't understand why it does not fail when importing (or with tests on linux either; maybe the different nose version). Does it make sense to put an import numeric before that line in the __init__.py file?

cheers, David
[Numpy-discussion] CAPI segfault running python embedded
I have successfully written several extension modules that use the numpy C API to manipulate numpy arrays. I have to say numpy is great. I am now having a problem, however, when calling numpy C API functions while running python embedded in a third-party, closed-source application. It segfaults whenever I make any PyArray* function call. Any ideas on why that might be happening?

Thanks, Lane
Re: [Numpy-discussion] CAPI segfault running python embedded
Lane Brooks wrote:
> I am now having a problem, however, when calling numpy C API functions
> while running python embedded in a third-party, closed-source
> application. It segfaults whenever I make any PyArray* function call.
> Any ideas on why that might be happening?

You most likely forgot to call import_array() when initializing your extension (in the module init function): with python, an extension which defines a new C API does it through an array of function pointers, which is garbage before you call import_array. I have never used embedded python, so I don't know how initialization works there, but it should be similar, I guess.

cheers, David
Re: [Numpy-discussion] CAPI segfault running python embedded
David Cournapeau wrote:
> You most likely forgot to call import_array() when initializing your
> extension (in the module init function): with python, an extension
> which defines a new C API does it through an array of function
> pointers, which is garbage before you call import_array.

You're exactly right. I was side-tracked by the fact that this was embedded python (which is not really that different) and was thinking it was an environmental condition. Adding the import_array() call cleared the problem up immediately. Thanks for the help.

Lane
[Numpy-discussion] Right way of looping through ndarrays using Cython
Hi,

I am currently trying to figure out the right way of looping through ndarrays using Cython. Things seem to have evolved a bit compared to what some documents on the web claim (e.g. "for i from 0 <= i < n" does not seem faster than "for i in range(n)"). I know there is integration of pex with cython on the horizon, but it is not there yet, and I don't want to commit to a moving target. Does somebody have an example of fast looping through ndarrays using modern Cython idioms?

Cheers, Gaël
Re: [Numpy-discussion] Updating cython directory to pxd usage: objections?
On Thu, Jun 19, 2008 at 9:18 PM, Fernando Perez [EMAIL PROTECTED] wrote:
> On Thu, Jun 19, 2008 at 5:29 PM, Robert Kern [EMAIL PROTECTED] wrote:
>>> I just tested the changes and there seem to be no ill effects. Should
>>> I go ahead and commit it? I'll do it in a single commit with no other
>>> changes so it's easy to revert should it prove to be a bad idea.
>> Sure.
> Thanks. Done in r5301.

Well, the simplistic test script we had in didn't show any problems, but Matthew Brett was more careful than me and went looking into the generated code:

On Fri, Jun 20, 2008 at 1:32 AM, Matthew Brett [EMAIL PROTECTED] wrote:
> Hmm - but does import_array() get called when it's in the pxd file? I
> just updated svn and generated the numpyx.c file, and can't see
> import_array() there. Isn't the pxd file just definitions?

And indeed, it was my fault for not RTFM: http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/version/Doc/Manual/sharing.html#mozTocId411233 which says:

"It [the .pxd file] cannot contain the implementations of any C or Python functions, or any Python class definitions, or **any executable statements**." (emphasis mine)

I verified further by putting the import_array() back into the .pyx file, and indeed:

- i_a() in .pxd -> missing from .c file.
- i_a() in .pyx -> OK in .c file.

It thus seems that we must keep the import_array call out of the .pxd, and users still need to remember to make it themselves. I'll go ahead and clean up, since the mess was my fault. Many thanks to Matthew for catching it!

Cheers, f
Re: [Numpy-discussion] Updating cython directory to pxd usage: objections?
On Fri, Jun 20, 2008 at 11:11 AM, Andrew Straw [EMAIL PROTECTED] wrote:
> Fernando Perez wrote:
>> It thus seems that we must keep the import_array call out of the .pxd,
>> and users still need to remember to make it themselves.
> It's not the worst thing in the world, either -- sometimes one wants a
> bit of the C structure layout from the .pxd file to use the array
> interface without actually calling any of the numpy C API. Thus,
> occasionally, calling import_array() would be a (probably very minor)
> waste. However, this appears to be a completely moot point now...

Yup, fixed in r5303. Sorry for the (small) mess.

Cheers, f
Re: [Numpy-discussion] Updating cython directory to pxd usage: objections?
Hi,

As a matter of interest, what is the relationship, if any, between (in Cython) "import numpy" and "cnp.import_array()"? Are they initializing different copies of the same thing?

Best, Matthew
Re: [Numpy-discussion] Right way of looping through ndarrays using Cython
On Fri, Jun 20, 2008 at 02:07:08PM -0500, Robert Kern wrote:
>> Does somebody have an example of fast looping through ndarrays using
>> modern Cython idioms?
> If you're using normal Python indexing, then that's where all your time
> is being spent. You need to grab the actual .data pointer and do C
> indexing to get speed. Can you show us the code you are timing?

That is indeed what I was thinking. Is there no way of doing this apart from using the pointer-style indexing? I guess I am trying to find the best (as in most readable) way of doing this. This is for teaching, not production, so I am very much interested in having something simple. I am attaching my test file.

Cheers, Gaël
Re: [Numpy-discussion] Right way of looping through ndarrays using Cython
Oups, forgot the attachment.

On Fri, Jun 20, 2008 at 09:13:34PM +0200, Gael Varoquaux wrote:
> That is indeed what I was thinking. Is there no way of doing this apart
> from using the pointer-style indexing? I guess I am trying to find the
> best (as in most readable) way of doing this. This is for teaching, not
> production, so I am very much interested in having something simple. I
> am attaching my test file.

Cheers, Gaël

from numpy import zeros, mgrid

# Make sure numpy is initialized.
include "c_numpy.pxd"

##
cdef int inner_loop(float c_x, float c_y):
    cdef float x, y, x_buffer
    x = 0; y = 0
    cdef int i
    for i in range(50):
        x_buffer = x*x - y*y + c_x
        y = 2*x*y + c_y
        x = x_buffer
        if (x*x + x*y > 100):
            return 50 - i

def do_Mandelbrot_cython():
    x, y = mgrid[-1.5:0.5:500j, -1:1:500j]
    threshold_time = zeros((500, 500))
    cdef int i, j
    for i in range(500):
        for j in range(500):
            threshold_time[i, j] = inner_loop(x[i, j], y[i, j])
    return threshold_time

# %timeit on do_Mandelbrot_cython, on epsilon, gives 761ms per loop

def main():
    threshold_time = do_Mandelbrot_cython()
    from pylab import imshow, cm, clf, show
    clf()
    imshow(threshold_time, cmap=cm.spectral, extent=(-1, 1, -1, 1))
    show()
Re: [Numpy-discussion] Updating cython directory to pxd usage: objections?
On Fri, Jun 20, 2008 at 14:11, Matthew Brett [EMAIL PROTECTED] wrote:
> As a matter of interest, what is the relationship, if any, between (in
> Cython) "import numpy" and "cnp.import_array()"? Are they initializing
> different copies of the same thing?

No. "import numpy" is essentially the same as the pure Python equivalent; it loads the module and puts it into the namespace. cnp.import_array() loads the module (or reuses the already loaded one), does not put it into any namespace, but then, most importantly, grabs the pointer to the table of function pointers and assigns it to the local void** variable. All of the numpy API is made available to third-party extension modules by #define macros which look up into this table.

-- Robert Kern
Re: [Numpy-discussion] nose changes checked in
I realise this, but 1659 tests in 8.739s with nose, as opposed to just over 1000 tests in right around 1 second with the previous test system, means there is some kind of slowdown involved besides just the number of tests being found. Not that I mind: I'm not looking for blazing speed when running tests, and this nose system should make it easier to manage the tests.

Josh

On Wed, Jun 18, 2008 at 6:35 PM, Stéfan van der Walt [EMAIL PROTECTED] wrote:
> 2008/6/19 Joshua Lippai [EMAIL PROTECTED]:
>> The new testing system works well over here, built on Mac OS X 10.5.2
>> with GCC 4.2. No errors/failures, but there is that warning Charles
>> mentioned, as well as the noticeable difference in speed between this
>> and the old tests.
> Nose does a more thorough job of finding tests (which takes some time
> in itself). It executes 1343 tests, compared to just over 1000 before.
>
> Regards, Stéfan
Re: [Numpy-discussion] Adding a version system to the deprecation functions in numpy/lib/utils ?
On Fri, Jun 20, 2008 at 9:37 AM, David Cournapeau [EMAIL PROTECTED] wrote:
> Ok, thanks for the clarification. Since I was not aware of this and did
> not see this information on the wiki, I added it at
> http://scipy.org/scipy/numpy/wiki/ApiDeprecation.

Hi David, you say: "For example, if a function is deprecated in numpy 1.1.0, it will remain so in all 1.1.x releases, but it can and should be removed in 1.2.0."

But what if, say, with version 1.1.2 we decide to change a given default argument? Then you get the deprecation warning starting with 1.1.2. But if this happens to be the last version of the 1.1.x series -- you are saying that already with the next version (being 1.2.0) it would change, without a deprecation warning being given anymore!?

Thanks, Sebastian Haase
Re: [Numpy-discussion] Adding a version system to the deprecation functions in numpy/lib/utils ?
On Fri, Jun 20, 2008 at 15:17, Sebastian Haase [EMAIL PROTECTED] wrote:
> But what if, say, with version 1.1.2 we decide to change a given
> default argument? Then you get the deprecation warning starting with
> 1.1.2. But if this happens to be the last version of the 1.1.x series
> -- you are saying that already with the next version (being 1.2.0) it
> would change, without a deprecation warning being given anymore!?

You missed this part (emphasis added): "Deprecation warnings should exist for *one*full*minor*release*cycle* before the deprecated features are removed."

-- Robert Kern
[Numpy-discussion] Test framework changes
Hi all,

Just wanted to get feedback about the following changes before I make them. Please speak up if any of this seems objectionable to you.

- The old test framework classes will be restored, but will not be used anywhere in NumPy's tests. If your old tests still don't work with the restored classes, please let me know and I'll fix it.

- Add a function to numpy.testing to execute a module's tests via the "if __name__ == '__main__'" hook. It takes one optional argument, the name of the file to run (if not present, it uses the same method as NoseTester to look it up from the stack). The intent is to make the boilerplate as simple as possible:

      if __name__ == '__main__':
          run_module_suite()

  If somebody has a suggestion for a better name, I'd love to hear it. I didn't want "test" in the name, because then I have to explicitly tell nose to ignore it when it's looking for test functions.

- Remove numpy/testing/pkgtester.py, since it now just has one line of code that imports NoseTester (as Tester) from nosetester.py (it used to create a null tester if nose wasn't present, but this was removed by the porting of r4424 from SciPy). NoseTester will still be made available as numpy.testing.Tester.

- numpy.test (and all the other test functions in subpackages) will take the following positional arguments: label, verbose, extra_argv, doctests, coverage (the same arguments as scipy.test, except for the new coverage option). The old arguments can be passed in as keyword arguments (which seems to be how they were passed in all the examples I could find), and they will be emulated as much as possible by the new test suite, but a deprecation warning will be issued.

- numpy.test now returns an object with a wasSuccessful method; under the old test framework it returned a unittest._TextTestResult, but since nose.run only returns success/failure, I'm just wrapping this result in a dummy object to match the old behavior until I can figure out how to get a real _TextTestResult from nose.

(The two changes to numpy.test should allow the buildbots to run the tests properly again.)

Thanks, Alan
Re: [Numpy-discussion] Right way of looping through ndarrays using Cython
On 6/20/08, Gael Varoquaux [EMAIL PROTECTED] wrote:
> I am trying to figure out the right way of looping through ndarrays
> using Cython, currently. Things seem to have evolved a bit compared to
> what some documents on the web claim (e.g. "for i from 0 <= i < n" does
> not seem faster than "for i in range(n)").

Regarding for loops, Cython recently gained an optimization (perhaps you already know this, but just in case). If the 'i' variable is a 'cdef' one, then Cython does not actually use 'range', but a plain C for loop. See for yourself:

cdef void foo():
    cdef int i, j=0
    for i in range(5,20,3):
        j += i

The generated C code is (comments stripped):

static void __pyx_f_7fortest_foo(void) {
  int __pyx_v_i;
  int __pyx_v_j;
  __pyx_v_j = 0;
  for (__pyx_v_i = 5; __pyx_v_i < 20; __pyx_v_i += 3) {
    __pyx_v_j += __pyx_v_i;
  }
}

Really, really nice, isn't it? You can just forget about the 'for i from ...' form (at least for 99% of the cases, I think). Additionally, 'for' loops can now also be written like this (without 'from'):

for 0 <= i < n:
    do_stuff()

but IMHO, the 'range' optimization is just a big win!

-- Lisandro Dalcín
---
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594
[Numpy-discussion] Specifying a fortran compiler on numpy build.
Hi

If I specify a fortran compiler when building numpy, does that have any effect on what is installed? In other words, must I build numpy against a fortran compiler in order to successfully build and use extensions written in fortran - such as scipy?

Cheers, Adam
Re: [Numpy-discussion] Test framework changes
On Fri, Jun 20, 2008 at 16:01, Alan McIntyre [EMAIL PROTECTED] wrote:
> - numpy.test now returns an object with a wasSuccessful method; under
> the old test framework it returned a unittest._TextTestResult, but
> since nose.run only returns success/failure, I'm just wrapping this
> result in a dummy object to match the old behavior until I can figure
> out how to get a real _TextTestResult from nose.

So NoseTester.run() basically just calls nose.run(). That basically just instantiates nose.core.TestProgram and returns the .success attribute of it. Unfortunately, the TextTestResult object (a nose subclass of unittest._TextTestResult) gets created and discarded inside the nose.core.TestProgram.runTests() method. However, if you were to subclass it and override that method to store the TextTestResult in an attribute, you could return it from NoseTester.run().

-- Robert Kern
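The "keep the full result object instead of a bare boolean" idea can be shown with the stdlib unittest machinery that nose builds on. This is a sketch of the principle, not the actual NoseTester change: driving the runner yourself hands back the complete result object, which carries testsRun, failures, errors, and wasSuccessful().

```python
import io
import unittest

class ExampleTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# Run the suite through a runner directly: runner.run() returns the
# full TextTestResult object rather than a bare success flag, which is
# exactly the information nose.core.TestProgram.runTests() discards.
suite = unittest.TestLoader().loadTestsFromTestCase(ExampleTest)
runner = unittest.TextTestRunner(stream=io.StringIO())  # silence output
result = runner.run(suite)
```

With the result in hand, callers can inspect far more than pass/fail, which is what the dummy-object wrapper above is emulating.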
[Numpy-discussion] NumPy coverage via nose
Hi all,

Nose has a plugin (included in the current version of nose) that uses Ned Batchelder's coverage module (http://nedbatchelder.com/code/modules/coverage.html) to provide coverage information. Is it acceptable to use this capability in nose to provide coverage reporting for NumPy? The coverage module is only needed if you actually want to check coverage.

Thanks, Alan
Re: [Numpy-discussion] Specifying a fortran compiler on numpy build.
On Fri, Jun 20, 2008 at 16:27, Adam Mercer [EMAIL PROTECTED] wrote:
> If I specify a fortran compiler when building numpy, does that have any
> effect on what is installed? In other words, must I build numpy against
> a fortran compiler in order to successfully build and use extensions
> written in fortran - such as scipy?

No. It just affects the Fortran compiler (if any) used to build numpy. The only place this might affect you is if you use a LAPACK or BLAS that needs to be linked with a Fortran compiler. Generally, you don't have to specify anything.

-- Robert Kern
Re: [Numpy-discussion] NumPy coverage via nose
On Fri, Jun 20, 2008 at 16:35, Alan McIntyre [EMAIL PROTECTED] wrote:
> Hi all,
> Nose has a plugin (included in the current version of nose) that uses Ned Batchelder's coverage module (http://nedbatchelder.com/code/modules/coverage.html) to provide coverage information. Is it acceptable to use this capability in nose to provide coverage reporting for NumPy? The coverage module is only needed if you actually want to check coverage.

Specifically, what changes are you thinking of making? numpy.test(coverage=True) already triggers nose's coverage feature.

-- Robert Kern
Re: [Numpy-discussion] Specifying a fortran compiler on numpy build.
On Fri, Jun 20, 2008 at 4:38 PM, Robert Kern [EMAIL PROTECTED] wrote:
> No. It just affects the Fortran compiler (if any) used to build numpy. The only place this might affect you is if you use a LAPACK or BLAS that needs to be linked with a Fortran compiler. Generally, you don't have to specify anything.

Thanks for the clarification.

Cheers,
Adam
Re: [Numpy-discussion] NumPy coverage via nose
On Fri, Jun 20, 2008 at 5:44 PM, Robert Kern [EMAIL PROTECTED] wrote:
> Specifically, what changes are you thinking of making? numpy.test(coverage=True) already triggers nose's coverage feature.

Actually, that wasn't supposed to get added in my last checkin; I was experimenting with it and forgot to take it out. :/
Re: [Numpy-discussion] Test framework changes
On Fri, Jun 20, 2008 at 5:35 PM, Robert Kern [EMAIL PROTECTED] wrote:
> So NoseTester.run() basically just calls nose.run(). That basically just instantiates nose.core.TestProgram and returns its .success attribute. Unfortunately, the TextTestResults object (a nose subclass of unittest._TextTestResults) gets created and discarded inside the nose.core.TestProgram.runTests() method. However, if you were to subclass it and override that method to store the TextTestResults in an attribute, you could return it from NoseTester.run().

Yep. I was hoping there was some built-in way to get to the details of the results via the nose API, but that doesn't appear to be something the nose developers considered. I'll probably go ahead and do as you suggested instead of making a temporary class to hold the result.
Re: [Numpy-discussion] NumPy coverage via nose
On Fri, Jun 20, 2008 at 16:52, Alan McIntyre [EMAIL PROTECTED] wrote:
> On Fri, Jun 20, 2008 at 5:44 PM, Robert Kern [EMAIL PROTECTED] wrote:
>> Specifically, what changes are you thinking of making? numpy.test(coverage=True) already triggers nose's coverage feature.
> Actually, that wasn't supposed to get added in my last checkin; I was experimenting with it and forgot to take it out. :/

It needs a little bit of work. For example, you have --cover-package=numpy hard-coded, but NumpyTest.test() will also be exposed as scipy.test(). But otherwise, yes, I think the functionality is useful.

-- Robert Kern
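The hard-coding Robert points out could be fixed by deriving the coverage target from the package under test; a minimal sketch of the idea (make_nose_argv is a hypothetical helper, not NumPy's actual implementation):

```python
def make_nose_argv(package_name, coverage=False, extra_argv=None):
    """Build a nose command line, deriving the --cover-package target
    from the package under test instead of hard-coding 'numpy', so a
    scipy.test(coverage=True) built this way would cover scipy."""
    argv = ["nosetests"]
    if coverage:
        argv += ["--with-coverage", "--cover-package=%s" % package_name]
    if extra_argv:
        argv += list(extra_argv)
    return argv

argv = make_nose_argv("scipy", coverage=True)
# argv is now ready to hand to nose.run(argv=argv)
```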
Re: [Numpy-discussion] Test framework changes
On Fri, Jun 20, 2008 at 3:04 PM, Alan McIntyre [EMAIL PROTECTED] wrote:
> Yep. I was hoping there was some built-in way to get to the details of the results via the nose API, but that doesn't appear to be something the nose developers considered. I'll probably go ahead and do as you suggested instead of making a temporary class to hold the result.

It may be worth bringing it up with the nose guys here: http://lists.idyll.org/listinfo/testing-in-python

The nose author seems very responsive, and Titus Brown is on the list and cares a lot about numpy/scipy, so he may offer suggestions as well.

Cheers,
f
Re: [Numpy-discussion] Test framework changes
On Fri, Jun 20, 2008 at 7:22 PM, Fernando Perez [EMAIL PROTECTED] wrote:
> It may be worth bringing it up with the nose guys here: http://lists.idyll.org/listinfo/testing-in-python
> The nose author seems very responsive, and Titus Brown is on the list and cares a lot about numpy/scipy, so he may offer suggestions as well.

Thanks, I'll do that. :)
[Numpy-discussion] Nose, doctests and extension modules.
Howdy,

I made some progress on the issue I mentioned earlier: http://lists.idyll.org/pipermail/testing-in-python/2008-June/000709.html

As indicated there, a bug report has been filed with Python itself: http://bugs.python.org/issue3158

But in the meantime, Alan may want to apply the monkeypatch solution until the whole thing is correctly fixed upstream. I'm attaching the little cython example that contains the monkeypatch. If anyone starts writing cython code with docstrings, or putting proper doctests in hand-written extension modules (which is more annoying but can be done just as well), this fix will mean that the testing system can find them.

Cheers,
f

[Attachment: primes.tgz (GNU Zip compressed data)]
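Fernando's attached monkeypatch isn't reproduced here, but the underlying problem -- doctest's module/source introspection failing for objects from extension modules -- can be illustrated in stdlib-only terms by driving DocTestFinder and DocTestRunner on an object directly (the function below is an illustrative pure-Python stand-in for a Cython-compiled one):

```python
import doctest

def double(x):
    """Return twice x.

    >>> double(21)
    42
    """
    return 2 * x

# Hand the object straight to the finder, supplying the globals its
# examples need; this avoids relying on the source inspection that
# breaks for functions defined in C or Cython extension modules.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner(verbose=False)
for test in finder.find(double, globs={"double": double}):
    runner.run(test)
# runner.tries / runner.failures summarize the examples executed
```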
[Numpy-discussion] Deprecated ufuncs
What should we do about this:

    DeprecationWarning: complex divmod(), // and % are deprecated

Note that these functions are not defined in numpy; rather, complex numbers are promoted to python objects and the appropriate method/pyfunction is called. For fmod this raises an AttributeError because the method is not defined for complex objects. For remainder (which has the same docstring as fmod), we get the deprecation warning when PyNumber_Remainder is called on the object. We also get occasional errors and segfaults depending on the arguments and their order.

I think we have two options: define these functions ourselves, or remove them. The latter requires removing objects from the available signatures, because otherwise complex arguments are automatically promoted to the next highest available type, objects -- a fact that partly accounts for the variety of promotion rules for ufuncs. The other option is to define them ourselves.

There is actually a third option, which is not to automatically promote up the chain of available types until a function is found. I rather like that myself because it makes explicit the operations available for the various types and, if we should happen to implement another type, the function signatures won't change.

Chuck
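For context, the deprecation Chuck quotes was eventually carried through in Python itself: on modern Python, divmod(), // and % on complex numbers fail outright with TypeError instead of warning:

```python
# The 2008-era DeprecationWarning for complex divmod(), // and %
# later became a hard failure: each operation now raises TypeError.
failing_ops = []
for name, op in (("divmod", lambda a, b: divmod(a, b)),
                 ("//", lambda a, b: a // b),
                 ("%", lambda a, b: a % b)):
    try:
        op(1 + 2j, 1j)
    except TypeError:
        failing_ops.append(name)
# failing_ops records which operations refused complex operands
```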
[Numpy-discussion] NotImplementedType should be an exception
Shouldn't this raise a NotImplementedError exception?

    In [7]: type(remainder(complex192(1), complex192(1)))
    Out[7]: <type 'NotImplementedType'>

Chuck
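Plain Python treats NotImplemented the way Chuck expects an API to behave: it is an internal sentinel that the interpreter converts to a TypeError when no operand handles the operation, rather than something handed back to the caller. A minimal stdlib illustration:

```python
class NoMod:
    # Returning NotImplemented tells the interpreter to try the other
    # operand's reflected method; when nothing handles the operation,
    # Python raises TypeError instead of returning the sentinel.
    def __mod__(self, other):
        return NotImplemented

raised = False
try:
    NoMod() % NoMod()
except TypeError:
    raised = True
# raised is True: the sentinel never escapes to user code
```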