[Numpy-discussion] Patch for `numpy.doc`
Hi all, Please review http://codereview.appspot.com/2485 which adds `numpy.doc` as a way of documenting topics such as indexing and broadcasting. The corresponding trac ticket is http://scipy.org/scipy/numpy/ticket/846 Regards Stéfan ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] Another reference count leak: ticket #848
The attached patch fixes another reference count leak in the use of PyArray_DescrFromType. Could I ask that both this patch and my earlier one (ticket #843) be applied to subversion. Thank you. Definitely not enjoying this low level code.

commit 80e1aca1725dd4cd8e091126cf515c39ac3a33ff
Author: Michael Abbott [EMAIL PROTECTED]
Date: Tue Jul 8 10:10:59 2008 +0100

    Another reference leak using PyArray_DescrFromType

    This change fixes two issues: a spurious ADDREF on a typecode returned
    from PyArray_DescrFromType and a return path with no DECREF.

diff --git a/numpy/core/src/scalartypes.inc.src b/numpy/core/src/scalartypes.inc.src
index 3feefc0..772cf94 100644
--- a/numpy/core/src/scalartypes.inc.src
+++ b/numpy/core/src/scalartypes.inc.src
@@ -1886,7 +1886,6 @@ static PyObject *
     if (!PyArg_ParseTuple(args, "|O", &obj)) return NULL;
     typecode = PyArray_DescrFromType([EMAIL PROTECTED]@);
-    Py_INCREF(typecode);
     if (obj == NULL) {
 #if @default@ == 0
         char *mem;
@@ -1904,7 +1903,10 @@ static PyObject *
     }
     arr = PyArray_FromAny(obj, typecode, 0, 0, FORCECAST, NULL);
-    if ((arr == NULL) || (PyArray_NDIM(arr) > 0)) return arr;
+    if ((arr == NULL) || (PyArray_NDIM(arr) > 0)) {
+        Py_XDECREF(typecode);
+        return arr;
+    }
     robj = PyArray_Return((PyArrayObject *)arr);
 finish:
Re: [Numpy-discussion] Debian: numpy not building _dotblas.so
Hi numpy-devs, I was the one reporting the original bug about missing ATLAS support in the debian lenny python-numpy package. AFAICT the source python-numpy package in etch (numpy version 1.0.1) does not require atlas to build _dotblas.c, only lapack is needed. If you install the resulting binary package on a system where ATLAS is present, ATLAS libraries are used instead of plain lapack. So basically it was already working before the check for ATLAS was introduced into the numpy building system. Why should ATLAS now be required? It's not as trivial as just reverting that changeset, though. why is that? I mean, it was *working* before... thank you, tiziano
Re: [Numpy-discussion] Another reference count leak: ticket #848
On Tue, Jul 8, 2008 at 3:35 AM, Michael Abbott [EMAIL PROTECTED] wrote: The attached patch fixes another reference count leak in the use of PyArray_DescrFromType. Could I ask that both this patch and my earlier one (ticket #843) be applied to subversion. Thank you. I'll take a look at them. Definitely not enjoying this low level code. It can leave one a bit boggled, no? Chuck
Re: [Numpy-discussion] Debian: numpy not building _dotblas.so
On Tue, Jul 8, 2008 at 11:48 AM, Tiziano Zito [EMAIL PROTECTED] wrote: Hi numpy-devs, I was the one reporting the original bug about missing ATLAS support in the debian lenny python-numpy package. AFAICT the source python-numpy package in etch (numpy version 1.0.1) does not require atlas to build _dotblas.c, only lapack is needed. If you install the resulting binary package on a system where ATLAS is present, ATLAS libraries are used instead of plain lapack. So basically it was already working before the check for ATLAS was introduced into the numpy building system. Why should ATLAS now be required? It's not as trivial as just reverting that changeset, though. why is that? I mean, it was *working* before... So just removing the two lines from numpy seems to fix the problem in Debian. So far all tests seem to run both on i386 and amd64, both with and without atlas packages installed. And it is indeed faster with the atlas packages installed, yet it doesn't need them to build. I think that's what we want, no? Ondrej
[Numpy-discussion] Numpy on AIX 5.3
On Monday, 7 July 2008, [EMAIL PROTECTED] wrote:

  File /home/marek/tmp/numpy-1.1.0/numpy/distutils/ccompiler.py, line 303, in CCompiler_cxx_compiler
    + cxx.linker_so[2:]
TypeError: can only concatenate list (not "str") to list

Just by reading the code, the line

  [cxx.linker_so[0]] + cxx.compiler_cxx[0] + cxx.linker_so[2:]

cannot work unless cxx.compiler_cxx is a nested list. Since AIX is not that common, it is well possible that this mistake was hidden for a long time. So I would first try something like:

  cxx.linker_so = [cxx.linker_so[0], cxx.compiler_cxx[0]] + cxx.linker_so[2:]

Please apply the above bugfix to trunk and numpy-1.1 as well, i.e. change

  cxx.linker_so = [cxx.linker_so[0]] + cxx.compiler_cxx[0] + cxx.linker_so[2:]

to

  cxx.linker_so = [cxx.linker_so[0], cxx.compiler_cxx[0]] + cxx.linker_so[2:]

in line 303 of ccompiler.py in distutils. Greetings, -- Marek Wojciechowski
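The TypeError and its fix can be reproduced in isolation; the linker and compiler values below are hypothetical stand-ins for what numpy.distutils sees on AIX:

```python
# Hypothetical stand-ins for the values numpy.distutils sees on AIX.
linker_so = ["ld_so_aix", "-bI:python.exp", "-lm"]
compiler_cxx = ["xlC_r"]

# The original line: list + str raises TypeError, because
# compiler_cxx[0] is a plain string, not a list.
try:
    bad = [linker_so[0]] + compiler_cxx[0] + linker_so[2:]
except TypeError as exc:
    print(exc)  # can only concatenate list (not "str") to list

# The proposed fix: put the string inside the first list.
fixed = [linker_so[0], compiler_cxx[0]] + linker_so[2:]
print(fixed)  # ['ld_so_aix', 'xlC_r', '-lm']
```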
[Numpy-discussion] Schedule for 1.1.1
I think we should try to get a quick bug fix version out by the end of the month. What do others think? Chuck
Re: [Numpy-discussion] Schedule for 1.1.1
2008/7/8 Charles R Harris [EMAIL PROTECTED]: I think we should try to get a quick bug fix version out by the end of the month. What do others think? That was the plan. We wanted a release before the conference -- early enough so that Enthought could push out an EPD release. We'd also like to include the latest docstrings to benefit the tutorials. Cheers Stéfan
Re: [Numpy-discussion] Schedule for 1.1.1
2008/7/8 Stéfan van der Walt [EMAIL PROTECTED]: 2008/7/8 Charles R Harris [EMAIL PROTECTED]: I think we should try to get a quick bug fix version out by the end of the month. What do others think? That was the plan. We wanted a release before the conference -- early enough so that Enthought could push out an EPD release. We'd also like to include the latest docstrings to benefit the tutorials. Just to be clear, I don't think Enthought made any commitment to make an EPD release. It is more of a wish-list item, since so many people depend on the EPD for a fully-working tool-suite. We certainly need to get a release out with the latest docstrings, though. Regards Stéfan
Re: [Numpy-discussion] Schedule for 1.1.1
On Tue, Jul 8, 2008 at 10:19 AM, Stéfan van der Walt [EMAIL PROTECTED] wrote: 2008/7/8 Charles R Harris [EMAIL PROTECTED]: I think we should try to get a quick bug fix version out by the end of the month. What do others think? That was the plan. We wanted a release before the conference -- early enough so that Enthought could push out an EPD release. We'd also like to include the latest docstrings to benefit the tutorials. So what is the schedule? I have 4 bugs to fix and will get to them this weekend. Should we put together a bug action list? Chuck
Re: [Numpy-discussion] Schedule for 1.1.1
On Tue, Jul 8, 2008 at 9:23 AM, Charles R Harris [EMAIL PROTECTED] wrote: So what is the schedule? I have 4 bugs to fix and will get to them this weekend. Should we put together a bug action list? Thanks for getting this conversation going. I have been meaning to send an email for some time now. I would like to get the 1.1.1 release out on 7/31/08, which gives us three weeks. I want to get this out soon because I would like to stick to the original plan to get the 1.2.0 release out on 8/31/2008. I will send an email about 1.2 out next, so if you want to comment on 1.2 please respond to that email. Here is a schedule for 1.1.1:

- 7/20/08 tag the 1.1.1rc1 release and prepare packages
- 7/27/08 tag the 1.1.1 release and prepare packages
- 7/31/08 announce release

Of course, this is assuming that there are no issues with the rc. This release should include only bug-fixes and possibly improved documentation. Also, as a reminder, the trunk is for 1.2 development; so please remember that 1.1.1 will be tagged off the 1.1.x branch:

  svn co http://svn.scipy.org/svn/numpy/branches/1.1.x numpy-1.1.x

Please use the NumPy 1.1.1 milestone if you want to create a bug action list: http://scipy.org/scipy/numpy/milestone/1.1.1 Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/
Re: [Numpy-discussion] Another reference count leak: ticket #848
Michael Abbott wrote: The attached patch fixes another reference count leak in the use of PyArray_DescrFromType. The first part of this patch is good. The second is not needed. Also, it would be useful if you could write a test case that shows what is leaking and how you determined that it is leaking. Could I ask that both this patch and my earlier one (ticket #843) be applied to subversion. Thank you. Definitely not enjoying this low level code. What doesn't kill you makes you stronger :-) But, you are correct that reference counting is a bear. -Travis
Re: [Numpy-discussion] chararray behavior
Alan McIntyre wrote: Since chararray doesn't currently have any tests, I'm writing some, and I ran across a couple of things that didn't make sense to me: 1. The code for __mul__ is exactly the same as that for __rmul__; is there any reason __rmul__ shouldn't just call __mul__? Just additional function call overhead, but it's probably fine to just call __mul__. 1.5. __radd__ seems like it doesn't do anything fundamentally different from __add__, is there a reason to have a separate implementation of __radd__? Possibly. I'm not sure. 2. The behavior of __mul__ seems odd: What is odd about this? It is patterned after 'a' * 3 'a' * 4 'a' * 5 for regular python strings. -Travis
Re: [Numpy-discussion] Another reference count leak: ticket #848
Hi Travis, On Tue, Jul 8, 2008 at 11:26 AM, Travis E. Oliphant [EMAIL PROTECTED] wrote: Michael Abbott wrote: The attached patch fixes another reference count leak in the use of PyArray_DescrFromType. The first part of this patch is good. The second is not needed. Also, it would be useful if you could write a test case that shows what is leaking and how you determined that it is leaking. Could I ask that both this patch and my earlier one (ticket #843) be applied to subversion. Thank you. Definitely not enjoying this low level code. What doesn't kill you makes you stronger :-) But, you are correct that reference counting is a bear. Could you backport your fixes to 1.1.x also? Chuck
Re: [Numpy-discussion] chararray behavior
On Tue, Jul 8, 2008 at 1:29 PM, Travis E. Oliphant [EMAIL PROTECTED] wrote: Alan McIntyre wrote: Since chararray doesn't currently have any tests, I'm writing some, and I ran across a couple of things that didn't make sense to me: 1. The code for __mul__ is exactly the same as that for __rmul__; is there any reason __rmul__ shouldn't just call __mul__? Just additional function call overhead, but it's probably fine to just call __mul__. 1.5. __radd__ seems like it doesn't do anything fundamentally different from __add__, is there a reason to have a separate implementation of __radd__? Possibly. I'm not sure. I'll probably leave them alone; I was just curious, mostly. 2. The behavior of __mul__ seems odd: What is odd about this? It is patterned after 'a' * 3 'a' * 4 'a' * 5 for regular python strings. That's what I would have expected, but for N >= 4, Q*N is the same as Q*4.
[Numpy-discussion] alterdot and restoredot
I don't know what to write for a doc string for alterdot and restoredot. Any ideas?
[Numpy-discussion] numpy with fftw
Hi, I want to compile numpy so that it uses the optimized fftw library. In site.cfg.example there is a section [fftw3] that I filled in with:

[fftw3]
include_dirs = /usr/include
library_dirs = /usr/lib64
fftw3_libs = fftw3, fftw3f
fftw3_opt_libs = fftw3_threads, fftw3f_threads

When I compile, there is no message that it found it, as there is for the atlas library (optimized blas library). numpy.show_config() doesn't tell which version of fft it uses. How can I know it was well compiled? I tried comparing the speed of the numpy version in FC9 and the one I built, but they have the same speed. Here is the code I used to do my timing:

  time python -c "import numpy.fft; a=numpy.random.rand(3); for i in xrange(1): numpy.fft.fft(a); numpy.show_config()"

thanks for your time Frédéric Bastien
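A slightly more controlled way to run such a timing is the stdlib timeit module, which excludes interpreter startup from the measurement; this is only a sketch of the comparison, reusing the same size-3 array as the message above, with an arbitrary repeat count:

```python
import timeit

# Time repeated FFT calls on a small array, as in the message above.
# numpy is assumed to be installed; 10000 repeats is an arbitrary choice.
setup = "import numpy; a = numpy.random.rand(3)"
t = timeit.timeit("numpy.fft.fft(a)", setup=setup, number=10000)
print("10000 FFT calls: %.4f s" % t)
```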
Re: [Numpy-discussion] Debian: numpy not building _dotblas.so
On Tue, Jul 8, 2008 at 08:06, Ondrej Certik [EMAIL PROTECTED] wrote: On Tue, Jul 8, 2008 at 11:48 AM, Tiziano Zito [EMAIL PROTECTED] wrote: Hi numpy-devs, I was the one reporting the original bug about missing ATLAS support in the debian lenny python-numpy package. AFAICT the source python-numpy package in etch (numpy version 1.0.1) does not require atlas to build _dotblas.c, only lapack is needed. If you install the resulting binary package on a system where ATLAS is present, ATLAS libraries are used instead of plain lapack. So basically it was already working before the check for ATLAS was introduced into the numpy building system. Why should ATLAS now be required? It's not as trivial as just reverting that changeset, though. why is that? I mean, it was *working* before... So just removing the two lines from numpy seems to fix the problem in Debian. So far all tests seem to run both on i386 and amd64, both with and without atlas packages installed. And it is indeed faster with the altas packages instaled, yet it doesn't need them to build. I think that's what we want, no? Can you give me more details? Was the binary built on a machine with an absent ATLAS? Show me the output of ldd on _dotblas.so with both ATLAS installed and not. Can you import numpy.core._dotblas explicitly under both? -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco
Re: [Numpy-discussion] numpy with fftw
On Tue, Jul 8, 2008 at 14:14, Frédéric Bastien [EMAIL PROTECTED] wrote: Hi, I want to compile numpy so that it use the optimized fftw librairy. numpy itself does not support this. scipy does. In the site.cfg.example their is a section [fftw3] that I fill with: [fftw3] include_dirs = /usr/include library_dirs = /usr/lib64 fftw3_libs = fftw3, fftw3f fftw3_opt_libs = fftw3_threads, fftw3f_threads This is for scipy. -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco
Re: [Numpy-discussion] chararray behavior
2008/7/8 Alan McIntyre [EMAIL PROTECTED]: On Tue, Jul 8, 2008 at 1:29 PM, Travis E. Oliphant [EMAIL PROTECTED] wrote: Alan McIntyre wrote: 2. The behavior of __mul__ seems odd: What is odd about this? It is patterned after 'a' * 3 'a' * 4 'a' * 5 for regular python strings. That's what I would have expected, but for N = 4, Q*N is the same as Q*4. In particular, the returned type is always string of length four, which is very peculiar - why four? I realize that variable-length strings are a problem (object arrays, I guess?), as is returning arrays of varying dtypes (strings of length N), but this definitely violates the principle of least surprise... Anne
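The surprise Anne describes comes from NumPy's fixed-width string dtypes: once the result dtype's itemsize is fixed (here at four), longer strings are silently clipped. A minimal sketch of that truncation mechanism, not of chararray.__mul__ itself:

```python
import numpy as np

# A fixed-width 4-byte string dtype silently truncates longer input --
# the same mechanism that caps Q*N at four characters in the thread above.
s = np.array('aaaaa', dtype='S4')
print(s.item())  # b'aaaa'
```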
Re: [Numpy-discussion] numpy with fftw
Thanks for the information. What made me think it was possible is that in the file site.cfg.example there is:

# Given only this section, numpy.distutils will try to figure out which version
# of FFTW you are using.
#[fftw]
#libraries = fftw3

Is this fftw section still useful? Frédéric Bastien

On Tue, Jul 8, 2008 at 3:16 PM, Robert Kern [EMAIL PROTECTED] wrote: On Tue, Jul 8, 2008 at 14:14, Frédéric Bastien [EMAIL PROTECTED] wrote: Hi, I want to compile numpy so that it use the optimized fftw librairy. numpy itself does not support this. scipy does. In the site.cfg.example their is a section [fftw3] that I fill with: [fftw3] include_dirs = /usr/include library_dirs = /usr/lib64 fftw3_libs = fftw3, fftw3f fftw3_opt_libs = fftw3_threads, fftw3f_threads This is for scipy. -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco
Re: [Numpy-discussion] numpy with fftw
On Tue, Jul 8, 2008 at 14:33, Frédéric Bastien [EMAIL PROTECTED] wrote: thanks for the information. What made my thought it was possible is that in the file site.cfg.example their is: # Given only this section, numpy.distutils will try to figure out which version # of FFTW you are using. #[fftw] #libraries = fftw3 Is this fftw section still usefull? Yes, for building scipy. -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco
Re: [Numpy-discussion] Debian: numpy not building _dotblas.so
On Tue, Jul 8, 2008 at 9:15 PM, Robert Kern [EMAIL PROTECTED] wrote: On Tue, Jul 8, 2008 at 08:06, Ondrej Certik [EMAIL PROTECTED] wrote: On Tue, Jul 8, 2008 at 11:48 AM, Tiziano Zito [EMAIL PROTECTED] wrote: Hi numpy-devs, I was the one reporting the original bug about missing ATLAS support in the debian lenny python-numpy package. AFAICT the source python-numpy package in etch (numpy version 1.0.1) does not require atlas to build _dotblas.c, only lapack is needed. If you install the resulting binary package on a system where ATLAS is present, ATLAS libraries are used instead of plain lapack. So basically it was already working before the check for ATLAS was introduced into the numpy building system. Why should ATLAS now be required? It's not as trivial as just reverting that changeset, though. why is that? I mean, it was *working* before... So just removing the two lines from numpy seems to fix the problem in Debian. So far all tests seem to run both on i386 and amd64, both with and without atlas packages installed. And it is indeed faster with the altas packages instaled, yet it doesn't need them to build. I think that's what we want, no? Can you give me more details? Sure. :) Was the binary built on a machine with an absent ATLAS? Yes, the binary is always built on a machine with an absent atlas, as the package is build-conflicting with atlas. Show me the output of ldd on _dotblas.so with both ATLAS installed and not. Can you import numpy.core._dotblas explicitly under both? 
ATLAS installed:

[EMAIL PROTECTED]:~/debian$ ldd /usr/lib/python2.5/site-packages/numpy/core/_dotblas.so
        linux-gate.so.1 => (0xb7fba000)
        libblas.so.3gf => /usr/lib/atlas/libblas.so.3gf (0xb7c19000)
        libgfortran.so.3 => /usr/lib/libgfortran.so.3 (0xb7b67000)
        libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb7b4)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7b33000)
        libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb79d8000)
        /lib/ld-linux.so.2 (0xb7fbb000)
[EMAIL PROTECTED]:~/debian$ python
Python 2.5.2 (r252:60911, Jun 25 2008, 17:58:32)
[GCC 4.3.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy.core._dotblas

ATLAS not installed:

[EMAIL PROTECTED]:~/debian$ ldd /usr/lib/python2.5/site-packages/numpy/core/_dotblas.so
        linux-gate.so.1 => (0xb7f2f000)
        libblas.so.3gf => /usr/lib/libblas.so.3gf (0xb7e82000)
        libgfortran.so.3 => /usr/lib/libgfortran.so.3 (0xb7dd)
        libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb7da9000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7d9c000)
        libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7c41000)
        /lib/ld-linux.so.2 (0xb7f3)
[EMAIL PROTECTED]:~/debian$ python
Python 2.5.2 (r252:60911, Jun 25 2008, 17:58:32)
[GCC 4.3.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy.core._dotblas

Ondrej
Re: [Numpy-discussion] Debian: numpy not building _dotblas.so
On Tue, Jul 8, 2008 at 14:47, Ondrej Certik [EMAIL PROTECTED] wrote: On Tue, Jul 8, 2008 at 9:15 PM, Robert Kern [EMAIL PROTECTED] wrote: On Tue, Jul 8, 2008 at 08:06, Ondrej Certik [EMAIL PROTECTED] wrote: On Tue, Jul 8, 2008 at 11:48 AM, Tiziano Zito [EMAIL PROTECTED] wrote: Hi numpy-devs, I was the one reporting the original bug about missing ATLAS support in the debian lenny python-numpy package. AFAICT the source python-numpy package in etch (numpy version 1.0.1) does not require atlas to build _dotblas.c, only lapack is needed. If you install the resulting binary package on a system where ATLAS is present, ATLAS libraries are used instead of plain lapack. So basically it was already working before the check for ATLAS was introduced into the numpy building system. Why should ATLAS now be required? It's not as trivial as just reverting that changeset, though. why is that? I mean, it was *working* before... So just removing the two lines from numpy seems to fix the problem in Debian. So far all tests seem to run both on i386 and amd64, both with and without atlas packages installed. And it is indeed faster with the altas packages instaled, yet it doesn't need them to build. I think that's what we want, no? Can you give me more details? Sure. :) Was the binary built on a machine with an absent ATLAS? Yes, the binary is always built on a machine with an absent atlas, as the package is build-conflicting with atlas. Show me the output of ldd on _dotblas.so with both ATLAS installed and not. Can you import numpy.core._dotblas explicitly under both? 
ATLAS installed:

[EMAIL PROTECTED]:~/debian$ ldd /usr/lib/python2.5/site-packages/numpy/core/_dotblas.so
        linux-gate.so.1 => (0xb7fba000)
        libblas.so.3gf => /usr/lib/atlas/libblas.so.3gf (0xb7c19000)
        libgfortran.so.3 => /usr/lib/libgfortran.so.3 (0xb7b67000)
        libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb7b4)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7b33000)
        libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb79d8000)
        /lib/ld-linux.so.2 (0xb7fbb000)
[EMAIL PROTECTED]:~/debian$ python
Python 2.5.2 (r252:60911, Jun 25 2008, 17:58:32)
[GCC 4.3.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy.core._dotblas

ATLAS not installed:

[EMAIL PROTECTED]:~/debian$ ldd /usr/lib/python2.5/site-packages/numpy/core/_dotblas.so
        linux-gate.so.1 => (0xb7f2f000)
        libblas.so.3gf => /usr/lib/libblas.so.3gf (0xb7e82000)
        libgfortran.so.3 => /usr/lib/libgfortran.so.3 (0xb7dd)
        libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb7da9000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7d9c000)
        libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7c41000)
        /lib/ld-linux.so.2 (0xb7f3)
[EMAIL PROTECTED]:~/debian$ python
Python 2.5.2 (r252:60911, Jun 25 2008, 17:58:32)
[GCC 4.3.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy.core._dotblas

Okay, it turns out that libblas on Ubuntu (and I'm guessing Debian) includes the CBLAS interface.

$ nm /usr/lib/libblas.a | grep "T cblas_"
         T cblas_caxpy
         T cblas_ccopy
...

This is specific to Debian and its derivatives. Not all libblas's have this. So I stand by my statement that just reverting the change is not acceptable. We need a real check for the CBLAS interface. In the meantime, the Debian package maintainer can patch the file to remove that check during the build for Debian systems. -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth.
-- Umberto Eco
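A "real check for the CBLAS interface" would, at configure time, probe for a cblas_* symbol rather than assume ATLAS. The sketch below only illustrates the idea with ctypes; an actual build-system check would compile and link a small test program instead, and the library names here are assumptions:

```python
import ctypes
import ctypes.util

def has_symbol(libname, symbol):
    """Return True if `symbol` resolves in the shared library `libname`."""
    path = ctypes.util.find_library(libname)
    if path is None:
        return False
    try:
        getattr(ctypes.CDLL(path), symbol)  # AttributeError if unresolved
        return True
    except (OSError, AttributeError):
        return False

# On a Debian-style system one would probe the BLAS library like:
#     has_symbol("blas", "cblas_daxpy")
print(has_symbol("no_such_library_xyz", "cblas_daxpy"))  # False
```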
Re: [Numpy-discussion] Debian: numpy not building _dotblas.so
On Tue, Jul 8, 2008 at 10:19 PM, Robert Kern [EMAIL PROTECTED] wrote: On Tue, Jul 8, 2008 at 14:47, Ondrej Certik [EMAIL PROTECTED] wrote: On Tue, Jul 8, 2008 at 9:15 PM, Robert Kern [EMAIL PROTECTED] wrote: On Tue, Jul 8, 2008 at 08:06, Ondrej Certik [EMAIL PROTECTED] wrote: On Tue, Jul 8, 2008 at 11:48 AM, Tiziano Zito [EMAIL PROTECTED] wrote: Hi numpy-devs, I was the one reporting the original bug about missing ATLAS support in the debian lenny python-numpy package. AFAICT the source python-numpy package in etch (numpy version 1.0.1) does not require atlas to build _dotblas.c, only lapack is needed. If you install the resulting binary package on a system where ATLAS is present, ATLAS libraries are used instead of plain lapack. So basically it was already working before the check for ATLAS was introduced into the numpy building system. Why should ATLAS now be required? It's not as trivial as just reverting that changeset, though. why is that? I mean, it was *working* before... So just removing the two lines from numpy seems to fix the problem in Debian. So far all tests seem to run both on i386 and amd64, both with and without atlas packages installed. And it is indeed faster with the altas packages instaled, yet it doesn't need them to build. I think that's what we want, no? Can you give me more details? Sure. :) Was the binary built on a machine with an absent ATLAS? Yes, the binary is always built on a machine with an absent atlas, as the package is build-conflicting with atlas. Show me the output of ldd on _dotblas.so with both ATLAS installed and not. Can you import numpy.core._dotblas explicitly under both? 
ATLAS installed:

[EMAIL PROTECTED]:~/debian$ ldd /usr/lib/python2.5/site-packages/numpy/core/_dotblas.so
        linux-gate.so.1 => (0xb7fba000)
        libblas.so.3gf => /usr/lib/atlas/libblas.so.3gf (0xb7c19000)
        libgfortran.so.3 => /usr/lib/libgfortran.so.3 (0xb7b67000)
        libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb7b4)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7b33000)
        libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb79d8000)
        /lib/ld-linux.so.2 (0xb7fbb000)
[EMAIL PROTECTED]:~/debian$ python
Python 2.5.2 (r252:60911, Jun 25 2008, 17:58:32)
[GCC 4.3.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy.core._dotblas

ATLAS not installed:

[EMAIL PROTECTED]:~/debian$ ldd /usr/lib/python2.5/site-packages/numpy/core/_dotblas.so
        linux-gate.so.1 => (0xb7f2f000)
        libblas.so.3gf => /usr/lib/libblas.so.3gf (0xb7e82000)
        libgfortran.so.3 => /usr/lib/libgfortran.so.3 (0xb7dd)
        libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb7da9000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7d9c000)
        libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7c41000)
        /lib/ld-linux.so.2 (0xb7f3)
[EMAIL PROTECTED]:~/debian$ python
Python 2.5.2 (r252:60911, Jun 25 2008, 17:58:32)
[GCC 4.3.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy.core._dotblas

Okay, it turns out that libblas on Ubuntu (and I'm guessing Debian) includes the CBLAS interface.

$ nm /usr/lib/libblas.a | grep "T cblas_"
         T cblas_caxpy
         T cblas_ccopy
...

This is specific to Debian and its derivatives. Not all libblas's have this. So I stand by my statement that just reverting the change is not acceptable. We need a real check for the CBLAS interface.

Right.

In the meantime, the Debian package maintainer can patch the file to remove that check during the build for Debian systems.

Yes, I just did that. Thanks for the clarification.

Ondrej
Re: [Numpy-discussion] Another reference count leak: ticket #848
On Tue, Jul 8, 2008 at 16:23, Michael Abbott [EMAIL PROTECTED] wrote: On Tue, 8 Jul 2008, Travis E. Oliphant wrote: Michael Abbott wrote: The attached patch fixes another reference count leak in the use of PyArray_DescrFromType. The first part of this patch is good. The second is not needed. I don't see that. The second part of the patch addresses the case of an early return: this means that the DECREF that occurs later on in the code is bypassed, and so a reference leak will still occur if this early return case occurs. Don't forget that PyArray_DescrFromType returns an incremented reference that has to be decremented, returned or explicitly assigned -- the DECREF obligation has to be met somewhere. Also, it would be useful if you could write a test case that shows what is leaking and how you determined that it is leaking. Roughly:

r = range(n)
i = 0
refs = 0
refs = sys.gettotalrefcount()
for i in r:
    float32()
print refs - sys.gettotalrefcount()

in debug mode python. This isn't quite the whole story (reference counts can be annoyingly fluid), but that's the most of it. In trunk this leaks 2 refs per n, with the attached patch there remains one leak I haven't chased down yet. Is there a framework for writing test cases? I'm constructing tests just to pin down leaks that I find in my application (uses numpy and ctypes extensively), so they're terribly ad-hoc at the moment. If you can measure the leak in-process with sys.getrefcount() and friends on a standard non-debug build of Python, then it might be useful for you to write a unit test for us. For example, in numpy/core/tests/test_regression.py, you can see several tests involving reference counts. If the leak isn't particularly measurable without resorting to top(1), then don't bother trying to make a unit test, but a small, complete example that demonstrates the leak is very useful for the rest of us to see if the problem exists on our systems and so we can try our hands at fixing it, too.
-- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco
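The kind of non-debug-build leak check Robert suggests can be sketched with sys.getrefcount() alone; the leaky/balanced functions here are hypothetical Python stand-ins for a C call that does or does not balance its INCREFs:

```python
import sys

_stash = []

def leaky(obj):
    _stash.append(obj)   # retains an extra reference each call: a "leak"

def balanced(obj):
    str(obj)             # uses obj without retaining it

target = object()
base = sys.getrefcount(target)

for _ in range(100):
    balanced(target)
assert sys.getrefcount(target) == base          # no growth: no leak

for _ in range(100):
    leaky(target)
assert sys.getrefcount(target) == base + 100    # growth exposes the leak
print("leaked references:", sys.getrefcount(target) - base)  # 100
```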
Re: [Numpy-discussion] Another reference count leak: ticket #848
On Tue, Jul 8, 2008 at 3:23 PM, Michael Abbott [EMAIL PROTECTED] wrote:
> On Tue, 8 Jul 2008, Travis E. Oliphant wrote:
>> Michael Abbott wrote:
>>> The attached patch fixes another reference count leak in the use of
>>> PyArray_DescrFromType.
>> The first part of this patch is good.  The second is not needed.
> I don't see that.  The second part of the patch addresses the case of an
> early return: this means that the DECREF that occurs later on in the code
> is bypassed, and so a reference leak will still occur if this early return
> case occurs.  Don't forget that PyArray_DescrFromType returns an
> incremented reference that has to be decremented, returned or explicitly
> assigned -- the DECREF obligation has to be met somewhere.

Some function calls do the DECREF on an error return. I haven't looked, but that might be the case here.

Chuck
Re: [Numpy-discussion] Another reference count leak: ticket #848
Michael Abbott wrote:
> On Tue, 8 Jul 2008, Travis E. Oliphant wrote:
>> Michael Abbott wrote:
>>> The attached patch fixes another reference count leak in the use of
>>> PyArray_DescrFromType.
>> The first part of this patch is good.  The second is not needed.
> I don't see that.  The second part of the patch addresses the case of an
> early return: this means that the DECREF that occurs later on in the code
> is bypassed, and so a reference leak will still occur if this early return
> case occurs.  Don't forget that PyArray_DescrFromType returns an
> incremented reference that has to be decremented, returned or explicitly
> assigned -- the DECREF obligation has to be met somewhere.

Don't forget that PyArray_FromAny consumes the reference even if it returns with an error.

-Travis
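For readers following the thread, the ownership rules under discussion look roughly like this. This is a schematic fragment, not the actual scalartypes.inc.src code; whether the error path needs its own Py_XDECREF hinges on the point Travis makes above, that PyArray_FromAny consumes (steals) the typecode reference even when it fails:

```c
/* Schematic of the reference flow under discussion.
 * PyArray_DescrFromType returns a new (owned) reference, so the caller
 * takes on one DECREF obligation for `typecode`. */
PyArray_Descr *typecode = PyArray_DescrFromType(typenum);
if (typecode == NULL) {
    return NULL;
}

/* If PyArray_FromAny steals the typecode reference whether it succeeds
 * or fails, the obligation is discharged by this call itself, and an
 * extra Py_XDECREF on the error path would over-release it. */
arr = PyArray_FromAny(obj, typecode, 0, 0, FORCECAST, NULL);
if (arr == NULL) {
    return NULL;   /* typecode already consumed by PyArray_FromAny */
}
```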
Re: [Numpy-discussion] Schedule for 1.1.1
Hi,

I haven't checked out a recent numpy (here N.__version__ is '1.0.3.1'), but could someone please check whether the division has been changed from '/' to '//' in these places:

C:\Priithon_25_win\numpy\core\numerictypes.py:142: DeprecationWarning: classic int division
  bytes = bits / 8
C:\Priithon_25_win\numpy\core\numerictypes.py:182: DeprecationWarning: classic int division
  na_name = '%s%d' % (base.capitalize(), bit/2)
C:\Priithon_25_win\numpy\core\numerictypes.py:212: DeprecationWarning: classic int division
  charname = 'i%d' % (bits/8,)
C:\Priithon_25_win\numpy\core\numerictypes.py:213: DeprecationWarning: classic int division
  ucharname = 'u%d' % (bits/8,)
C:\Priithon_25_win\numpy\core\numerictypes.py:409: DeprecationWarning: classic int division
  nbytes[obj] = val[2] / 8

I found these by starting python with the -Qwarn option.

Thanks,
Sebastian Haase

On Tue, Jul 8, 2008 at 5:28 PM, Charles R Harris [EMAIL PROTECTED] wrote:
> I think we should try to get a quick bug fix version out by the end of
> the month. What do others think?
>
> Chuck
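The change Sebastian is asking for is mechanical: under `python -Qwarn` (or with `from __future__ import division`), `/` between integers warns or yields a float, while `//` keeps the integer floor-division behaviour that byte-size computations like these want. A minimal illustration:

```python
from __future__ import division  # makes / mean true division, as -Qnew does

bits = 32

# Old spelling (classic int division): with true division enabled,
# bits / 8 now returns a float.
assert bits / 8 == 4.0
assert isinstance(bits / 8, float)

# New spelling: explicit floor division keeps an integer result, which
# is what computations such as `bytes = bits // 8` need.
nbytes = bits // 8
assert nbytes == 4
assert isinstance(nbytes, int)
```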
Re: [Numpy-discussion] alterdot and restoredot
On Tue, Jul 8, 2008 at 14:01, Keith Goodman [EMAIL PROTECTED] wrote:
> I don't know what to write for a doc string for alterdot and restoredot.

Then maybe you're the best one to figure it out. What details do you think are missing from the current docstrings? What questions do they leave you with?

--
Robert Kern
Re: [Numpy-discussion] numpy installation issues
On Mon, Jul 7, 2008 at 3:11 PM, Chris Bartels [EMAIL PROTECTED] wrote:
> Hi David (and others),
>
> This issue is known: http://www.scipy.org/scipy/numpy/ticket/811
>
> I think this is an issue for the numpy developers. (I don't know how to
> fix this easily; I can try to install an older version of binutils, if
> cygwin has these, but this will probably break a lot of other stuff, so
> that is not my preferred solution.)

Well, AFAIK, numpy does not have a single line of assembly, so this looks more like a bug in the cygwin packaging (incompatibilities between gcc and binutils versions). There is not much we can do about it.

cheers,

David
Re: [Numpy-discussion] chararray behavior
On Tue, Jul 8, 2008 at 3:30 PM, Anne Archibald [EMAIL PROTECTED] wrote:
> In particular, the returned type is always string of length four, which
> is very peculiar - why four? I realize that variable-length strings are
> a problem (object arrays, I guess?), as is returning arrays of varying
> dtypes (strings of length N), but this definitely violates the principle
> of least surprise...

Hmm. __mul__ calculates the required size of the result array, but the result of the calculation is a numpy.int32. So ndarray.__new__ is given this int32 as the itemsize argument, and it looks like the itemsize of the argument (rather than its contained value) is used as the itemsize of the new array:

>>> np.chararray((1,2), itemsize=5)
chararray([[';f', '\x00\x00\x00@']],
      dtype='|S5')
>>> np.chararray((1,2), itemsize=np.int32(5))
chararray([['{5', '']],
      dtype='|S4')
>>> np.chararray((1,2), itemsize=np.int16(5))
chararray([['{5', '']],
      dtype='|S2')

Is this expected behavior? I can fix this particular case by forcing the calculated size to be a Python int, but this treatment of the itemsize argument seems like it might be an easy way to cause subtle bugs.
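The workaround mentioned above -- forcing the calculated size to a plain Python int -- can be sketched like this. Note this is a defensive pattern rather than a required fix on modern NumPy, where the NumPy-scalar itemsize case may already behave correctly:

```python
import numpy as np

# A computed size such as chararray.__mul__ would produce: a NumPy
# integer scalar, not a plain Python int.
n = np.int32(5)

# Coerce to a Python int before passing it as itemsize, so the scalar's
# *value* (5) is used rather than anything derived from its own 4-byte
# storage width.
a = np.chararray((1, 2), itemsize=int(n))
assert a.itemsize == 5
assert a.dtype == np.dtype('S5')
```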