What does NOT work
==================
One important missing target is Windows 64-bit, but this should not be too
difficult to solve.
There are still many corner cases not yet solved (in particular some
Windows things; most library paths cannot yet be overridden in
site.cfg); also, I do not
in PyObject_Call (func=0x9609934, arg=0x95f260c,
kw=0x8061a66) at Objects/abstract.c:1860
It seems the bug could be somewhere in the handling of the dot BLAS
module?
Matthieu
2007/10/15, Travis E. Oliphant [EMAIL PROTECTED]:
Matthieu Brucher wrote:
The problem is that there is a data-type
what I'm searching for is :
In [18]: dotprod2(a,b)
Out[18]: array([ 0.28354876, 0.54474092, 0.22986942, 0.42822669, 0.98179793])
where I defined a classical (in the way I understand it; I may not
understand it properly?) dot product between these 2 vectors.
def dotprod2(a,b):
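The body of dotprod2 is cut off in the archive; a hypothetical reconstruction consistent with the output above (one scalar per pair of rows) might be:

```python
import numpy as np

def dotprod2(a, b):
    # Hypothetical reconstruction -- the original body is not shown in the
    # archive.  Row-wise dot products: one scalar per pair of rows.
    return np.sum(a * b, axis=-1)

a = np.random.rand(5, 3)
b = np.random.rand(5, 3)
print(dotprod2(a, b).shape)  # (5,) -- five values, as in Out[18]
```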
There are two types of errors that can occur with reference counting on
data-types.
1) There are too many DECREF's --- this gets us to the error quickly and
is usually easy to reproduce
2) There are too many INCREF's (the reference count keeps going up until
the internal counter wraps)
Can't you emulate this behaviour with signals other than images?
(say a random signal of 64*64*64*3 samples).
I wish I could, but this behaviour only shows up on this peculiar data set
:(
If the process does not
require a long processing time (say a couple of minutes), then you may
be
Oops, I'm wrong: it happened this time with the 128² images and not
the 3D images. I'll try to check a little bit further.
Matthieu
2007/10/17, Matthieu Brucher [EMAIL PROTECTED]:
Can't you emulate this behaviour with signals other than images?
(say a random signal of 64*64*64*3
I wish I could, but this behaviour only shows up on this peculiar data
set :(
Does it always happen at the same time?
Yes.
Perhaps debugging the process will sort things out?
Unfortunately, the process is very long, there are several
optimizations in the process, the whole thing in a
As said, my approach to debugging this kind of thing is to get out of
python ASAP. And once you manage to reproduce the result when calling
only a couple of python functions, then you use massif or memcheck.
I agree with you, the problem is that I do not use directly C
functions and that I do not know how I can reproduce the result with a
minimal example.
Yes, that's why I suggested looking at the memory usage (using top or
something else). Because maybe the problem can be spotted long before
I'll try it tonight (because it is very, very long, so with the
simulator...)
Yes, those are the worst cases, of course. For those cases, I wish we could
use numpy with COUNT_ALLOCS. Unfortunately, using numpy in this case is
impossible (it crashes at import, and I've never tracked down the
The problem is I don't have time for this at the moment, I have to
develop
my algorithm for my PhD, and if one does not work, I'll try another one,
but
this is strange because this algorithm worked in the past...
By incentive, I meant incentive for me, not for you :) I think this
could
Hi
I keep on getting this error :
*** Reference count error detected:
an attempt was made to deallocate 7 (l) ***
It happens on numpy calls (multiplications, call to inner(), ...), but I
didn't find the real reason. I'm using Numpy '1.0.4.dev3875' with Python
2.5.1.
Someone has a hint to solve
The problem is that there is a data-type reference counting error
somewhere that is attempting to deallocate the built-in data-type 'l'
That's what I supposed, but I couldn't find the reason why it wanted to do
this
It's not really a Python error but a logging. The code won't let you
Hi,
In the description field, you have itemsize, which is what you want.
Matthieu
2007/10/14, Yves Revaz [EMAIL PROTECTED]:
Dear list,
I'm translating codes from numarray to numpy.
Unfortunately, I'm unable to find the equivalent of the command
that give the number of bytes for a given
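The itemsize attribute mentioned in the reply is available both on the dtype and directly on the array; a quick check:

```python
import numpy as np

a = np.zeros(10, dtype=np.float64)
print(a.dtype.itemsize)  # 8: bytes per element
print(a.itemsize)        # same, directly on the array
print(a.nbytes)          # 80: itemsize * number of elements
```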
2007/10/12, Alan G Isaac [EMAIL PROTECTED]:
On Fri, 12 Oct 2007, Matthieu Brucher apparently wrote:
I'm trying to understand (but perhaps everything is in the
numpy book in which case I'd rather buy the book
immediately) how to use the PyArray_FromAny() function.
This function
I don't see the text string '__array' anywhere in this book.
I saw a subchapter about the PyArrayInterface, but nothing around it
with __array__ or __array_struct__?
Matthieu (waiting for the book :))
___
Numpy-discussion mailing list
Hi,
I'm trying to understand (but perhaps everything is in the numpy book in
which case I'd rather buy the book immediately) how to use the
PyArray_FromAny() function.
Suppose I have a C object (created by Pyrex or SWIG or Boost.Python) that
has the __array_struct__ attribute. Can I pass this
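On the Python side, the same protocol can be exercised without the C API: np.asarray accepts any object exposing __array_interface__ (the Python-level twin of __array_struct__) and shares memory where possible. A minimal sketch (the Wrapper class is illustrative, not from the thread):

```python
import numpy as np

class Wrapper:
    """Illustrative object exposing another array's memory via the array interface."""
    def __init__(self, arr):
        self._arr = arr  # keep the memory owner alive
        self.__array_interface__ = arr.__array_interface__

base = np.arange(6, dtype=np.float64)
view = np.asarray(Wrapper(base))
view[0] = 42.0
print(base[0])  # 42.0 -- memory is shared, not copied
```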
I don't know what he meant by a broken libc; if it is the fact that there
are a lot of deprecated standard functions, I don't call it broken
(besides, this deprecation follows a technical paper that describes the
new safe functions, although it does not deprecate these functions).
If
Hi,
I'm trying to cast a float array into a string array (for instance
transforming [[2., 3.], [4., 5.]] into [['2.', '3.'], ['4.', '5.']]), I
tried with astype(str) and every variation (str_, string, string_, string0),
but no luck.
Is there a function or a method of the array class that can
', '5']],
dtype='|S1')
and if you want a list:
In [5]: x.astype(str).tolist()
Out[5]: [['2', '3'], ['4', '5']]
L.
On 10/5/07, Matthieu Brucher [EMAIL PROTECTED] wrote:
Hi,
I'm trying to cast a float array into a string array (for instance
transforming [[2., 3.], [4., 5
([['-2.0', '3.0'],
['4.0', '5.0']],
dtype='|S10')
L.
On 10/5/07, Matthieu Brucher [EMAIL PROTECTED] wrote:
I'd like to have the '2.', because if the number is negative, only '-'
is returned, not the real value.
Matthieu
2007/10/5, lorenzo bolla [EMAIL PROTECTED
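The truncation comes from astype(str) choosing a one-character dtype ('|S1') in numpy of that era; asking for an explicit width keeps the sign and all the digits. (Modern numpy versions already pick a wide enough dtype on their own.) A sketch:

```python
import numpy as np

x = np.array([[-2.0, 3.0], [4.0, 5.0]])
s = x.astype('U10')  # explicit 10-character width ('|S10' for byte strings)
print(s[0, 0])       # '-2.0' -- the full value, not just '-'
```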
It should have worked with the first solution. Did you try trunk, to
see if it works ?
It does not seem to work with only trunk.
This should have worked if the buildbot was setup to work with branches (or
even tags, for that matter).
I don't know if this is relevant, but in the html
There are some things I am not sure about :
- how to build python extension with it: this is of course mandatory
We use Scons at the lab for the next version of the tool we use, and it is
very simple to build extensions, at least SWIG ones, for Python 2.5 on
Windows; there is the need of
less than what? std::valarray, etc. all help with this.
I do not agree with this statement. A correct memory managed array would
increment and decrement a reference counter somewhere.
Yes, it sure would be nice to build it on an existing code base, and
boost::multiarray seems to fit.
The
OK, so you've now got a view of the data from the valarray. Nice to know
this works, but, of course, fragile if the valarray is re-sized or
anything, so it probably won't work for us.
Unless you use a special allocator/deallocator (I don't know if the latter
is possible), I don't know how
2007/9/5, Christopher Barker [EMAIL PROTECTED]:
Matthieu Brucher wrote:
Blitz++ is more or less abandoned. It uses indexes that can be
non-portable between 32-bit platforms and 64-bit ones.
Oh well -- that seems remarkably short-sighted, but would I have done
better?
Well, it's too bad
2007/9/5, Robert Dailey [EMAIL PROTECTED]:
Hi,
I have two questions:
1) Is there any way in numpy to represent vectors? Currently I'm using
'array' for vectors.
A vector is an array with one dimension, it's OK. You could use a matrix of
dimension 1xn or nx1 as well.
2) Is there a way
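The two options in the reply, side by side (np.matrix is shown for completeness; plain 1-D arrays are the usual choice):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])      # a vector: 1-D array, shape (3,)
m = np.matrix([[1.0, 2.0, 3.0]])   # a 1x3 matrix
print(v.shape, m.shape, m.T.shape) # (3,) (1, 3) (3, 1)
```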
2007/9/5, Robert Dailey [EMAIL PROTECTED]:
Thanks for your response.
I was not able to find len() in the numpy documentation at the following
link:
http://www.scipy.org/doc/numpy_api_docs/namespace_index.html
Perhaps I'm looking in the wrong location?
Yes, it's a Python function ;)
He goes on to suggest that Blitz++ might have more of a future. (though
it looks like he's involved with the Boost project now)
Blitz++ is more or less abandoned. It uses indexes that can be non-portable
between 32-bit platforms and 64-bit ones.
Is there another alternative? At the moment,
Thank you for the answer
The svn version of the test() function now returns a TestResult object.
Numpy 1.3.x does not provide this? I can't upgrade the numpy packages on
the Linux boxes (on the Windows box, I suppose that I could use an Enthought
egg).
So, test() calls in buildbot should read:
from numpy import *
from numpy.linalg import *
linalg is in the numpy module
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
2007/8/21, Vivian Tini [EMAIL PROTECTED]:
Dear All,
I am trying to install the package NumPy-1.0.3 on Linux x86_64 with Python
version 2.4.2 and after using the standard installation command :
python setup.py install
I received the following error message:
error: could not create
Hi,
I wondered if there was a way of returning another error code than 0 when
executing the test suite so that a parent process can immediately know if
all the tests passed or not.
The numpy buildbot seems to have the same behaviour BTW.
I don't know if it is possible, but it would be great.
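As a workaround, a small wrapper script can translate a unittest-style result into a process exit code that a parent process (such as a buildbot) can inspect; a sketch, with the suite contents left up to the caller:

```python
import sys
import unittest

def run_and_exit(suite):
    """Run a test suite and exit non-zero if anything failed."""
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    sys.exit(0 if result.wasSuccessful() else 1)

# Example: an empty suite trivially succeeds, so this would exit with 0.
# run_and_exit(unittest.TestSuite())
```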
2007/8/24, mark [EMAIL PROTECTED]:
There may be multiple NaNs, but what Chris did is simply create two arrays
with the same NaN:
a = N.array((1,2,3,N.nan))
b = N.array((1,2,3,N.nan))
I think these should be the same.
Can anybody give me a good reason why they shouldn't, because it could
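For reference, IEEE 754 makes NaN unequal to everything, including itself, which is why the two arrays above do not compare equal element-wise; a quick check:

```python
import numpy as np

a = np.array((1, 2, 3, np.nan))
b = np.array((1, 2, 3, np.nan))
print(np.array_equal(a, b))               # False: nan != nan
print(np.allclose(a, b, equal_nan=True))  # True when NaNs are treated as equal
```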
I now get the feeling the delete command needs to copy the entire
array with the exception of the deleted item. I guess this is a hard thing
to do efficiently?
Well, if you don't copy the array, the value will always remain present.
Matthieu
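Right: numpy.delete builds a new array without the removed items and leaves the original untouched; a sketch:

```python
import numpy as np

a = np.arange(6)
b = np.delete(a, 2)  # copies everything except index 2
print(b.tolist())    # [0, 1, 3, 4, 5]
print(a.tolist())    # original unchanged: [0, 1, 2, 3, 4, 5]
```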
The where function ?
Matthieu
2007/8/15, mark [EMAIL PROTECTED]:
Oops, 'find' is in pylab (matplotlib).
I guess in numpy you have to use 'where', which does almost the same,
but it returns a Tuple.
Is there a function that is more like the find in matplotlib?
Mark
On Aug 15, 12:26 pm,
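np.where on a condition returns a tuple of index arrays (one per dimension); for a 1-D, matlab-find-like plain index array, np.flatnonzero does the job; a sketch:

```python
import numpy as np

x = np.array([0, 3, 0, 5, 7])
print(np.where(x > 2))        # (array([1, 3, 4]),) -- a tuple
print(np.flatnonzero(x > 2))  # [1 3 4] -- a plain index array, like find
```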
My 64 bit linux on Intel aligns arrays, whatever the data type, on 16 byte
boundaries. It might be interesting to see what happens with the Intel and
MSVC compilers, but I expect similar results.
According to the doc on MSDN, the data should be 16-byte aligned.
Matthieu
For platforms without posix_memalign, I don't see how to
implement a memory allocator with an arbitrary alignment (more
precisely, I don't see how to free it if I cannot assume a fixed
alignment: how do I know where the real pointer is?).
Visual Studio seems to offer a counterpart (also
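The classic workaround is to over-allocate and remember (or recompute) the offset back to the original pointer. In Python the same idea can be sketched by slicing an aligned view out of an oversized buffer (illustrative only; the helper name is made up):

```python
import numpy as np

def aligned_zeros(n, dtype=np.float64, align=64):
    """Return an n-element zero array whose data pointer is align-byte aligned.

    Over-allocate by `align` bytes, then slice at the offset that lands on
    an aligned address; the oversized buffer stays alive as the view's base,
    which is how the 'real pointer' is kept around for freeing.
    """
    itemsize = np.dtype(dtype).itemsize
    buf = np.zeros(n * itemsize + align, dtype=np.uint8)
    offset = (-buf.ctypes.data) % align
    return buf[offset:offset + n * itemsize].view(dtype)

a = aligned_zeros(100, align=64)
print(a.ctypes.data % 64)  # 0
```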
You can try using hist() with the correct range and number of bins.
Matthieu
2007/8/7, Nils Wagner [EMAIL PROTECTED]:
Hi all,
I have a list of integer numbers. The entries can vary between 0 and 19.
How can I count the occurrences of each number? Consider
data
[9, 6, 9, 6, 7, 9, 9, 10, 7,
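Besides hist(), np.bincount counts occurrences of small non-negative integers directly; a sketch using only the values visible above (the rest of the list is cut off in the archive):

```python
import numpy as np

data = [9, 6, 9, 6, 7, 9, 9, 10, 7]       # the visible part of the list
counts = np.bincount(data, minlength=20)  # one slot per value 0..19
print(counts[9], counts[6], counts[10])   # 4 2 1
```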
MKL is from Intel (free as in beer on Linux and for academic purpose I
think, but of course, you should check whether this applies to you).
AFAIK, the MKL is free for non-commercial purposes under Linux only, and
there is a special license for academics.
Matthieu
Hi,
I think you should look into scipy.ndimage which has minimum_filter and
maximum_filter
Matthieu
2007/7/24, Ludwig M Brinckmann [EMAIL PROTECTED]:
Hi there,
I have a large array, lets say 4 * 512, which I need to downsample by
a factor of 4 in the y direction, by factor 3 in the x
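scipy.ndimage's minimum_filter/maximum_filter give sliding-window extrema; for plain block downsampling by integer factors, a reshape does it in pure numpy. A sketch (the 4-by-3 factors are taken from the question; the helper name is made up):

```python
import numpy as np

def block_max(a, fy, fx):
    """Downsample a 2-D array by taking the max over fy-by-fx blocks."""
    ny, nx = a.shape
    a = a[:ny - ny % fy, :nx - nx % fx]  # trim to whole blocks
    return a.reshape(a.shape[0] // fy, fy, a.shape[1] // fx, fx).max(axis=(1, 3))

a = np.arange(12 * 9, dtype=float).reshape(12, 9)
print(block_max(a, 4, 3).shape)  # (3, 3)
```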
Hi,
Did you try ravel() instead ? If a copy is not needed, it returns a 1D view
of the array.
Matthieu
2007/7/18, Tom Goddard [EMAIL PROTECTED]:
Does taking a slice of a flatiter always make a copy? That appears to
be the behaviour in numpy 1.0.3.
For example a.flat[1:3][0] = 5 does not
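The difference shows up directly: ravel() on a contiguous array returns a view, while slicing a flatiter produces a copy; a sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
r = a.ravel()       # a view: no copy needed for a contiguous array
r[0] = 99
print(a[0, 0])      # 99 -- the write went through to a

a.flat[1:3][0] = 5  # slicing the flat iterator copies...
print(a[0, 1])      # 1 -- ...so a is unchanged
```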
Hi,
The simplest way of doing this is with ctypes :
http://scipy.org/Cookbook/Ctypes
Matthieu
2007/7/24, computer_guy [EMAIL PROTECTED]:
Hi Everyone,
I am going to write some external C functions that takes in numpy
arrays as parameters and return numpy arrays. I have the following
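The shape of the Cookbook recipe, sketched with a hypothetical library and function (libfoo and foo are made up, so the actual calls are left commented):

```python
import ctypes
import numpy as np

# Hypothetical C side:  void foo(double *a, int n)
# lib = ctypes.CDLL("./libfoo.so")
# lib.foo.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
# lib.foo.restype = None

a = np.zeros(10, dtype=np.float64)
ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
# lib.foo(ptr, a.size)  # the C function reads/writes a's memory in place
print(ctypes.addressof(ptr.contents) == a.ctypes.data)  # True: no copy made
```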
As for my unrelated question, I was still wondering if anyone has any
information about the relative merits of MKL vs ATLAS etc.
MKL is parallelized, so until ATLAS is as well, MKL has the upper hand.
Matthieu
Hi,
Did you try numpy.dot(sqrtm(a), sqrtm(a)) ?
Matthieu
2007/7/16, Kevin Jacobs [EMAIL PROTECTED] [EMAIL PROTECTED]:
Hi all,
This is a bit of a SciPy question, but I thought I'd ask here since I'm
already subscribed. I'd like to add some new LAPACK bindings to SciPy and
was wondering if
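The suggested check, in numpy-only form for a symmetric positive definite matrix (sqrtm itself lives in scipy.linalg; the eigendecomposition route below is one way to build a square root):

```python
import numpy as np

a = np.array([[4.0, 1.0], [1.0, 3.0]])
w, v = np.linalg.eigh(a)             # eigenvalues/vectors of a symmetric matrix
root = np.dot(v * np.sqrt(w), v.T)   # v @ diag(sqrt(w)) @ v.T
print(np.allclose(np.dot(root, root), a))  # True
```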
Oops, sorry, I missed the 'a=matrix'...
2007/7/16, Matthieu Brucher [EMAIL PROTECTED]:
Hi,
Did you try numpy.dot(sqrtm(a), sqrtm(a)) ?
Matthieu
2007/7/16, Kevin Jacobs [EMAIL PROTECTED] [EMAIL PROTECTED]:
Hi all,
This is a bit of a SciPy question, but I thought I'd ask here since I'm
Hi,
What version of MSVC are you using ?
If you really want to have an optimized version, don't use mingw (gcc is not
up to date).
Matthieu
2007/7/13, [EMAIL PROTECTED]
[EMAIL PROTECTED]:
Hi,
I am keen to evaluate numpy as an alternative to MATLAB for my PhD work
and possible wider use
Hi,
On Mon, 9 Jul 2007, Timothy Hochberg apparently wrote:
Why not simply use and | instead of + and *?
A couple reasons, none determinative.
1. numpy is right and Python is wrong in this case
I don't think I agree with this. Once you've decided to make Boolean a
subclass of Int, then
When you talk about algebra -- one might have to restrict oneself to '|'
and '&' -- not use '+' and '-'
E.g.:
True - True = False # right !?
Not exactly, because -True = +True,
so True - True = True + True = True.
You have to stay in the algebra the whole time.
# but if:
True+True
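numpy's boolean scalars do stay closed under the logical operators, while Python's built-in bools are integers under + and -; a quick illustration:

```python
import numpy as np

t = np.True_
print(t | t, t & t)  # True True -- stays in the boolean algebra
print(t + t)         # True -- numpy boolean '+' acts as logical or
print(True + True)   # 2 -- Python bools are a subclass of int
```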
2007/6/30, dmitrey [EMAIL PROTECTED]:
I didn't find your python prices for Star-P. Or are there any chances
for GPL/other free license for Python Star-P?
I've found them, several k€. A link would have been great (
http://www.interactivesupercomputing.com/products/starpandpython.php)
All
Hi,
Is there a comparison with parallel libraries that can be branched on numpy
like MKL ? (and IPP for random number ?)
Matthieu
2007/6/29, Ronnie Hoogerwerf [EMAIL PROTECTED]:
I am an Application Engineer at Interactive Supercomputing and we are
rolling out a beta version of our Star-P
Hi,
You have everything you need for PCA in numpy.linalg.
Matthieu
2007/6/21, Alex Torquato S. Carneiro [EMAIL PROTECTED]:
I'm doing some projects in Python (GNU/Linux system - Ubuntu 7.0) about
image processing. I need an implementation of PCA, preferably from a library
available via apt-get.
Thanks.
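A minimal PCA sketch with numpy.linalg, as the reply suggests (SVD of the centered data; the function name is made up):

```python
import numpy as np

def pca(x, k):
    """Project n-by-d data onto its first k principal components."""
    xc = x - x.mean(axis=0)                        # center each feature
    u, s, vt = np.linalg.svd(xc, full_matrices=False)
    return np.dot(xc, vt[:k].T)                    # scores in the top-k subspace

x = np.random.rand(50, 5)
print(pca(x, 2).shape)  # (50, 2)
```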
Hi,
This was discussed some time ago (I started it because I had exactly the
same problem), numpy is not responsible for this, Python is. Python uses the
C standard library, and in the MS C runtime NaN and Inf can be displayed
but not read back from a string, so this is the behaviour displayed here.
Wait for
Hi,
I've been trying to use the Intel C Compiler for some extensions, and as a
matter of fact, only numpy.distutils seems to support it... (hope that the
next version of setuptools will...)
Is it possible to change the compiler options on the command
line or in a .cfg file? For the
Maybe, maybe not. On 64bit Intel machines running 64bit linux the fedora
package raises an illegal instruction error. Since the fedora package is
based on the debian package this might be a problem on Ubuntu also. For
recent hardware you are probably better off compiling your own from the
latest
Red Hat uses Debian packages? That sounds odd... FC uses RPM, Debian
uses deb packages. The problem with RPM is, as stated by David some
time ago, that a lot of info is missing in RPM that is present in deb.
I don't think I stated that :)
Well you said, IIRC, that you had troubles making
Hi,
1) I'd like to get it going so that we can push out an electronic issue
after the SciPy conference (in September)
That would be great for the fame of numpy and scipy :)
2) I think it's scope should be limited to papers that describe
algorithms and code that are in NumPy / SciPy /
Umm, no,
There really aren't any transparent fast fft convolutions in SciPy. The
closest thing is in signaltools, fftconvolve, and if you ask it to convolve,
say, sequences whose length add up to 7902, then it will do a size 7901
transform. Because 7901 is prime this takes about 300 times as
There really aren't any transparent fast fft convolutions in SciPy. The
closest thing is in signaltools, fftconvolve, and if you ask it to convolve,
say, sequences whose length add up to 7902, then it will do a size 7901
transform.
BTW, is this really a glitch? I think there are two schools
I'm a moderator on a French programming forum, and in the algorithms
area, there are people who don't even know the FFT algorithm but want
to do complicated things with it; it is bound to fail. I suppose that
indicating this limitation in the docstring is enough, so that
people make the
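The standard way around a prime transform length is to zero-pad to a fast (e.g. power-of-two) size and truncate afterwards; a numpy-only sketch of what a caller can do (the helper name is made up):

```python
import numpy as np

def fast_conv(a, b):
    """Linear convolution via FFT, zero-padded to the next power of two."""
    n = len(a) + len(b) - 1                # true output length (7901 above)
    nfast = 1 << (n - 1).bit_length()      # next power of two >= n
    out = np.fft.irfft(np.fft.rfft(a, nfast) * np.fft.rfft(b, nfast), nfast)
    return out[:n]

print(np.allclose(fast_conv(np.ones(4), np.ones(3)),
                  np.convolve(np.ones(4), np.ones(3))))  # True
```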
Sorry, not cross-correlation, convolution, too early in the morning here...
2007/5/27, Matthieu Brucher [EMAIL PROTECTED]:
Hi,
Isn't the chirp transform only two cross-correlations ? And for a fast
one, there is a module in SciPy, and I think that kind of operation belongs
more to Scipy than
Hi,
you have a problem with your Ubuntu installation, not with numpy.
Matthieu
2007/5/13, dmitrey [EMAIL PROTECTED]:
Stefan van der Walt wrote:
On Sun, May 13, 2007 at 06:19:30PM +0300, dmitrey wrote:
Is it possible somehow to speedup numpy 1.0.3 appearing in Linux update
channels? (as
Don't forget that '*' is element-wise for arrays, use dot instead ;)
Matthieu
2007/5/6, dpn [EMAIL PROTECTED]:
Hi,
i have two questions, both loosely related to SVD.
I've seen this post:
http://thread.gmane.org/gmane.comp.python.numeric.general/4575
u,s,v =
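The first reply's distinction, plus the usual SVD call shape, illustrated:

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
print((a * a)[0, 0])       # 1.0 -- element-wise product
print(np.dot(a, a)[0, 0])  # 7.0 -- matrix product

u, s, vt = np.linalg.svd(a)
print(np.allclose(np.dot(u * s, vt), a))  # True: a == u @ diag(s) @ vt
```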
In short, I don't know that this is a bug. It is a missing feature, but
It may be hard to get someone to write the code to account for the
limited fscanf() in fromfile().
That's what I was thinking :(
Matthieu
2007/4/29, Anton Sherwood [EMAIL PROTECTED]:
Anton Sherwood wrote:
I'm using eigenvectors of a graph's adjacency matrix as topological
coordinates of the graph's vertices as embedded in 3-space (something I
learned about just recently). Whenever I've done this with a graph
that
*does*
Well, that's easy ;)
OK, I have to dig into the transformations of numpy arrays, knowing that
I have other parameters. But for this, the Cookbook at scipy should help me
a lot.
Thanks for the help ;)
Matthieu
Ok, I have a simple working example. It is actually much easier than I
thought,
2007/4/20, Matthieu Brucher [EMAIL PROTECTED]:
Well, that's easy ;)
OK, I have to dig into the transformations of numpy arrays, knowing
that I have other parameters. But for this, the Cookbook at scipy should
help me a lot.
Thanks for the help ;)
Matthieu
Some news, I finished
Hi,
I want to wrap some code I've done in the past with a custom array and pass
numpy arrays to it.
So I need to transform numpy arrays to my arrays at the construction of an
instance of my class, as well as each call to a method (pass by value).
Then, some methods return by value an array I have
You should give ctypes a try, I find it much better than swig most
of the time for wrapping. You can find some doc here:
http://www.scipy.org/Cookbook/Ctypes2
Basically, once you get your dll/so with a function foo(double *a,
int n), you can call it directly in numpy by passing
This does not mean that all use of operator overloading is inherently
bad. Notably, there is a C++ numerical library called Blitz++ which can
avoid these temporaries for small fixed-size arrays. As it depends on
template metaprogramming, the size must be known at compile time. But if
this is the
I would say that if the underlying atlas library is multithreaded, numpy
operations will be as well. Then, at the Python level, even if the
operations take a lot of time, the interpreter will be able to process
threads, as the lock is freed during the numpy operations - as I understood
for the
Hi,
I followed the discussion on the scipy ML, and I would advocate it as well.
I miss the dichotomy that is present in Matlab, and to have a similar degree
of freedom, it would be good to have it in the upcoming major release of
Python.
Matthieu
2007/3/24, Travis Oliphant [EMAIL PROTECTED]:
On the scipy user list, this exact question appeared last month, so you can
check the answers in the archive :)
Matthieu
2007/3/2, Stephen Kelly [EMAIL PROTECTED]:
Hi,
I'm working on a project that requires interpolation, and I found this
post