I'd like to have a look at the implementation of iadd in numpy,
but I'm having a really hard time finding the corresponding code.
I'm basically stuck at
https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/number.c#L487
Could someone give me a pointer where to find it?
Hi,
I'm using numpy 1.6.1 on Ubuntu 12.04.1 LTS.
Code that used to work with an older version of numpy now fails with an error.
Were there any changes in the way inplace operations like +=, *=, etc.
work on arrays with non-standard strides?
For the script:
--- start of code ---
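The script itself is truncated in the archive. As a stand-in, here is a minimal sketch (hypothetical, not the original script) of an in-place operation on an array with non-standard strides — a transposed, hence non-contiguous, view — which is the kind of case the question is about:

```python
import numpy as np

a = np.arange(6, dtype=float).reshape(2, 3)
v = a.T                      # non-contiguous view: "non-standard" strides
assert not v.flags['C_CONTIGUOUS']

v += 10                      # in-place add through the strided view
# the update is visible in the base array
print(a[0, 0], a[1, 2])      # -> 10.0 15.0
```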
Hello all,
I'd like to call a Python function from a C++ code. The Python
function has numpy.ndarrays as input.
I figured that the easiest way would be to use ctypes.
However, I can't get numpy and ctypes to work together.
--- run.c
#include <Python.h>
#include
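The C file is cut off after the includes. On the Python side, the usual numpy/ctypes glue looks like the following sketch (hypothetical example, not the poster's code): the array's buffer is exposed as a ctypes pointer — which is what a C or C++ function would receive — and can be wrapped back into an ndarray without copying:

```python
import ctypes
import numpy as np

x = np.arange(5, dtype=np.float64)

# expose the array's buffer as a ctypes pointer (what C-side code would see)
ptr = x.ctypes.data_as(ctypes.POINTER(ctypes.c_double))

# wrap the same memory back into an ndarray; no copy is made
y = np.ctypeslib.as_array(ptr, shape=(5,))
y[0] = 42.0
print(x[0])   # -> 42.0 (same memory)
```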
using math.cos instead of numpy.cos should be much faster.
I believe this is a known issue of numpy.
On Thu, Nov 25, 2010 at 11:13 AM, Jean-Luc Menut jeanluc.me...@free.fr wrote:
Hello all,
I have a little question about the speed of numpy vs IDL 7.0. I did a
very simple little check by
Hello Gael,
On Tue, Nov 23, 2010 at 10:27 AM, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
On Tue, Nov 23, 2010 at 10:18:50AM +0100, Matthieu Brucher wrote:
The problem is that I can't tell the Nelder-Mead that the smallest jump
it should attempt is .5. I can set xtol to .5, but it
On Tue, Nov 23, 2010 at 11:17 AM, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
On Tue, Nov 23, 2010 at 11:13:23AM +0100, Sebastian Walter wrote:
I'm not familiar with dichotomy optimization.
Several techniques have been proposed to solve the problem: genetic
algorithms, simulated
On Tue, Nov 23, 2010 at 11:43 AM, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
On Tue, Nov 23, 2010 at 11:37:02AM +0100, Sebastian Walter wrote:
min_x f(x)
s.t. lo <= Ax + b <= up
0 <= g(x)
0 = h(x)
No constraints.
didn't you say that you operate only
On Tue, Nov 23, 2010 at 2:50 PM, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
On Tue, Nov 23, 2010 at 02:47:10PM +0100, Sebastian Walter wrote:
Well, I don't know what the best method is to solve your problem, so
take the following with a grain of salt:
Wouldn't it be better to change
On Wed, Oct 27, 2010 at 10:50 PM, Nicolai Heitz nicolaihe...@gmx.de wrote:
On 27.10.2010 02:02, Sebastian Walter wrote:
On Wed, Oct 27, 2010 at 12:59 AM, Pauli Virtanenp...@iki.fi wrote:
Tue, 26 Oct 2010 14:24:39 -0700, Nicolai Heitz wrote:
http://mail.scipy.org/mailman/listinfo/scipy
Hmm, I have just realized that I forgot to upload the new version to PyPI;
it is now available at
http://pypi.python.org/pypi/algopy
On Thu, Oct 28, 2010 at 10:47 AM, Sebastian Walter
sebastian.wal...@gmail.com wrote:
On Wed, Oct 27, 2010 at 10:50 PM, Nicolai Heitz nicolaihe...@gmx.de wrote:
On Wed, Oct 27, 2010 at 12:59 AM, Pauli Virtanen p...@iki.fi wrote:
Tue, 26 Oct 2010 14:24:39 -0700, Nicolai Heitz wrote:
http://mail.scipy.org/mailman/listinfo/scipy-user
I contacted them already, but they haven't responded so far, and I was
forwarded to this list, which was supposed to be
Hello Friedrich,
I have read your proposal. You describe issues that I have also
encountered several times.
I believe that your priops approach would be an improvement over the
current overloading of binary operators.
That being said, I think the issue is not so much numpy but rather the
way
is it really the covariance matrix you want to invert? Or do you want
to compute something like
x^T C^{-1} x,
where x is an array of size N and C an array of size (N,N)?
It would also be interesting to know how the covariance matrix gets computed
and what its condition number is, at least
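For the quadratic form x^T C^{-1} x, explicitly inverting C is usually unnecessary and numerically worse; a common sketch (with hypothetical data) solves a linear system instead:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
A = rng.standard_normal((N, N))
C = A @ A.T + N * np.eye(N)       # symmetric positive definite "covariance"
x = rng.standard_normal(N)

# x^T C^{-1} x without ever forming C^{-1}
quad = x @ np.linalg.solve(C, x)

# same value via the explicit inverse (for comparison only)
quad_inv = x @ np.linalg.inv(C) @ x
assert np.isclose(quad, quad_inv)
```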
In [15]: y = x*z
In [16]: z.shape
Out[16]: (4, 1, 6)
In [17]: y.shape
Out[17]: (2, 3, 4, 5, 6)
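x's shape is not shown in the quoted session; assuming x.shape == (2, 3, 4, 5, 1), the reported shapes are exactly what numpy's broadcasting rules produce:

```python
import numpy as np

# (2, 3, 4, 5, 1) is one shape consistent with the reported result
x = np.ones((2, 3, 4, 5, 1))
z = np.ones((4, 1, 6))

# z is right-aligned to (1, 1, 4, 1, 6); size-1 axes are stretched
y = x * z
print(y.shape)   # -> (2, 3, 4, 5, 6)
```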
Sebastian
John
On Sun, Aug 1, 2010 at 5:05 AM, Sebastian Walter
sebastian.wal...@gmail.com wrote:
I'm happy to announce the first official release of ALGOPY in version
0.2.1.
Rationale
I'm happy to announce the first official release of ALGOPY in version 0.2.1.
Rationale:
The purpose of ALGOPY is the evaluation of higher-order derivatives in
the forward and reverse mode of Algorithmic Differentiation (AD) using
univariate Taylor polynomial arithmetic. Particular focus
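The core of univariate Taylor polynomial arithmetic is coefficient convolution (the Cauchy product). A minimal sketch of the idea — not ALGOPY's actual API — that recovers a derivative from a Taylor multiplication:

```python
import numpy as np

def utp_mul(a, b):
    """Multiply two truncated Taylor polynomials given as coefficient
    arrays [c0, c1, ..., c_{D-1}]; the result is truncated to degree D-1."""
    D = len(a)
    c = np.zeros(D)
    for d in range(D):
        # Cauchy product: c_d = sum_{k=0}^{d} a_k * b_{d-k}
        c[d] = sum(a[k] * b[d - k] for k in range(d + 1))
    return c

# derivative of x**2 at x0 = 3 via Taylor arithmetic: x = 3 + 1*t
x = np.array([3.0, 1.0])
y = utp_mul(x, x)
print(y)   # value 9, first derivative 6
```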
On Sun, Jun 13, 2010 at 8:11 PM, Alan Bromborsky abro...@verizon.net wrote:
Friedrich Romstedt wrote:
2010/6/13 Pauli Virtanen p...@iki.fi:
def tensor_contraction_single(tensor, dimensions):
    """Perform a single tensor contraction over the dimensions given"""
    swap = [x for x in
On Thu, Jun 10, 2010 at 6:48 PM, Sturla Molden stu...@molden.no wrote:
I have a few radical suggestions:
1. Use ctypes as glue to the core DLL, so we can completely forget about
refcounts and similar mess. Why put manual reference counting and error
handling in the core? It's stupid.
I
On Sat, Jun 12, 2010 at 3:57 PM, David Cournapeau courn...@gmail.com wrote:
On Sat, Jun 12, 2010 at 10:27 PM, Sebastian Walter
sebastian.wal...@gmail.com wrote:
On Thu, Jun 10, 2010 at 6:48 PM, Sturla Molden stu...@molden.no wrote:
I have a few radical suggestions:
1. Use ctypes as glue
I'm a potential user of the C-API and therefore I'm very interested in
the outcome.
In the previous discussion
(http://comments.gmane.org/gmane.comp.python.numeric.general/37409)
many different views on what the new C-API should be were expressed.
Naturally, I wonder if the new C-API will be
On Wed, May 26, 2010 at 12:31 PM, Pauli Virtanen p...@iki.fi wrote:
Wed, 26 May 2010 10:50:19 +0200, Sebastian Walter wrote:
I'm a potential user of the C-API and therefore I'm very interested in
the outcome.
In the previous discussion
(http://comments.gmane.org
playing devil's advocate I'd say use Algorithmic Differentiation
instead of finite differences ;)
that would probably speed things up quite a lot.
On Tue, May 4, 2010 at 11:36 PM, Davide Lasagna lasagnadav...@gmail.com wrote:
If your x data are equispaced I would do something like this
def
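The quoted reply breaks off at `def`. For equispaced x data, a standard finite-difference sketch (one plausible reading of the suggestion, using numpy's built-in gradient) is:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)   # equispaced grid, h = 0.01
y = x**2

dydx = np.gradient(y, x)         # second-order central differences inside
# d/dx x^2 = 2x; central differences are exact for quadratics at
# interior points
assert np.allclose(dydx[1:-1], 2 * x[1:-1])
```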
On Tue, Apr 13, 2010 at 12:29 AM, Charles R Harris
charlesr.har...@gmail.com wrote:
On Mon, Apr 12, 2010 at 4:19 PM, Travis Oliphant oliph...@enthought.com
wrote:
On Apr 11, 2010, at 4:17 PM, Sebastian Walter wrote:
Ermm, the reply above is quite poor, sorry about that.
What I meant
On Tue, Apr 6, 2010 at 9:16 PM, Travis Oliphant oliph...@enthought.com wrote:
On Apr 6, 2010, at 9:08 AM, David Cournapeau wrote:
Hi Travis,
On Tue, Apr 6, 2010 at 7:43 AM, Travis Oliphant oliph...@enthought.com
wrote:
I should have some time over the next couple of weeks, and I am very
On Sun, Apr 11, 2010 at 12:59 PM, Sebastian Walter
sebastian.wal...@gmail.com wrote:
On Tue, Apr 6, 2010 at 9:16 PM, Travis Oliphant oliph...@enthought.com
wrote:
On Apr 6, 2010, at 9:08 AM, David Cournapeau wrote:
Hi Travis,
On Tue, Apr 6, 2010 at 7:43 AM, Travis Oliphant oliph
On Fri, Mar 19, 2010 at 11:18 PM, David Warde-Farley d...@cs.toronto.edu
wrote:
On 19-Mar-10, at 1:13 PM, Anne Archibald wrote:
I'm not knocking numpy; it does (almost) the best it can. (I'm not
sure of the optimality of the order in which ufuncs are executed; I
think some optimizations
On Thu, Mar 18, 2010 at 7:01 AM, Frank Horowitz fr...@horow.net wrote:
Dear All,
I'm working on a piece of optimisation code where it turns out to be
mathematically convenient to have a matrix where a few pre-chosen elements
must be computed at evaluation time for the dot product (i.e.
On Sun, Feb 28, 2010 at 12:30 AM, Friedrich Romstedt
friedrichromst...@gmail.com wrote:
2010/2/27 Sebastian Walter sebastian.wal...@gmail.com:
I'm sorry this comment turns out to be confusing.
Maybe it's not important.
It has apparently quite the contrary effect of what I wanted to achieve
On Sun, Feb 28, 2010 at 12:47 AM, Friedrich Romstedt
friedrichromst...@gmail.com wrote:
Sebastian, and, please, be not offended by what I wrote. I regret a
bit my jokes ... It's simply too late at night.
no offense taken
Friedrich
On Sun, Feb 28, 2010 at 9:06 PM, Friedrich Romstedt
friedrichromst...@gmail.com wrote:
2010/2/28 Sebastian Walter sebastian.wal...@gmail.com:
I think I can use that to make my upy accept arbitrary functions, but
how do you apply sin() to a TTP?
perform truncated Taylor expansion of [y]_D
Announcement:
---
I have started to implement vectorized univariate truncated Taylor
polynomial operations (add,sub,mul,div,sin,exp,...) in ANSI-C.
The interface to Python is implemented using numpy.ndarray's ctypes
functionality. Unit tests are implemented using nose.
It is
On Sat, Feb 27, 2010 at 10:02 PM, Friedrich Romstedt
friedrichromst...@gmail.com wrote:
To the core developers (of numpy.polynomial e.g.): Skip the mess and
read the last paragraph.
The other things I will post back to the list, where they belong to.
I just didn't want to have off-topic
On Sat, Feb 27, 2010 at 11:11 PM, Friedrich Romstedt
friedrichromst...@gmail.com wrote:
Ok, it took me about one hour, but here they are: Fourier-accelerated
polynomials.
that's the spirit! ;)
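The "Fourier-accelerated polynomials" idea can be sketched as multiplying polynomial coefficient arrays via the FFT — a pointwise product of spectra equals a convolution of coefficients. A minimal, hypothetical version checked against np.convolve:

```python
import numpy as np

def fft_polymul(a, b):
    """Multiply polynomials (coefficient arrays) via the FFT:
    pointwise product of spectra == convolution of coefficients."""
    n = len(a) + len(b) - 1
    fa = np.fft.rfft(a, n)
    fb = np.fft.rfft(b, n)
    return np.fft.irfft(fa * fb, n)

a = np.array([1.0, 2.0, 3.0])   # 1 + 2x + 3x^2
b = np.array([4.0, 5.0])        # 4 + 5x
print(np.round(fft_polymul(a, b), 6))   # matches np.convolve(a, b)
```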
numpy.linalg.eig guarantees to return right eigenvectors.
evec is not necessarily an orthonormal matrix when there are
eigenvalues with multiplicity greater than 1.
For symmetrical matrices you'll have mutually orthogonal eigenspaces
but each eigenspace might be spanned by
vectors that are not orthogonal to
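A small demonstration that eig's right eigenvectors need not be mutually orthogonal, here for a non-symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])      # non-symmetric, eigenvalues 2 and 3

w, V = np.linalg.eig(A)

# each column of V is a right eigenvector: A v = lambda v
for lam, v in zip(w, V.T):
    assert np.allclose(A @ v, lam * v)

# but the eigenvector matrix is not orthonormal
assert not np.allclose(V.T @ V, np.eye(2))
```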
On Tue, Jan 12, 2010 at 7:38 PM, Robert Kern robert.k...@gmail.com wrote:
On Tue, Jan 12, 2010 at 12:31, Sebastian Walter
sebastian.wal...@gmail.com wrote:
On Tue, Jan 12, 2010 at 7:09 PM, Robert Kern robert.k...@gmail.com wrote:
On Tue, Jan 12, 2010
Hello,
I have a question about the augmented assignment statements *=, +=, etc.
Apparently, the casting of types is not working correctly. Is this
known or intended behavior of numpy?
(I'm using numpy.__version__ = '1.4.0.dev7039' on this machine but I
remember a recent checkout of numpy
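The casting behavior of augmented assignments is the key point here, and it has changed across numpy versions: out-of-place operations upcast, while in-place operations must write the result back into the original buffer. A sketch of the current behavior (older versions silently cast the float result back to int instead of raising):

```python
import numpy as np

a = np.arange(3)        # integer array
b = a * 1.5             # out-of-place: result is upcast to float64
print(b.dtype)          # -> float64

# in-place, the float result cannot be cast back to the int buffer under
# the 'same_kind' rule, so recent numpy raises; old versions truncated
try:
    a *= 1.5
    print("silently cast back:", a)   # old numpy behavior
except TypeError:
    print("in-place cast refused")    # numpy >= 1.10 behavior
```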
On Tue, Jan 12, 2010 at 7:09 PM, Robert Kern robert.k...@gmail.com wrote:
On Tue, Jan 12, 2010 at 12:05, Sebastian Walter
sebastian.wal...@gmail.com wrote:
Hello,
I have a question about the augmented assignment statements *=, +=, etc.
Apparently, the casting of types is not working correctly
On Tue, Jan 12, 2010 at 7:38 PM, Robert Kern robert.k...@gmail.com wrote:
On Tue, Jan 12, 2010 at 12:31, Sebastian Walter
sebastian.wal...@gmail.com wrote:
On Tue, Jan 12, 2010 at 7:09 PM, Robert Kern robert.k...@gmail.com wrote:
On Tue, Jan 12, 2010 at 12:05, Sebastian Walter
sebastian.wal
On Tue, Oct 20, 2009 at 5:45 AM, Anne Archibald
peridot.face...@gmail.com wrote:
2009/10/19 Sebastian Walter sebastian.wal...@gmail.com:
I'm all for generic (u)funcs since they might come handy for me since
I'm doing lots of operation on arrays of polynomials.
Just as a side note, if you
function has a preprocessing part and a post processing part.
After the preprocessing call the original ufuncs on the base class
object, e.g. __add__
Sebastian
On Mon, Oct 19, 2009 at 1:55 PM, Darren Dale dsdal...@gmail.com wrote:
On Mon, Oct 19, 2009 at 3:10 AM, Sebastian Walter
sebastian.wal
On Sat, Oct 17, 2009 at 2:49 PM, Darren Dale dsdal...@gmail.com wrote:
numpy's functions, especially ufuncs, have had some ability to support
subclasses through the ndarray.__array_wrap__ method, which provides
masked arrays or quantities (for example) with an opportunity to set
the class and
On Fri, Oct 2, 2009 at 10:40 PM, josef.p...@gmail.com wrote:
On Fri, Oct 2, 2009 at 3:38 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
On Fri, Oct 2, 2009 at 12:30 PM, josef.p...@gmail.com wrote:
On Fri, Oct 2, 2009 at 2:09 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
This is somewhat similar to the question about fixed-point arithmetic
earlier on this mailing list.
I need to do computations on arrays whose elements are truncated polynomials.
At the momement, I have implemented the univariate truncated
polynomials as objects of a class UTPS.
The class
Sorry if this is a duplicate; it seems that my last mail got lost...
Is there something to take care of when sending a mail to the numpy
mailing list?
On Tue, Sep 22, 2009 at 9:42 AM, Sebastian Walter
sebastian.wal...@gmail.com wrote:
This is somewhat similar to the question about fixed-point
I'm sure there is a better solution:
In [1]: x = numpy.array([i for i in range(10)])
In [2]: foo = lambda n: -n if n!=0 else None
In [3]: x[:foo(1)]
Out[3]: array([0, 1, 2, 3, 4, 5, 6, 7, 8])
In [4]: x[:foo(0)]
Out[4]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
On Wed, Aug 19, 2009
In [45]: x[: -0 or None]
Out[45]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [46]: x[: -1 or None]
Out[46]: array([0, 1, 2, 3, 4, 5, 6, 7, 8])
works fine without slice()
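The `-n or None` trick above can be wrapped in a small helper so that "drop the last n elements" also works for n == 0 (an illustrative function name, not from the thread):

```python
import numpy as np

def drop_last(x, n):
    """Return x without its last n elements; n == 0 returns x unchanged.
    Relies on `-0 or None` evaluating to None (x[:None] is the full slice)."""
    return x[: -n or None]

x = np.arange(10)
print(drop_last(x, 1))   # -> [0 1 2 3 4 5 6 7 8]
print(drop_last(x, 0))   # -> [0 1 2 3 4 5 6 7 8 9]
```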
On Wed, Aug 19, 2009 at 2:54 PM, Citi, Lucalc...@essex.ac.uk wrote:
Another solution (elegant?? readable??) :
x[slice(-n or
wrote:
On Tue, Jun 2, 2009 at 1:42 AM, Sebastian Walter
sebastian.wal...@gmail.com wrote:
Hello,
Multiplying a numpy.array of objects by a Python float works flawlessly,
but not by a numpy.float64.
I tried numpy version '1.0.4' on a 32 bit Linux and '1.2.1' on a 64
bit Linux: both
On Thu, Jun 4, 2009 at 10:56 PM, Chris Colbertsccolb...@gmail.com wrote:
I should update after reading the thread Sebastian linked:
The current 1.3 version of numpy (don't know about previous versions) uses
the optimized Atlas BLAS routines for numpy.dot() if numpy was compiled with
these
On Fri, Jun 5, 2009 at 11:58 AM, David
Cournapeauda...@ar.media.kyoto-u.ac.jp wrote:
Sebastian Walter wrote:
On Thu, Jun 4, 2009 at 10:56 PM, Chris Colbertsccolb...@gmail.com wrote:
I should update after reading the thread Sebastian linked:
The current 1.3 version of numpy (don't know about
Have a look at this thread:
http://www.mail-archive.com/numpy-discussion@scipy.org/msg13085.html
The speed difference is probably due to the fact that the matrix
multiplication does not call an optimized BLAS routine, e.g.
the ATLAS BLAS.
Sebastian
On Thu, Jun 4, 2009 at 3:36 PM,
Hello,
Multiplying a numpy.array of objects by a Python float works flawlessly,
but not by a numpy.float64.
I tried numpy version '1.0.4' on a 32-bit Linux and '1.2.1' on a 64-bit
Linux: both raise the same exception.
Is this a (known) bug?
-- test.py
On Tue, Jun 2, 2009 at 4:18 PM, Darren Dale dsdal...@gmail.com wrote:
On Tue, Jun 2, 2009 at 10:09 AM, Keith Goodman kwgood...@gmail.com wrote:
On Tue, Jun 2, 2009 at 1:42 AM, Sebastian Walter
sebastian.wal...@gmail.com wrote:
Hello,
Multiplying a numpy.array of objects by a Python float
I'd be interested to see the benchmark ;)
On Thu, May 28, 2009 at 4:14 PM, Nicolas Rougier
nicolas.roug...@loria.fr wrote:
Hi,
I'm now testing dot product and using the following:
import numpy as np, scipy.sparse as sp
A = np.matrix(np.zeros((5,10)))
B = np.zeros((10,1))
print
Alternatively, to solve A x = b you could do
import numpy
import numpy.linalg
B = numpy.dot(A.T, A)
c = numpy.dot(A.T, b)
x = numpy.linalg.solve(B, c)
This is not the most efficient way to do it but at least you know
exactly what's going on in your code.
On Sun, May 17, 2009 at 7:21 PM,
2009/5/18 Stéfan van der Walt ste...@sun.ac.za:
2009/5/18 Sebastian Walter sebastian.wal...@gmail.com:
B = numpy.dot(A.T, A)
This multiplication should be avoided whenever possible -- you are
effectively squaring your condition number.
Indeed.
In the case where you have more rows than
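The condition-number warning above suggests handing A to a least-squares solver directly rather than forming A^T A. A sketch with hypothetical data, comparing the two routes:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))   # overdetermined system (hypothetical data)
b = rng.standard_normal(20)

# normal equations: cond(A^T A) == cond(A)**2
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# SVD-based least squares works on A directly, so it is better conditioned
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(x_ne, x_ls)
```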
On Wed, May 13, 2009 at 10:18 PM, David J Strozzi stroz...@llnl.gov wrote:
Hi,
[You may want to edit the numpy homepage numpy.scipy.org to tell
people they must subscribe to post, and adding a link to
http://www.scipy.org/Mailing_Lists]
Many of you probably know of the interpreter yorick
Hi Mathew,
1) What does it mean if a value is None? I.e., what is larger: None or 3?
The first thing I would do is convert the None values to numbers.
2) Are your arrays integer arrays or double arrays?
It's much easier if they are doubles because then you could use
standard methods for NLP problems,
I tried looking at your question, but it's kind of unusable without some
documentation.
You need to give at least the following information:
what kind of optimization problem?
LP, NLP, mixed-integer LP, stochastic, semi-infinite, semidefinite?
Most solvers require the problem in the following form
+1 to that
Often, one is only interested in the largest or smallest
eigenvalues/vectors of a problem. Then the method of choice is an
iterative solver, e.g. the Lanczos algorithm.
If only the largest eigenvalue/vector is needed, you could try the
power iteration.
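Power iteration itself fits in a few lines; a minimal sketch (hypothetical function, using the Rayleigh quotient for the eigenvalue estimate):

```python
import numpy as np

def power_iteration(A, iters=200, seed=0):
    """Estimate the dominant eigenvalue/eigenvector of A by power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    lam = v @ A @ v          # Rayleigh quotient
    return lam, v

A = np.diag([1.0, 2.0, 5.0])   # dominant eigenvalue is 5
lam, v = power_iteration(A)
print(round(lam, 6))
```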
On Wed, Apr 29, 2009 at 7:49 AM,
I'm looking for a library that has linear algebra routines on
objects of a self-written class (with overloaded operators +,-,*,/).
An example would be truncated Taylor polynomials.
One can perform all operations +,-,*,/ on truncated Taylor polynomials.
Often it is also necessary to perform linear
There are several possibilities, some of them are listed on
http://en.wikipedia.org/wiki/Automatic_differentiation
== pycppad
http://www.seanet.com/~bradbell/pycppad/index.xml
pycppad is a wrapper of the C++ library CppAD ( http://www.coin-or.org/CppAD/ )
the wrapper can do up to second order
On Sun, Feb 1, 2009 at 12:24 AM, Robert Kern robert.k...@gmail.com wrote:
On Sat, Jan 31, 2009 at 10:30, Sebastian Walter
sebastian.wal...@gmail.com wrote:
Wouldn't it be nice to have numpy a little more generic?
All that would be needed was a little check of the arguments.
If I do
Wouldn't it be nice to have numpy a little more generic?
All that would be needed was a little check of the arguments.
If I do:
numpy.trace(4)
shouldn't numpy be smart enough to regard the 4 as a 1x1 array?
numpy.sin(4) works!
and if
x = my_class(4)
wouldn't it be nice if
numpy.trace(x)
would
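numpy.trace(4) fails because trace expects an array of at least two dimensions. np.atleast_2d gives exactly the "regard the 4 as a 1x1 array" behavior the post asks for; a sketch with an illustrative wrapper name:

```python
import numpy as np

def generic_trace(x):
    """Trace that also accepts scalars, treating 4 as the 1x1 array [[4]]."""
    return np.trace(np.atleast_2d(x))

print(generic_trace(4))           # -> 4
print(generic_trace(np.eye(3)))   # -> 3.0
```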
Hey,
What is the best solution to get this code working?
Anyone a good idea?
-- test.py ---
import numpy
import numpy.linalg
class afloat:
    def __init__(self, x):
        self.x = x
    def __add__(self, rhs):
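The class definition is cut off in the archive. A hedged completion (hypothetical, not the poster's code) showing that object-dtype numpy arrays dispatch element-wise to the overloaded operators:

```python
import numpy as np

class AFloat:
    """Minimal stand-in for the truncated `afloat` class above:
    a wrapped float with overloaded arithmetic."""
    def __init__(self, x):
        self.x = x
    def __add__(self, rhs):
        return AFloat(self.x + rhs.x)
    def __mul__(self, rhs):
        return AFloat(self.x * rhs.x)
    def __repr__(self):
        return f"AFloat({self.x})"

a = np.array([AFloat(1.0), AFloat(2.0)], dtype=object)
b = np.array([AFloat(3.0), AFloat(4.0)], dtype=object)

s = a + b                # element-wise, dispatches to AFloat.__add__
print(s[0].x, s[1].x)    # -> 4.0 6.0
```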