I'd like to have a look at the implementation of iadd in numpy,
but I'm having a really hard time finding the corresponding code.
I'm basically stuck at
https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/number.c#L487
Could someone give me a pointer where to find it?
Respectively,
Hi,
I'm using numpy 1.6.1 on Ubuntu 12.04.1 LTS.
Code that used to work with an older version of numpy now fails with an error.
Were there any changes in the way inplace operations like +=, *=, etc.
work on arrays with non-standard strides?
For the script:
--- start of code ---
impor
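The script itself is cut off in the archive; a minimal sketch (my reconstruction, not the original script) of the kind of in-place operation on a non-contiguous view the question is about:

```python
import numpy as np

a = np.arange(10.0)
v = a[::2]        # non-contiguous view with stride 2
v += 1            # in-place add through the strided view writes back into a
print(a)          # [1. 1. 3. 3. 5. 5. 7. 7. 9. 9.]
```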
Hello all,
I'd like to call a Python function from C++ code. The Python
function takes numpy.ndarrays as input.
I figured that the easiest way would be to use ctypes.
However, I can't get numpy and ctypes to work together.
--- run.c
#include <Python.h>
#include <numpy/arrayobject.h>
void run(PyArrayObje
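On the Python side, numpy.ctypeslib can expose an ndarray's buffer to ctypes without copying; a minimal sketch of the shared-memory round trip (no compiled library involved here):

```python
import numpy as np
from numpy import ctypeslib

a = np.arange(6, dtype=np.float64)
c_view = ctypeslib.as_ctypes(a)   # ctypes array sharing a's memory
c_view[0] = 42.0                  # a C function given this buffer would do the same
print(a[0])                       # 42.0
```

For calling an actual C function, ctypeslib.ndpointer(dtype=np.float64, flags='C_CONTIGUOUS') can be put in the function's argtypes so the array pointer is passed directly.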
Using math.cos instead of numpy.cos should be much faster.
I believe this is a known issue with numpy.
On Thu, Nov 25, 2010 at 11:13 AM, Jean-Luc Menut wrote:
> Hello all,
>
> I have a little question about the speed of numpy vs IDL 7.0. I did a
> very simple little check by computing just a cosine
On Tue, Nov 23, 2010 at 2:50 PM, Gael Varoquaux
wrote:
> On Tue, Nov 23, 2010 at 02:47:10PM +0100, Sebastian Walter wrote:
>> Well, I don't know what the best method is to solve your problem, so
>> take the following with a grain of salt:
>> Wouldn't it be
On Tue, Nov 23, 2010 at 11:43 AM, Gael Varoquaux
wrote:
> On Tue, Nov 23, 2010 at 11:37:02AM +0100, Sebastian Walter wrote:
>> >> min_x f(x)
>> >> s.t. lo <= Ax + b <= up
>> >> 0 = g(x)
>> >> 0 <= h(x)
>
>
On Tue, Nov 23, 2010 at 11:17 AM, Gael Varoquaux
wrote:
> On Tue, Nov 23, 2010 at 11:13:23AM +0100, Sebastian Walter wrote:
>> I'm not familiar with dichotomy optimization.
>> Several techniques have been proposed to solve the problem: genetic
>> algorithms, simulated
Hello Gael,
On Tue, Nov 23, 2010 at 10:27 AM, Gael Varoquaux
wrote:
> On Tue, Nov 23, 2010 at 10:18:50AM +0100, Matthieu Brucher wrote:
>> > The problem is that I can't tell the Nelder-Mead that the smallest jump
>> > it should attempt is .5. I can set xtol to .5, but it still attempts jumps
>> >
hmm, I have just realized that I forgot to upload the new version to pypi:
it is now available on
http://pypi.python.org/pypi/algopy
On Thu, Oct 28, 2010 at 10:47 AM, Sebastian Walter
wrote:
> On Wed, Oct 27, 2010 at 10:50 PM, Nicolai Heitz wrote:
>> m 27.10.2010 02:02, schrieb
On Wed, Oct 27, 2010 at 10:50 PM, Nicolai Heitz wrote:
> m 27.10.2010 02:02, schrieb Sebastian Walter:
>
>> On Wed, Oct 27, 2010 at 12:59 AM, Pauli Virtanen wrote:
>>> Tue, 26 Oct 2010 14:24:39 -0700, Nicolai Heitz wrote:
>>>>> http://mail.scipy.org
On Wed, Oct 27, 2010 at 12:59 AM, Pauli Virtanen wrote:
> Tue, 26 Oct 2010 14:24:39 -0700, Nicolai Heitz wrote:
>> > http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>> I contacted them already, but they didn't respond so far, and I was
>> forwarded to that list which was supposed to be more
Hello Friedrich,
I have read your proposal. You describe issues that I have also
encountered several times.
I believe that your priops approach would be an improvement over the
current overloading of binary operators.
That being said, I think the issue is not so much numpy but rather the
way Pytho
is it really the covariance matrix you want to invert? Or do you want
to compute something like
x^T C^{-1} x,
where x is an array of size N and C an array of size (N,N)?
It would also be interesting to know how the covariance matrix gets computed
and what its condition number is, at least approxim
In [15]: y = x*z
In [16]: z.shape
Out[16]: (4, 1, 6)
In [17]: y.shape
Out[17]: (2, 3, 4, 5, 6)
Sebastian
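The shapes in the transcript follow from numpy's broadcasting rules; a small sketch (x's shape is my assumption, inferred from the output shape in the truncated session):

```python
import numpy as np

x = np.ones((2, 3, 4, 5, 6))
z = np.ones((4, 1, 6))
y = x * z          # trailing dims align: (4,5,6) vs (4,1,6) broadcast fine
print(y.shape)     # (2, 3, 4, 5, 6)
```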
>
> John
>
> On Sun, Aug 1, 2010 at 5:05 AM, Sebastian Walter
> wrote:
>>
>> I'm happy to announce the first official release of ALGOPY in version
I'm happy to announce the first official release of ALGOPY in version 0.2.1.
Rationale:
The purpose of ALGOPY is the evaluation of higher-order derivatives in
the forward and reverse mode of Algorithmic Differentiation (AD) using
univariate Taylor polynomial arithmetic. Particular focus a
On Sun, Jun 13, 2010 at 8:11 PM, Alan Bromborsky wrote:
> Friedrich Romstedt wrote:
>> 2010/6/13 Pauli Virtanen :
>>
>>> def tensor_contraction_single(tensor, dimensions):
>>> """Perform a single tensor contraction over the dimensions given"""
>>> swap = [x for x in range(tensor.ndim)
>>>
On Sat, Jun 12, 2010 at 3:57 PM, David Cournapeau wrote:
> On Sat, Jun 12, 2010 at 10:27 PM, Sebastian Walter
> wrote:
>> On Thu, Jun 10, 2010 at 6:48 PM, Sturla Molden wrote:
>>>
>>> I have a few radical suggestions:
>>>
>>> 1. Use ctypes as g
On Thu, Jun 10, 2010 at 6:48 PM, Sturla Molden wrote:
>
> I have a few radical suggestions:
>
> 1. Use ctypes as glue to the core DLL, so we can completely forget about
> refcounts and similar mess. Why put manual reference counting and error
> handling in the core? It's stupid.
I totally agree,
On Wed, May 26, 2010 at 12:31 PM, Pauli Virtanen wrote:
> Wed, 26 May 2010 10:50:19 +0200, Sebastian Walter wrote:
>> I'm a potential user of the C-API and therefore I'm very interested in
>> the outcome.
>> In the previous discussio
I'm a potential user of the C-API and therefore I'm very interested in
the outcome.
In the previous discussion
(http://comments.gmane.org/gmane.comp.python.numeric.general/37409)
many different views on what the new C-API "should" be were expressed.
Naturally, I wonder if the new C-API will be use
playing devil's advocate I'd say use Algorithmic Differentiation
instead of finite differences ;)
that would probably speed things up quite a lot.
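The core of forward-mode Algorithmic Differentiation is operator overloading on value/derivative pairs; a toy sketch with dual numbers (illustrative only, not how ALGOPY or any production AD tool is implemented):

```python
class Dual:
    """value a plus derivative b, i.e. a + b*eps with eps**2 == 0"""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1

y = f(Dual(2.0, 1.0))      # seed dx/dx = 1
print(y.a, y.b)            # f(2) = 17.0, f'(2) = 14.0, exact to machine precision
```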
On Tue, May 4, 2010 at 11:36 PM, Davide Lasagna wrote:
> If your x data are equispaced I would do something like this
> def derive( func, x):
> """
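For equispaced (or even non-equispaced) samples, numpy ships such a finite-difference derivative as np.gradient; a sketch:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 5)
y = x**2
dydx = np.gradient(y, x)   # central differences inside, one-sided at the ends
print(dydx)                # exactly 2*x at the interior points for a quadratic
```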
On Tue, Apr 13, 2010 at 12:29 AM, Charles R Harris
wrote:
>
>
> On Mon, Apr 12, 2010 at 4:19 PM, Travis Oliphant
> wrote:
>>
>> On Apr 11, 2010, at 4:17 PM, Sebastian Walter wrote:
>>
>> >
>> > Ermm, the reply above is quite poor, sorry about
On Sun, Apr 11, 2010 at 12:59 PM, Sebastian Walter
wrote:
> On Tue, Apr 6, 2010 at 9:16 PM, Travis Oliphant
> wrote:
>>
>> On Apr 6, 2010, at 9:08 AM, David Cournapeau wrote:
>>
>> Hi Travis,
>>
>> On Tue, Apr 6, 2010 at 7:43 AM, Travis Oliphant
>&
On Tue, Apr 6, 2010 at 9:16 PM, Travis Oliphant wrote:
>
> On Apr 6, 2010, at 9:08 AM, David Cournapeau wrote:
>
> Hi Travis,
>
> On Tue, Apr 6, 2010 at 7:43 AM, Travis Oliphant
> wrote:
>
>
> I should have some time over the next couple of weeks, and I am very
>
> interested in refactoring the N
On Fri, Mar 19, 2010 at 11:18 PM, David Warde-Farley
wrote:
> On 19-Mar-10, at 1:13 PM, Anne Archibald wrote:
>
>> I'm not knocking numpy; it does (almost) the best it can. (I'm not
>> sure of the optimality of the order in which ufuncs are executed; I
>> think some optimizations there are possib
On Thu, Mar 18, 2010 at 7:01 AM, Frank Horowitz wrote:
> Dear All,
>
> I'm working on a piece of optimisation code where it turns out to be
> mathematically convenient to have a matrix where a few pre-chosen elements
> must be computed at evaluation time for the dot product (i.e. matrix
> multi
On Sun, Feb 28, 2010 at 9:06 PM, Friedrich Romstedt
wrote:
> 2010/2/28 Sebastian Walter :
>>> I think I can use that to make my upy accept arbitrary functions, but
>>> how do you apply sin() to a TTP?
>>
>> perform truncated Taylor expansion of [y]_D = sin([x
On Sun, Feb 28, 2010 at 12:47 AM, Friedrich Romstedt
wrote:
> Sebastian, and, please, be not offended by what I wrote. I regret a
> bit my jokes ... It's simply too late at night.
no offense taken
>
> Friedrich
> ___
> NumPy-Discussion mailing list
> Nu
On Sun, Feb 28, 2010 at 12:30 AM, Friedrich Romstedt
wrote:
> 2010/2/27 Sebastian Walter :
>> I'm sorry this comment turns out to be confusing.
>
> Maybe it's not important.
>
>> It has apparently quite the contrary effect of what I wanted to achieve:
>> Si
On Sat, Feb 27, 2010 at 11:11 PM, Friedrich Romstedt
wrote:
> Ok, it took me about one hour, but here they are: Fourier-accelerated
> polynomials.
that's the spirit! ;)
>
>> python
> Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] on win32
> Type "help", "copyright", "credi
On Sat, Feb 27, 2010 at 10:02 PM, Friedrich Romstedt
wrote:
> To the core developers (of numpy.polynomial e.g.): Skip the mess and
> read the last paragraph.
>
> The other things I will post back to the list, where they belong to.
> I just didn't want to have off-topic discussion there.
>
>> I wan
Announcement:
---
I have started to implement vectorized univariate truncated Taylor
polynomial operations (add,sub,mul,div,sin,exp,...) in ANSI-C.
The interface to python is implemented by using numpy.ndarray's ctypes
functionality. Unit tests are implemented using nose.
It is B
numpy.linalg.eig is only guaranteed to return right eigenvectors.
evec is not necessarily an orthonormal matrix when there are
eigenvalues with multiplicity > 1.
For symmetric matrices you'll have mutually orthogonal eigenspaces,
but each eigenspace might be spanned by
vectors that are not orthogonal to ea
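For the symmetric case, numpy.linalg.eigh is the routine that does guarantee orthonormal eigenvectors; a sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # symmetric
w, v = np.linalg.eigh(A)               # real eigenvalues, orthonormal eigenvectors
print(np.allclose(v.T @ v, np.eye(2)))  # True
```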
0.60908423 0.79772095]
x2= [1.0 2.0 3.0]
z2=x2*y2 [0.493770370747 1.21816846399 2.39316283707]
y2*=x2 [ 0.49377037 1.21816846 2.39316284]
end output -
On Tue, Jan 12, 2010 at 7:38 PM, Robert Kern wrote:
> On Tue, Jan 12, 2010 at 12:31, Sebastian Walter
> wrot
On Tue, Jan 12, 2010 at 7:38 PM, Robert Kern wrote:
> On Tue, Jan 12, 2010 at 12:31, Sebastian Walter
> wrote:
>> On Tue, Jan 12, 2010 at 7:09 PM, Robert Kern wrote:
>>> On Tue, Jan 12, 2010 at 12:05, Sebastian Walter
>>> wrote:
>>>> Hello,
>>>
On Tue, Jan 12, 2010 at 7:09 PM, Robert Kern wrote:
> On Tue, Jan 12, 2010 at 12:05, Sebastian Walter
> wrote:
>> Hello,
>> I have a question about the augmented assignment statements *=, +=, etc.
>> Apparently, the casting of types is not working correctly. Is this
Hello,
I have a question about the augmented assignment statements *=, +=, etc.
Apparently, the casting of types is not working correctly. Is this
known or intended behavior of numpy?
(I'm using numpy.__version__ = '1.4.0.dev7039' on this machine but I
remember a recent checkout of numpy yielded
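The behavior in question: an augmented assignment keeps the left-hand array's dtype, so a float result has to be cast back into an int array; current numpy refuses that cast instead of truncating silently. A sketch (using modern numpy, where this raises):

```python
import numpy as np

x = np.arange(3)              # integer array
try:
    x += 0.5                  # float64 result cannot be cast back to int in-place
except TypeError as exc:
    print("refused:", type(exc).__name__)

y = x.astype(float)
y += 0.5                      # fine once the array is float
print(y)                      # [0.5 1.5 2.5]
```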
...
where each function has a preprocessing part and a postprocessing part.
After the preprocessing, call the original ufuncs on the base class
object, e.g. __add__.
Sebastian
On Mon, Oct 19, 2009 at 1:55 PM, Darren Dale wrote:
> On Mon, Oct 19, 2009 at 3:10 AM, Sebastian Walter
> wrote:
>
On Tue, Oct 20, 2009 at 5:45 AM, Anne Archibald
wrote:
> 2009/10/19 Sebastian Walter :
>>
>> I'm all for generic (u)funcs since they might come handy for me since
>> I'm doing lots of operation on arrays of polynomials.
>
> Just as a side note, if yo
On Sat, Oct 17, 2009 at 2:49 PM, Darren Dale wrote:
> numpy's functions, especially ufuncs, have had some ability to support
> subclasses through the ndarray.__array_wrap__ method, which provides
> masked arrays or quantities (for example) with an opportunity to set
> the class and metadata of the
On Mon, Oct 5, 2009 at 4:52 PM, wrote:
> On Mon, Oct 5, 2009 at 5:37 AM, Sebastian Walter
> wrote:
>> On Fri, Oct 2, 2009 at 10:40 PM, wrote:
>>> On Fri, Oct 2, 2009 at 3:38 PM, Charles R Harris
>>> wrote:
>>>>
>>>>
>>>>
On Fri, Oct 2, 2009 at 10:40 PM, wrote:
> On Fri, Oct 2, 2009 at 3:38 PM, Charles R Harris
> wrote:
>>
>>
>> On Fri, Oct 2, 2009 at 12:30 PM, wrote:
>>>
>>> On Fri, Oct 2, 2009 at 2:09 PM, Charles R Harris
>>> wrote:
>>> >
>>> >
>>> > On Fri, Oct 2, 2009 at 11:35 AM, Charles R Harris
>>> > wr
Sorry if this is a duplicate; it seems that my last mail got lost...
Is there something to be aware of when sending a mail to the numpy
mailing list?
On Tue, Sep 22, 2009 at 9:42 AM, Sebastian Walter
wrote:
> This is somewhat similar to the question about fixed-point arithmetic
> earl
This is somewhat similar to the question about fixed-point arithmetic
earlier on this mailing list.
I need to do computations on arrays whose elements are truncated polynomials.
At the moment, I have implemented the univariate truncated
polynomials as objects of a class UTPS.
The class basicall
In [45]: x[: -0 or None]
Out[45]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [46]: x[: -1 or None]
Out[46]: array([0, 1, 2, 3, 4, 5, 6, 7, 8])
works fine without slice()
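Wrapped in a helper, the `-n or None` trick reads (the wrapper name is mine, for illustration):

```python
import numpy as np

def drop_last(x, n):
    """drop the last n elements; n == 0 keeps everything"""
    return x[: -n or None]

x = np.arange(10)
print(drop_last(x, 1))    # [0 1 2 3 4 5 6 7 8]
print(drop_last(x, 0))    # [0 1 2 3 4 5 6 7 8 9]
```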
On Wed, Aug 19, 2009 at 2:54 PM, Citi, Luca wrote:
> Another solution (elegant?? readable??) :
>>> x[slice(-n or None)] # with n
I'm sure there is a better solution:
In [1]: x = numpy.array([i for i in range(10)])
In [2]: foo = lambda n: -n if n!=0 else None
In [3]: x[:foo(1)]
Out[3]: array([0, 1, 2, 3, 4, 5, 6, 7, 8])
In [4]: x[:foo(0)]
Out[4]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
On Wed, Aug 19, 2009
th Goodman wrote:
>>
>> On Tue, Jun 2, 2009 at 1:42 AM, Sebastian Walter
>> wrote:
>> > Hello,
>> > Multiplying a Python float to a numpy.array of objects works flawlessly
>> > but not with a numpy.float64 .
>> > I tried numpy version '1.0.
On Fri, Jun 5, 2009 at 11:58 AM, David
Cournapeau wrote:
> Sebastian Walter wrote:
>> On Thu, Jun 4, 2009 at 10:56 PM, Chris Colbert wrote:
>>
>>> I should update after reading the thread Sebastian linked:
>>>
>>> The current 1.3 version of numpy (don
On Thu, Jun 4, 2009 at 10:56 PM, Chris Colbert wrote:
> I should update after reading the thread Sebastian linked:
>
> The current 1.3 version of numpy (don't know about previous versions) uses
> the optimized Atlas BLAS routines for numpy.dot() if numpy was compiled with
> these libraries. I've ve
Have a look at this thread:
http://www.mail-archive.com/numpy-discussion@scipy.org/msg13085.html
The speed difference is probably due to the fact that the matrix
multiplication does not call an optimized BLAS routine, e.g.
the ATLAS BLAS.
Sebastian
On Thu, Jun 4, 2009 at 3:36 PM, Davi
On Tue, Jun 2, 2009 at 4:18 PM, Darren Dale wrote:
>
>
> On Tue, Jun 2, 2009 at 10:09 AM, Keith Goodman wrote:
>>
>> On Tue, Jun 2, 2009 at 1:42 AM, Sebastian Walter
>> wrote:
>> > Hello,
>> > Multiplying a Python float to a numpy.array of
Hello,
Multiplying a numpy.array of objects by a Python float works flawlessly,
but not by a numpy.float64.
I tried numpy version '1.0.4' on a 32 bit Linux and '1.2.1' on a 64
bit Linux: both raise the same exception.
Is this a (known) bug?
-- test.py -
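test.py itself is cut off in the archive; a reconstruction of the kind of code that triggers the report (the Poly class is my hypothetical stand-in, not the original):

```python
import numpy as np

class Poly:
    """toy object with overloaded multiplication (hypothetical stand-in)"""
    def __init__(self, c):
        self.c = c
    def __mul__(self, other):
        return Poly(self.c * other)
    __rmul__ = __mul__

a = np.array([Poly(1.0), Poly(2.0)], dtype=object)
b = 2.0 * a                   # plain Python float: works
c = np.float64(2.0) * a       # numpy scalar: raised in numpy 1.0.4 / 1.2.1
print(b[0].c, c[0].c)
```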
I'd be interested to see the benchmark ;)
On Thu, May 28, 2009 at 4:14 PM, Nicolas Rougier
wrote:
>
> Hi,
>
> I'm now testing dot product and using the following:
>
> import numpy as np, scipy.sparse as sp
>
> A = np.matrix(np.zeros((5,10)))
> B = np.zeros((10,1))
> print (A*B).shape
> print np
2009/5/18 Stéfan van der Walt :
> 2009/5/18 Sebastian Walter :
>> B = numpy.dot(A.T, A)
>
> This multiplication should be avoided whenever possible -- you are
> effectively squaring your condition number.
Indeed.
>
> In the case where you have more rows than columns,
Alternatively, to solve A x = b you could do
import numpy
import numpy.linalg
B = numpy.dot(A.T, A)
c = numpy.dot(A.T, b)
x = numpy.linalg.solve(B, c)
This is not the most efficient way to do it but at least you know
exactly what's going on in your code.
On Sun, May 17, 2009 at 7:21 PM, wrote:
> O
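As pointed out elsewhere in the thread, forming A^T A squares the condition number; numpy.linalg.lstsq solves the least-squares problem directly. A sketch comparing the two (modern numpy syntax):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

x_ne = np.linalg.solve(A.T @ A, A.T @ b)       # normal equations
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)   # SVD-based, better conditioned
print(np.allclose(x_ne, x_ls))                 # True on this well-conditioned example
```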
On Wed, May 13, 2009 at 10:18 PM, David J Strozzi wrote:
> Hi,
>
> [You may want to edit the numpy homepage numpy.scipy.org to tell
> people they must subscribe to post, and adding a link to
> http://www.scipy.org/Mailing_Lists]
>
>
> Many of you probably know of the interpreter yorick by Dave Mun
hi mathew,
1) what does it mean if a value is None? I.e., what is larger: None or 3?
The first thing I would do is convert the None to a number.
2) Are your arrays integer arrays or double arrays?
It's much easier if they are doubles because then you could use
standard methods for NLP problems,
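For point 1, one way to do the conversion is to treat None as "missing" via NaN (whether that ordering is right depends on the actual problem):

```python
import numpy as np

vals = [1.0, None, 3.0]
arr = np.array([v if v is not None else np.nan for v in vals])
print(np.nanmax(arr))     # 3.0, ignoring the missing entry
```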
I tried looking at your question but ... kind of unusable without some
documentation.
You need to give at least the following information:
what kind of optimization problem?
LP, NLP, Mixed Integer LP, Stochastic, semi-infinite, semidefinite?
Most solvers require the problem in the following form
+1 to that
Often, one is only interested in the largest or smallest
eigenvalues/vectors of a problem. Then the method of choice is an
iterative solver, e.g. the Lanczos algorithm.
If only the largest eigenvalue/vector is needed, you could try the
power iteration.
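Power iteration is only a few lines in numpy; a sketch:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
v = np.ones(2)
for _ in range(100):
    v = A @ v
    v /= np.linalg.norm(v)   # renormalize each step
lam = v @ A @ v              # Rayleigh quotient estimate
print(lam)                   # ~4.618, the largest eigenvalue of A
```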
On Wed, Apr 29, 2009 at 7:49 AM, Zh
I'm looking for a library that has linear algebra routines on
objects of a self-written class (with overloaded operators +,-,*,/).
An example would be truncated Taylor polynomials.
One can perform all operations +,-,*,/ on truncated Taylor polynomials.
Often it is also necessary to perform linear
There are several possibilities, some of them are listed on
http://en.wikipedia.org/wiki/Automatic_differentiation
== pycppad
http://www.seanet.com/~bradbell/pycppad/index.xml
pycppad is a wrapper of the C++ library CppAD ( http://www.coin-or.org/CppAD/ )
the wrapper can do up to second order de
On Sun, Feb 1, 2009 at 12:24 AM, Robert Kern wrote:
> On Sat, Jan 31, 2009 at 10:30, Sebastian Walter
> wrote:
>> Wouldn't it be nice to have numpy a little more generic?
>> All that would be needed was a little check of the arguments.
>>
>> If I do:
>> n
Wouldn't it be nice to have numpy a little more generic?
All that would be needed is a little check of the arguments.
If I do:
numpy.trace(4)
shouldn't numpy be smart enough to regard the 4 as a 1x1 array?
numpy.sin(4) works!
and if
x = my_class(4)
wouldn't it be nice if
numpy.trace(x)
would c
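Until numpy does such a check itself, numpy.atleast_2d gives the behavior described; a sketch:

```python
import numpy as np

print(np.trace(np.atleast_2d(4)))   # 4: the scalar is viewed as a 1x1 array
print(np.sin(4))                    # ufuncs already accept scalars directly
```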
Hey,
What is the best solution to get this code working?
Does anyone have a good idea?
-- test.py ---
import numpy
import numpy.linalg
class afloat:
    def __init__(self, x):
        self.x = x
    def __add__(self, rhs):