Re: [Numpy-discussion] how is y += x computed when y.strides = (0, 8) and x.strides=(16, 8) ?

2012-10-17 Thread Sebastian Walter
I'd like to have a look at the implementation of iadd in numpy, but I'm having a really hard time finding the corresponding code. I'm basically stuck at https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/number.c#L487 Could someone give me a pointer to where to find it? Respectively,

[Numpy-discussion] how is y += x computed when y.strides = (0, 8) and x.strides=(16, 8) ?

2012-08-31 Thread Sebastian Walter
Hi, I'm using numpy 1.6.1 on Ubuntu 12.04.1 LTS. Code that used to work with an older version of numpy now fails with an error. Were there any changes in the way in-place operations like +=, *=, etc. work on arrays with non-standard strides? For the script: --- start of code --- impor
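
The situation in the subject line can be reproduced with a zero-stride view. A minimal sketch of the setup (my own construction for illustration, not the truncated script above):

--- start of sketch ---
import numpy as np
from numpy.lib.stride_tricks import as_strided

x = np.arange(6, dtype=np.float64).reshape(3, 2)   # C-contiguous, strides (16, 8)
row = np.zeros(2)
y = as_strided(row, shape=(3, 2), strides=(0, 8))  # every "row" of y is the same memory

print(x.strides, y.strides)                        # (16, 8) (0, 8)

# y += x would add all three rows of x into the same two doubles, so the
# outcome depends on how the ufunc iterates over the self-overlapping output;
# this is exactly the behavior the thread reports as having changed between
# numpy versions.
--- end of sketch ---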

[Numpy-discussion] ctypes and numpy

2010-12-07 Thread Sebastian Walter
Hello all, I'd like to call a Python function from a C++ code. The Python function has numpy.ndarrays as input. I figured that the easiest way would be to use ctypes. However, I can't get numpy and ctypes to work together. --- run.c #include #include void run(PyArrayObje
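
The C snippet above is cut off by the preview. Purely as a sketch of the ctypes-based direction being asked about, here is the Python side of passing an ndarray through a C callback; the library name and the C signature void run(void (*cb)(double *data, int n)) are hypothetical, not taken from the original post:

--- start of sketch ---
import ctypes
import numpy as np

# matches the hypothetical C signature: void run(void (*cb)(double *data, int n))
CALLBACK = ctypes.CFUNCTYPE(None, ctypes.POINTER(ctypes.c_double), ctypes.c_int)

def py_callback(data_ptr, n):
    # wrap the raw C pointer as an ndarray without copying
    arr = np.ctypeslib.as_array(data_ptr, shape=(n,))
    print("callback received", arr)

lib = ctypes.CDLL("./librun.so")        # hypothetical shared library
lib.run.argtypes = [CALLBACK]
lib.run(CALLBACK(py_callback))          # the C code calls back into Python with the data
--- end of sketch ---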

Re: [Numpy-discussion] numpy speed question

2010-11-25 Thread Sebastian Walter
Using math.cos instead of numpy.cos should be much faster for scalar arguments; numpy's per-call overhead on scalars is a known issue. On Thu, Nov 25, 2010 at 11:13 AM, Jean-Luc Menut wrote: > Hello all, > > I have a little question about the speed of numpy vs IDL 7.0. I did a > very simple little check by computing just a cosine
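
The scalar overhead is easy to measure; an illustrative check (timings will vary by machine):

--- start of sketch ---
import timeit

print(timeit.timeit("math.cos(0.5)", setup="import math", number=1_000_000))
print(timeit.timeit("numpy.cos(0.5)", setup="import numpy", number=1_000_000))
# math.cos wins on scalars because numpy.cos pays per-call array machinery;
# on large arrays the comparison reverses.
--- end of sketch ---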

Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Sebastian Walter
On Tue, Nov 23, 2010 at 2:50 PM, Gael Varoquaux wrote: > On Tue, Nov 23, 2010 at 02:47:10PM +0100, Sebastian Walter wrote: >> Well, I don't know what the best method is to solve your problem, so >> take the following with a grain of salt: >> Wouldn't it be

Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Sebastian Walter
On Tue, Nov 23, 2010 at 11:43 AM, Gael Varoquaux wrote: > On Tue, Nov 23, 2010 at 11:37:02AM +0100, Sebastian Walter wrote: >> >> min_x f(x) >> >> s.t.   lo <= Ax + b <= up >> >>            0 = g(x) >> >>            0 <= h(x) >

Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Sebastian Walter
On Tue, Nov 23, 2010 at 11:17 AM, Gael Varoquaux wrote: > On Tue, Nov 23, 2010 at 11:13:23AM +0100, Sebastian Walter wrote: >> I'm not familiar with dichotomy optimization. >> Several techniques have been proposed to solve the problem: genetic >> algorithms, simulated

Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Sebastian Walter
Hello Gael, On Tue, Nov 23, 2010 at 10:27 AM, Gael Varoquaux wrote: > On Tue, Nov 23, 2010 at 10:18:50AM +0100, Matthieu Brucher wrote: >> > The problem is that I can't tell the Nelder-Mead that the smallest jump >> > it should attempt is .5. I can set xtol to .5, but it still attemps jumps >> >

Re: [Numpy-discussion] problems with numdifftools

2010-10-28 Thread Sebastian Walter
hmm, I have just realized that I forgot to upload the new version to pypi: it is now available on http://pypi.python.org/pypi/algopy On Thu, Oct 28, 2010 at 10:47 AM, Sebastian Walter wrote: > On Wed, Oct 27, 2010 at 10:50 PM, Nicolai Heitz wrote: >> m 27.10.2010 02:02, schrieb

Re: [Numpy-discussion] problems with numdifftools

2010-10-28 Thread Sebastian Walter
On Wed, Oct 27, 2010 at 10:50 PM, Nicolai Heitz wrote: > m 27.10.2010 02:02, schrieb Sebastian Walter: > >>  On Wed, Oct 27, 2010 at 12:59 AM, Pauli Virtanen   wrote: >>>  Tue, 26 Oct 2010 14:24:39 -0700, Nicolai Heitz wrote: >>>>>    http://mail.scipy.org

Re: [Numpy-discussion] problems with numdifftools

2010-10-27 Thread Sebastian Walter
On Wed, Oct 27, 2010 at 12:59 AM, Pauli Virtanen wrote: > Tue, 26 Oct 2010 14:24:39 -0700, Nicolai Heitz wrote: >> >  http://mail.scipy.org/mailman/listinfo/scipy-user >> >> I contacted them already but they didn't responded so far and I was >> forwarded to that list which was supposed to be more

Re: [Numpy-discussion] Asking for opinions: Priops

2010-09-22 Thread Sebastian Walter
Hello Friedrich, I have read your proposal. You describe issues that I have also encountered several times. I believe that your priops approach would be an improvement over the current overloading of binary operators. That being said, I think the issue is not so much numpy but rather the way Pytho

Re: [Numpy-discussion] inversion of large matrices

2010-09-01 Thread Sebastian Walter
is it really the covariance matrix you want to invert? Or do you want to compute something like x^T C^{-1} x, where x is an array of size N and C an array of size (N,N)? It would also be interesting to know how the covariance matrix gets computed and what its condition number is, at least approxim
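
The underlying point: when the goal is a quadratic form such as x^T C^{-1} x, a factor-and-solve is usually preferable to forming the inverse. A small illustration with a made-up SPD matrix:

--- start of sketch ---
import numpy as np

rng = np.random.default_rng(0)
N = 500
M = rng.standard_normal((N, N))
C = M @ M.T + N * np.eye(N)            # symmetric positive definite stand-in for a covariance
x = rng.standard_normal(N)

q_inv   = x @ np.linalg.inv(C) @ x     # explicit inverse: avoid when possible
q_solve = x @ np.linalg.solve(C, x)    # solve C z = x instead: cheaper and more accurate

print(np.allclose(q_inv, q_solve))
--- end of sketch ---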

Re: [Numpy-discussion] [ANN] ALGOPY 0.21, algorithmic differentiation in Python

2010-08-01 Thread Sebastian Walter
,6))) In [15]: y = x*z In [16]: z.shape Out[16]: (4, 1, 6) In [17]: y.shape Out[17]: (2, 3, 4, 5, 6) Sebastian > > John > > On Sun, Aug 1, 2010 at 5:05 AM, Sebastian Walter > wrote: >> >> I'm happy to announce the first official release of ALGOPY in version

[Numpy-discussion] [ANN] ALGOPY 0.21, algorithmic differentiation in Python

2010-08-01 Thread Sebastian Walter
I'm happy to announce the first official release of ALGOPY in version 0.2.1. Rationale: The purpose of ALGOPY is the evaluation of higher-order derivatives in the forward and reverse mode of Algorithmic Differentiation (AD) using univariate Taylor polynomial arithmetic. Particular focus a
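
The arithmetic being announced operates on truncated Taylor coefficients. A bare-bones illustration of the product rule in plain numpy (an illustration of the idea, not the ALGOPY API):

--- start of sketch ---
import numpy as np

def ttp_mul(x, y):
    """Product of truncated Taylor polynomials given as coefficient arrays
    x = [x_0, ..., x_D]:  z_d = sum_{j=0}^{d} x_j * y_{d-j}."""
    z = np.zeros(len(x))
    for d in range(len(x)):
        z[d] = np.dot(x[:d + 1], y[d::-1])
    return z

# (1 + t)**2 = 1 + 2 t + t**2, truncated at degree 3
print(ttp_mul(np.array([1.0, 1, 0, 0]), np.array([1.0, 1, 0, 0])))   # [1. 2. 1. 0.]
--- end of sketch ---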

Re: [Numpy-discussion] Tensor contraction

2010-06-14 Thread Sebastian Walter
On Sun, Jun 13, 2010 at 8:11 PM, Alan Bromborsky wrote: > Friedrich Romstedt wrote: >> 2010/6/13 Pauli Virtanen : >> >>> def tensor_contraction_single(tensor, dimensions): >>>    """Perform a single tensor contraction over the dimensions given""" >>>    swap = [x for x in range(tensor.ndim) >>>  

Re: [Numpy-discussion] NumPy re-factoring project

2010-06-12 Thread Sebastian Walter
On Sat, Jun 12, 2010 at 3:57 PM, David Cournapeau wrote: > On Sat, Jun 12, 2010 at 10:27 PM, Sebastian Walter > wrote: >> On Thu, Jun 10, 2010 at 6:48 PM, Sturla Molden wrote: >>> >>> I have a few radical suggestions: >>> >>> 1. Use ctypes as g

Re: [Numpy-discussion] NumPy re-factoring project

2010-06-12 Thread Sebastian Walter
On Thu, Jun 10, 2010 at 6:48 PM, Sturla Molden wrote: > > I have a few radical suggestions: > > 1. Use ctypes as glue to the core DLL, so we can completely forget about > refcounts and similar mess. Why put manual reference counting and error > handling in the core? It's stupid. I totally agree,

Re: [Numpy-discussion] Introduction to Scott, Jason, and (possibly) others from Enthought

2010-05-26 Thread Sebastian Walter
On Wed, May 26, 2010 at 12:31 PM, Pauli Virtanen wrote: > Wed, 26 May 2010 10:50:19 +0200, Sebastian Walter wrote: >> I'm a potential user of the C-API and therefore I'm very interested in >> the outcome. >> In the previous discussio

Re: [Numpy-discussion] Introduction to Scott, Jason, and (possibly) others from Enthought

2010-05-26 Thread Sebastian Walter
I'm a potential user of the C-API and therefore I'm very interested in the outcome. In the previous discussion (http://comments.gmane.org/gmane.comp.python.numeric.general/37409) many different views on what the new C-API "should" be were expressed. Naturally, I wonder if the new C-API will be use

Re: [Numpy-discussion] Improvement of performance

2010-05-04 Thread Sebastian Walter
playing devil's advocate I'd say use Algorithmic Differentiation instead of finite differences ;) that would probably speed things up quite a lot. On Tue, May 4, 2010 at 11:36 PM, Davide Lasagna wrote: > If your x data are equispaced I would do something like this > def derive( func, x): > """
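
For reference, the finite-difference route under discussion, in its simplest central-difference form (a generic sketch, not the code from the quoted mail):

--- start of sketch ---
import numpy as np

def central_diff(f, x, h=1e-6):
    # O(h**2) central difference; accuracy is limited by the step-size trade-off,
    # which is exactly what algorithmic differentiation avoids
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 1.3
print(central_diff(np.sin, x), np.cos(x))   # approximate vs. exact derivative
--- end of sketch ---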

Re: [Numpy-discussion] Math Library

2010-04-18 Thread Sebastian Walter
On Tue, Apr 13, 2010 at 12:29 AM, Charles R Harris wrote: > > > On Mon, Apr 12, 2010 at 4:19 PM, Travis Oliphant > wrote: >> >> On Apr 11, 2010, at 4:17 PM, Sebastian Walter wrote: >> >> > >> > Ermm, the reply above is quite poor, sorry about

Re: [Numpy-discussion] Math Library

2010-04-11 Thread Sebastian Walter
On Sun, Apr 11, 2010 at 12:59 PM, Sebastian Walter wrote: > On Tue, Apr 6, 2010 at 9:16 PM, Travis Oliphant > wrote: >> >> On Apr 6, 2010, at 9:08 AM, David Cournapeau wrote: >> >> Hi Travis, >> >> On Tue, Apr 6, 2010 at 7:43 AM, Travis Oliphant >

Re: [Numpy-discussion] Math Library

2010-04-11 Thread Sebastian Walter
On Tue, Apr 6, 2010 at 9:16 PM, Travis Oliphant wrote: > > On Apr 6, 2010, at 9:08 AM, David Cournapeau wrote: > > Hi Travis, > > On Tue, Apr 6, 2010 at 7:43 AM, Travis Oliphant > wrote: > > > I should have some time over the next couple of weeks, and I am very > > interested in refactoring the N

Re: [Numpy-discussion] [OT] Starving CPUs article featured in IEEE's ComputingNow portal

2010-03-21 Thread Sebastian Walter
On Fri, Mar 19, 2010 at 11:18 PM, David Warde-Farley wrote: > On 19-Mar-10, at 1:13 PM, Anne Archibald wrote: > >> I'm not knocking numpy; it does (almost) the best it can. (I'm not >> sure of the optimality of the order in which ufuncs are executed; I >> think some optimizations there are possib

Re: [Numpy-discussion] evaluating a function in a matrix element???

2010-03-19 Thread Sebastian Walter
On Thu, Mar 18, 2010 at 7:01 AM, Frank Horowitz wrote: > Dear All, > > I'm working on a piece of optimisation code where it turns out to be > mathematically convenient to have a matrix where a few pre-chosen elements > must be computed at evaluation time for the dot product (i.e. matrix > multi

Re: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions

2010-02-28 Thread Sebastian Walter
On Sun, Feb 28, 2010 at 9:06 PM, Friedrich Romstedt wrote: > 2010/2/28 Sebastian Walter : >>> I think I can use that to make my upy accept arbitrary functions, but >>> how do you apply sin() to a TTP? >> >> perform truncated Taylor expansion of  [y]_D = sin([x
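
The truncated Taylor expansion of sin is computed jointly with cos via a standard AD recurrence: with s_0 = sin(x_0) and c_0 = cos(x_0), one has s_d = (1/d) * sum_{j=1}^{d} j x_j c_{d-j} and c_d = -(1/d) * sum_{j=1}^{d} j x_j s_{d-j}. A sketch under the same coefficient convention as the product example above:

--- start of sketch ---
import numpy as np

def ttp_sin_cos(x):
    """Taylor coefficients of sin(x(t)) and cos(x(t)) for x = [x_0, ..., x_D]."""
    D = len(x)
    s, c = np.zeros(D), np.zeros(D)
    s[0], c[0] = np.sin(x[0]), np.cos(x[0])
    for d in range(1, D):
        j = np.arange(1, d + 1)
        s[d] =  np.dot(j * x[1:d + 1], c[d - 1::-1]) / d
        c[d] = -np.dot(j * x[1:d + 1], s[d - 1::-1]) / d
    return s, c

# x(t) = 1 + t: coefficients of sin(1 + t) are sin(1), cos(1), -sin(1)/2, ...
print(ttp_sin_cos(np.array([1.0, 1.0, 0.0, 0.0]))[0])
--- end of sketch ---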

Re: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions

2010-02-28 Thread Sebastian Walter
On Sun, Feb 28, 2010 at 12:47 AM, Friedrich Romstedt wrote: > Sebastian, and, please, be not offended by what I wrote.  I regret a > bit my jokes ... It's simply too late at night. no offense taken > > Friedrich > ___ > NumPy-Discussion mailing list > Nu

Re: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions

2010-02-28 Thread Sebastian Walter
On Sun, Feb 28, 2010 at 12:30 AM, Friedrich Romstedt wrote: > 2010/2/27 Sebastian Walter : >> I'm sorry this comment turns out to be confusing. > > Maybe it's not important. > >> It has apparently quite the contrary effect of what I wanted to achieve: >> Si

Re: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions

2010-02-27 Thread Sebastian Walter
On Sat, Feb 27, 2010 at 11:11 PM, Friedrich Romstedt wrote: > Ok, it took me about one hour, but here they are: Fourier-accelerated > polynomials. that's the spirit! ;) > >> python > Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] on win32 > Type "help", "copyright", "credi

Re: [Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions

2010-02-27 Thread Sebastian Walter
On Sat, Feb 27, 2010 at 10:02 PM, Friedrich Romstedt wrote: > To the core developers (of numpy.polynomial e.g.): Skip the mess and > read the last paragraph. > > The other things I will post back to the list, where they belong to. > I just didn't want to have off-topic discussion there. > >> I wan

[Numpy-discussion] [ANN]: Taylorpoly, an implementation of vectorized Taylor polynomial operations and request for opinions

2010-02-27 Thread Sebastian Walter
Announcement: --- I have started to implement vectorized univariate truncated Taylor polynomial operations (add,sub,mul,div,sin,exp,...) in ANSI-C. The interface to python is implemented by using numpy.ndarray's ctypes functionality. Unit tests are implemented using nose. It is B

Re: [Numpy-discussion] linalg.eig getting the original matrix back ?

2010-01-15 Thread Sebastian Walter
numpy.linalg.eig guarantees to return right eigenvectors. evec is not necessarily an orthonormal matrix when there are eigenvalues with multiplicity >1. For symmetric matrices you'll have mutually orthogonal eigenspaces, but each eigenspace might be spanned by vectors that are not orthogonal to ea
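
For a diagonalizable matrix the original can be reconstructed from the eigenvalues and right eigenvectors; a minimal check:

--- start of sketch ---
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])
w, V = np.linalg.eig(A)                    # columns of V satisfy A @ V = V @ diag(w)
A_rec = V @ np.diag(w) @ np.linalg.inv(V)  # valid when V is invertible (A diagonalizable)
print(np.allclose(A, A_rec))               # True
--- end of sketch ---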

Re: [Numpy-discussion] wrong casting of augmented assignment statements

2010-01-14 Thread Sebastian Walter
0.60908423 0.79772095] x2= [1.0 2.0 3.0] z2=x2*y2 [0.493770370747 1.21816846399 2.39316283707] y2*=x2 [ 0.49377037 1.21816846 2.39316284] end output - On Tue, Jan 12, 2010 at 7:38 PM, Robert Kern wrote: > On Tue, Jan 12, 2010 at 12:31, Sebastian Walter > wrot

Re: [Numpy-discussion] wrong casting of augmented assignment statements

2010-01-12 Thread Sebastian Walter
On Tue, Jan 12, 2010 at 7:38 PM, Robert Kern wrote: > On Tue, Jan 12, 2010 at 12:31, Sebastian Walter > wrote: >> On Tue, Jan 12, 2010 at 7:09 PM, Robert Kern wrote: >>> On Tue, Jan 12, 2010 at 12:05, Sebastian Walter >>> wrote: >>>> Hello, >>>

Re: [Numpy-discussion] wrong casting of augmented assignment statements

2010-01-12 Thread Sebastian Walter
On Tue, Jan 12, 2010 at 7:09 PM, Robert Kern wrote: > On Tue, Jan 12, 2010 at 12:05, Sebastian Walter > wrote: >> Hello, >> I have a question about the augmented assignment statements *=, +=, etc. >> Apparently, the casting of types is not working correctly. Is this

[Numpy-discussion] wrong casting of augmented assignment statements

2010-01-12 Thread Sebastian Walter
Hello, I have a question about the augmented assignment statements *=, +=, etc. Apparently, the casting of types is not working correctly. Is this known resp. intended behavior of numpy? (I'm using numpy.__version__ = '1.4.0.dev7039' on this machine but I remember a recent checkout of numpy yielded
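
The behavior in question: an augmented assignment keeps the dtype of the left-hand array and casts the result back into it. A small illustration (recent numpy versions refuse the unsafe cast instead of truncating silently, as the 1.4-era version discussed here did):

--- start of sketch ---
import numpy as np

x = np.arange(3)                     # integer dtype
y = np.array([1.5, 2.5, 3.5])        # float64

print((x * y).dtype)                 # float64: ordinary promotion applies
try:
    x *= y                           # in-place: result must be cast back to x's integer dtype
    print(x)                         # old numpy: silently truncated to [0 2 7]
except TypeError as err:
    print("refused unsafe in-place cast:", err)
--- end of sketch ---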

Re: [Numpy-discussion] Another suggestion for making numpy's functions generic

2009-10-20 Thread Sebastian Walter
... where each function has a preprocessing part and a post processing part. After the preprocessing call the original ufuncs on the base class object, e.g. __add__ Sebastian On Mon, Oct 19, 2009 at 1:55 PM, Darren Dale wrote: > On Mon, Oct 19, 2009 at 3:10 AM, Sebastian Walter > wrote: >

Re: [Numpy-discussion] Another suggestion for making numpy's functions generic

2009-10-20 Thread Sebastian Walter
On Tue, Oct 20, 2009 at 5:45 AM, Anne Archibald wrote: > 2009/10/19 Sebastian Walter : >> >> I'm all for generic (u)funcs since they might come handy for me since >> I'm doing lots of operation on arrays of polynomials. > > Just as a side note, if yo

Re: [Numpy-discussion] Another suggestion for making numpy's functions generic

2009-10-19 Thread Sebastian Walter
On Sat, Oct 17, 2009 at 2:49 PM, Darren Dale wrote: > numpy's functions, especially ufuncs, have had some ability to support > subclasses through the ndarray.__array_wrap__ method, which provides > masked arrays or quantities (for example) with an opportunity to set > the class and metadata of the
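
For context, the hook under discussion: __array_wrap__ lets a subclass post-process ufunc output, e.g. to re-attach metadata. A toy sketch (not the proposal itself; newer numpy versions pass an extra argument, hence the defaulted parameter):

--- start of sketch ---
import numpy as np

class Tagged(np.ndarray):
    """Toy ndarray subclass that carries a 'tag' attribute through ufuncs."""

    def __new__(cls, data, tag=""):
        obj = np.asarray(data).view(cls)
        obj.tag = tag
        return obj

    def __array_finalize__(self, obj):
        # called whenever a view or copy of the subclass is created
        self.tag = getattr(obj, "tag", "")

    def __array_wrap__(self, out_arr, context=None, return_scalar=False):
        # post-processing hook for ufunc results
        out_arr = out_arr.view(type(self))
        out_arr.tag = self.tag
        return out_arr

x = Tagged([1.0, 4.0, 9.0], tag="lengths")
print(np.sqrt(x).tag)   # the metadata survives the ufunc call
--- end of sketch ---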

Re: [Numpy-discussion] poly class question

2009-10-05 Thread Sebastian Walter
On Mon, Oct 5, 2009 at 4:52 PM, wrote: > On Mon, Oct 5, 2009 at 5:37 AM, Sebastian Walter > wrote: >> On Fri, Oct 2, 2009 at 10:40 PM, wrote: >>> On Fri, Oct 2, 2009 at 3:38 PM, Charles R Harris >>> wrote: >>>> >>>> >>>>

Re: [Numpy-discussion] poly class question

2009-10-05 Thread Sebastian Walter
On Fri, Oct 2, 2009 at 10:40 PM, wrote: > On Fri, Oct 2, 2009 at 3:38 PM, Charles R Harris > wrote: >> >> >> On Fri, Oct 2, 2009 at 12:30 PM, wrote: >>> >>> On Fri, Oct 2, 2009 at 2:09 PM, Charles R Harris >>> wrote: >>> > >>> > >>> > On Fri, Oct 2, 2009 at 11:35 AM, Charles R Harris >>> > wr

Re: [Numpy-discussion] polynomial ring dtype

2009-09-22 Thread Sebastian Walter
sorry if this is a duplicate, it seems that my last mail got lost... is there something to be aware of when sending mail to the numpy mailing list? On Tue, Sep 22, 2009 at 9:42 AM, Sebastian Walter wrote: > This is somewhat similar to the question about fixed-point arithmetic > earl

[Numpy-discussion] polynomial ring dtype

2009-09-22 Thread Sebastian Walter
This is somewhat similar to the question about fixed-point arithmetic earlier on this mailing list. I need to do computations on arrays whose elements are truncated polynomials. At the momement, I have implemented the univariate truncated polynomials as objects of a class UTPS. The class basicall

Re: [Numpy-discussion] why does b[:-0] not work, and is there an elegant solution?

2009-08-19 Thread Sebastian Walter
In [45]: x[: -0 or None] Out[45]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) In [46]: x[: -1 or None] Out[46]: array([0, 1, 2, 3, 4, 5, 6, 7, 8]) works fine without slice() On Wed, Aug 19, 2009 at 2:54 PM, Citi, Luca wrote: > Another solution (elegant?? readable??) : >>> x[slice(-n or None)] # with n

Re: [Numpy-discussion] why does b[:-0] not work, and is there an elegant solution?

2009-08-19 Thread Sebastian Walter
I'm sure there is a better solution: In [1]: x = numpy.array([i for i in range(10)]) In [2]: foo = lambda n: -n if n!=0 else None : In [3]: x[:foo(1)] Out[3]: array([0, 1, 2, 3, 4, 5, 6, 7, 8]) In [4]: x[:foo(0)] Out[4]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) On Wed, Aug 19, 2009

Re: [Numpy-discussion] Multiplying Python float to numpy.array of objects works but fails with a numpy.float64, numpy Bug?

2009-06-07 Thread Sebastian Walter
th Goodman wrote: >> >> On Tue, Jun 2, 2009 at 1:42 AM, Sebastian Walter >> wrote: >> > Hello, >> > Multiplying a Python float to a numpy.array of objects works flawlessly >> > but not with a numpy.float64 . >> > I tried  numpy version '1.0.

Re: [Numpy-discussion] performance matrix multiplication vs. matlab

2009-06-05 Thread Sebastian Walter
On Fri, Jun 5, 2009 at 11:58 AM, David Cournapeau wrote: > Sebastian Walter wrote: >> On Thu, Jun 4, 2009 at 10:56 PM, Chris Colbert wrote: >> >>> I should update after reading the thread Sebastian linked: >>> >>> The current 1.3 version of numpy (don't

Re: [Numpy-discussion] performance matrix multiplication vs. matlab

2009-06-05 Thread Sebastian Walter
On Thu, Jun 4, 2009 at 10:56 PM, Chris Colbert wrote: > I should update after reading the thread Sebastian linked: > > The current 1.3 version of numpy (don't know about previous versions) uses > the optimized Atlas BLAS routines for numpy.dot() if numpy was compiled with > these libraries. I've ve

Re: [Numpy-discussion] performance matrix multiplication vs. matlab

2009-06-04 Thread Sebastian Walter
Have a look at this thread: http://www.mail-archive.com/numpy-discussion@scipy.org/msg13085.html The speed difference is probably due to the fact that the matrix multiplication does not call an optimized BLAS routine, e.g. the ATLAS BLAS. Sebastian On Thu, Jun 4, 2009 at 3:36 PM, Davi

Re: [Numpy-discussion] Multiplying Python float to numpy.array of objects works but fails with a numpy.float64, numpy Bug?

2009-06-02 Thread Sebastian Walter
On Tue, Jun 2, 2009 at 4:18 PM, Darren Dale wrote: > > > On Tue, Jun 2, 2009 at 10:09 AM, Keith Goodman wrote: >> >> On Tue, Jun 2, 2009 at 1:42 AM, Sebastian Walter >> wrote: >> > Hello, >> > Multiplying a Python float to a numpy.array of

[Numpy-discussion] Multiplying Python float to numpy.array of objects works but fails with a numpy.float64, numpy Bug?

2009-06-02 Thread Sebastian Walter
Hello, Multiplying a Python float to a numpy.array of objects works flawlessly but not with a numpy.float64 . I tried numpy version '1.0.4' on a 32 bit Linux and '1.2.1' on a 64 bit Linux: both raise the same exception. Is this a (known) bug? -- test.py -
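
The report is easiest to reproduce with a tiny stand-in class (hypothetical, since the original test.py is cut off). Per the thread, the numpy.float64 product raised an exception on numpy 1.0.4 and 1.2.1, while the plain Python float goes through the object's __rmul__:

--- start of sketch ---
import numpy as np

class Adouble:
    """Hypothetical stand-in for the object type in the truncated test.py."""
    def __init__(self, v):
        self.v = v
    def __mul__(self, other):
        return Adouble(self.v * float(other))
    __rmul__ = __mul__
    def __repr__(self):
        return "Adouble(%s)" % self.v

a = np.array([Adouble(1.0), Adouble(2.0)], dtype=object)

print(2.0 * a)                       # Python float: dispatches to Adouble.__rmul__
try:
    print(np.float64(2.0) * a)       # raised on the numpy versions named in the thread
except Exception as err:
    print("failed:", err)
--- end of sketch ---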

Re: [Numpy-discussion] sparse matrix dot product

2009-05-28 Thread Sebastian Walter
I'd be interested to see the benchmark ;) On Thu, May 28, 2009 at 4:14 PM, Nicolas Rougier wrote: > > Hi, > > I'm now testing dot product and using the following: > > import numpy as np, scipy.sparse as sp > > A = np.matrix(np.zeros((5,10))) > B = np.zeros((10,1)) > print (A*B).shape > print np

Re: [Numpy-discussion] linear algebra help

2009-05-18 Thread Sebastian Walter
2009/5/18 Stéfan van der Walt : > 2009/5/18 Sebastian Walter : >> B = numpy.dot(A.T, A) > > This multiplication should be avoided whenever possible -- you are > effectively squaring your condition number. Indeed. > > In the case where you have more rows than columns,

Re: [Numpy-discussion] linear algebra help

2009-05-18 Thread Sebastian Walter
Alternatively, to solve A x = b you could do import numpy import numpy.linalg B = numpy.dot(A.T, A) c = numpy.dot(A.T, b) x = numpy.linalg.solve(B, c) This is not the most efficient way to do it but at least you know exactly what's going on in your code. On Sun, May 17, 2009 at 7:21 PM, wrote: > O
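
As the reply above notes, forming A^T A squares the condition number; numpy's least-squares solver works on A directly. A sketch comparing the two on a hypothetical overdetermined system:

--- start of sketch ---
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))     # more rows than columns
b = rng.standard_normal(100)

x_ne = np.linalg.solve(A.T @ A, A.T @ b)      # normal equations (as in the snippet above)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)  # least squares on A itself: numerically safer

print(np.allclose(x_ne, x_ls))
--- end of sketch ---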

Re: [Numpy-discussion] (no subject)

2009-05-14 Thread Sebastian Walter
On Wed, May 13, 2009 at 10:18 PM, David J Strozzi wrote: > Hi, > > [You may want to edit the numpy homepage numpy.scipy.org to tell > people they must subscribe to post, and adding a link to > http://www.scipy.org/Mailing_Lists] > > > Many of you probably know of the interpreter yorick by Dave Mun

Re: [Numpy-discussion] hairy optimization problem

2009-05-07 Thread Sebastian Walter
hi mathew, 1) what does it mean if a value is None? I.e., what is larger: None or 3? The first thing I would do is convert the None to a number. 2) Are your arrays integer arrays or double arrays? It's much easier if they are doubles because then you could use standard methods for NLP problems,

Re: [Numpy-discussion] difficult optimization problem

2009-05-06 Thread Sebastian Walter
I tried looking at your question, but it's kind of unusable without some documentation. You need to give at least the following information: what kind of optimization problem? LP, NLP, mixed-integer LP, stochastic, semi-infinite, semidefinite? Most solvers require the problem in the following form
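
The preview cuts off before the form itself; a common canonical statement (matching the constrained formulation quoted in the dichotomy-optimization thread above, though conventions vary by solver):

--- start of sketch (LaTeX) ---
\min_{x \in \mathbb{R}^n} \; f(x)
\quad \text{subject to} \quad
lo \le A x + b \le up, \qquad
g(x) = 0, \qquad
h(x) \ge 0 .
--- end of sketch ---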

Re: [Numpy-discussion] MemoryError for computing eigen-vector on 10, 000*10, 000 matrix

2009-04-29 Thread Sebastian Walter
+1 to that Often, one is only interested in the largest or smallest eigenvalues/vectors of a problem. Then the methods of choice are iterative solvers, e.g. the Lanczos algorithm. If only the largest eigenvalue/vector is needed, you could try the power iteration. On Wed, Apr 29, 2009 at 7:49 AM, Zh
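
A bare-bones power iteration (for several extreme eigenpairs of a large sparse problem, scipy.sparse.linalg.eigsh is the usual tool):

--- start of sketch ---
import numpy as np

def power_iteration(A, n_iter=500):
    """Estimate the dominant eigenpair of A by repeated multiplication."""
    v = np.ones(A.shape[0])            # a random start vector is more typical
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = A @ v
        v = w / np.linalg.norm(w)
    return v @ A @ v, v                # Rayleigh quotient, eigenvector estimate

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_iteration(A)
print(lam, np.linalg.eigvalsh(A).max())   # both approximately 3.618
--- end of sketch ---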

Re: [Numpy-discussion] survey of freely available software for the solution of linear algebra problems

2009-04-14 Thread Sebastian Walter
I'm looking for a library that has linear algebra routines on objects of a self-written class (with overloaded operators +,-,*,/). An example would be truncated Taylor polynomials. One can perform all operations +,-,*,/ on truncated Taylor polynomials. Often it is also necessary to perform linear

Re: [Numpy-discussion] Automatic differentiation (was Re: second-order gradient)

2009-03-11 Thread Sebastian Walter
There are several possibilities, some of them are listed on http://en.wikipedia.org/wiki/Automatic_differentiation == pycppad http://www.seanet.com/~bradbell/pycppad/index.xml pycppad is a wrapper of the C++ library CppAD ( http://www.coin-or.org/CppAD/ ) the wrapper can do up to second order de

Re: [Numpy-discussion] using numpy functions on an array of objects

2009-02-01 Thread Sebastian Walter
On Sun, Feb 1, 2009 at 12:24 AM, Robert Kern wrote: > On Sat, Jan 31, 2009 at 10:30, Sebastian Walter > wrote: >> Wouldn't it be nice to have numpy a little more generic? >> All that would be needed was a little check of the arguments. >> >> If I do: >> n

Re: [Numpy-discussion] using numpy functions on an array of objects

2009-01-31 Thread Sebastian Walter
Wouldn't it be nice to have numpy a little more generic? All that would be needed is a little check of the arguments. If I do: numpy.trace(4) shouldn't numpy be smart enough to regard the 4 as a 1x1 array? numpy.sin(4) works! and if x = my_class(4) wouldn't it be nice if numpy.trace(x) would c

[Numpy-discussion] using numpy functions on an array of objects

2009-01-30 Thread Sebastian Walter
Hey, What is the best solution to get this code working? Anyone a good idea? -- test.py --- import numpy import numpy.linalg class afloat: def __init__(self,x): self.x = x def __add__(self,rhs):
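
The preview cuts the class off mid-definition. Purely as an illustration of the pattern being asked about (numpy ufuncs on object arrays dispatch to a method of the same name on each element), a hypothetical completion might look like:

--- start of sketch ---
import numpy as np

class AFloat:
    """Hypothetical scalar-like object standing in for the truncated 'afloat'."""
    def __init__(self, x):
        self.x = x
    def __add__(self, rhs):
        return AFloat(self.x + getattr(rhs, "x", rhs))
    def sin(self):
        # ufuncs on object arrays call a method of this name element-wise
        return AFloat(np.sin(self.x))
    def __repr__(self):
        return "AFloat(%s)" % self.x

a = np.array([AFloat(1.0), AFloat(2.0)], dtype=object)
print(np.sin(a))     # calls AFloat.sin on each element
print(a + a)         # uses AFloat.__add__
--- end of sketch ---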