[Numpy-discussion] higher accuracy in diagonalization

2014-10-27 Thread Sunghwan Choi
Dear all,

I am now diagonalizing a 200-by-200 symmetric matrix. But the two methods,
scipy.linalg.eigh and numpy.linalg.eigh, give significantly different results.
The results from the two methods differ on the order of 10^-4. One of them
is inaccurate, or both are inaccurate within that range. Which one is more
accurate? Are there any ways to control the accuracy of the
diagonalization? If you have any ideas, please let me know.

 

Sunghwan Choi

Ph.D. candidate

Department of Chemistry

KAIST (South Korea)

 

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] higher accuracy in diagonalization

2014-10-27 Thread Stefan van der Walt
On 2014-10-27 10:37:58, Sunghwan Choi sunghwancho...@gmail.com wrote:
 I am now diagonalizing a 200-by-200 symmetric matrix. But the two methods,
 scipy.linalg.eigh and numpy.linalg.eigh, give significantly different results.
 The results from the two methods differ on the order of 10^-4. One of them
 is inaccurate, or both are inaccurate within that range. Which one is more
 accurate? Are there any ways to control the accuracy of the
 diagonalization? If you have any ideas, please let me know.

My first (naive) attempt would be to set up a matrix, M, in sympy and
then use M.diagonalize() to find the symbolic expression of the
solution.  You can then do the same numerically to see which method
yields a result closest to the desired answer.
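That approach might look like the sketch below (assuming SymPy is installed; the 2-by-2 matrix is a small illustrative stand-in, not the poster's 200-by-200 matrix):

```python
import numpy as np
import sympy as sp

# Small symmetric example matrix (illustrative only -- not the poster's data)
M = sp.Matrix([[2, 1], [1, 2]])

# Exact symbolic diagonalization: M == P * D * P**-1
P, D = M.diagonalize()
exact = sorted(D[i, i] for i in range(2))   # symbolic eigenvalues: [1, 3]

# Numerical result, to compare against the exact answer
w = np.linalg.eigh(np.array(M.tolist(), dtype=float))[0]
print(exact, w)
```

The same comparison could then be repeated with scipy.linalg.eigh to see which routine lands closer to the symbolic values.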

Stéfan


Re: [Numpy-discussion] ANN: NumPy 1.9.1 release candidate

2014-10-27 Thread Jens Nielsen
Thanks Julian,

Just confirming that this (as expected) solves the issues that we have seen
with gradient in Matplotlib with 1.9.0


best regards

Jens


On Sun, Oct 26, 2014 at 5:13 PM, Julian Taylor 
jtaylor.deb...@googlemail.com wrote:

 Hi,

 We have finally finished the first release candidate of NumPy 1.9.1;
 sorry for the week's delay.
 The 1.9.1 release will as usual be a bugfix only release to the 1.9.x
 series.
 The tarballs and win32 binaries are available on sourceforge:
 https://sourceforge.net/projects/numpy/files/NumPy/1.9.1rc1/

 If no regressions show up, the final release is planned for next week.
 The upgrade is recommended for all users of the 1.9.x series.

 The following issues have been fixed:
 * gh-5184: restore linear edge behaviour of gradient to as it was before 1.9.
   The second order behaviour is available via the `edge_order` keyword
 * gh-4007: workaround Accelerate sgemv crash on OSX 10.9
 * gh-5100: restore object dtype inference from iterable objects without
 `len()`
 * gh-5163: avoid gcc-4.1.2 (red hat 5) miscompilation causing a crash
 * gh-5138: fix nanmedian on arrays containing inf
 * gh-5203: copy inherited masks in MaskedArray.__array_finalize__
 * gh-2317: genfromtxt did not handle filling_values=0 correctly
 * gh-5067: restore api of npy_PyFile_DupClose in python2
 * gh-5063: cannot convert invalid sequence index to tuple
 * gh-5082: Segmentation fault with argmin() on unicode arrays
 * gh-5095: don't propagate subtypes from np.where
 * gh-5104: np.inner segfaults with SciPy's sparse matrices
 * gh-5136: Import dummy_threading if importing threading fails
 * gh-5148: Make numpy import when run with Python flag '-OO'
 * gh-5147: Einsum double contraction in particular order causes ValueError
 * gh-479: Make f2py work with intent(in out)
 * gh-5170: Make python2 .npy files readable in python3
 * gh-5027: Use 'll' as the default length specifier for long long
 * gh-4896: fix build error with MSVC 2013 caused by C99 complex support
 * gh-4465: Make PyArray_PutTo respect writeable flag
 * gh-5225: fix crash when using arange on datetime without dtype set
 * gh-5231: fix build in c99 mode

 Source tarballs, windows installers and release notes can be found at
 https://sourceforge.net/projects/numpy/files/NumPy/1.9.1rc1/

 Cheers,
 Julian Taylor








Re: [Numpy-discussion] higher accuracy in diagonalization

2014-10-27 Thread Daπid
On 27 October 2014 09:37, Sunghwan Choi sunghwancho...@gmail.com wrote:

 One of them is inaccurate, or both are inaccurate within that
 range. Which one is more accurate?


You can check it yourself using the eigenvectors. The cosine distance
between v and M.dot(v) will give you the error in the eigenvectors, and the
difference between ||lambda*v|| and ||M.dot(v)|| gives the error in the
eigenvalue. I would also check the condition numbers; maybe your matrix is
just not well conditioned. You would have to look at preconditioners.
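A minimal sketch of that residual check (with a random symmetric test matrix standing in for the poster's data):

```python
import numpy as np

np.random.seed(0)
A = np.random.randn(200, 200)
M = (A + A.T) / 2                       # a symmetric test matrix

w, V = np.linalg.eigh(M)                # eigenvalues w, eigenvectors in columns of V

# Per-eigenpair residual: ||M v - lambda v|| should be on the order of
# machine epsilon times ||M|| for a backward-stable solver.
residuals = np.linalg.norm(M.dot(V) - V * w, axis=0)
print(residuals.max())

# Conditioning of the matrix itself
print(np.linalg.cond(M))
```

Running the same check on the scipy.linalg.eigh output lets you compare the two routines directly.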


/David.


[Numpy-discussion] multi-dimensional c++ proposal

2014-10-27 Thread Neal Becker
The multi-dimensional c++ stuff is interesting (about time!)

http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2014/n3851.pdf

-- 
-- Those who don't understand recursion are doomed to repeat it



[Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-27 Thread D. Michael McFarland
A recent post raised a question about differences in results obtained
with numpy.linalg.eigh() and scipy.linalg.eigh(), documented at
http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html#numpy.linalg.eigh
and
http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigh.html#scipy.linalg.eigh,
respectively.  It is clear that these functions address different
mathematical problems (among other things, the SciPy routine can solve
the generalized as well as standard eigenproblems); I am not concerned
here with numerical differences in the results for problems both should
be able to solve (the author of the original post received useful
replies in that thread).

What I would like to ask about is the situation this illustrates, where
both NumPy and SciPy provide similar functionality (sometimes identical,
to judge by the documentation).  Is there some guidance on which is to
be preferred?  I could argue that using only NumPy when possible avoids
unnecessary dependence on SciPy in some code, or that using SciPy
consistently makes for a single interface and so is less error prone.
Is there a rule of thumb for cases where SciPy names shadow NumPy names?
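To make the overlap concrete, here is a hedged sketch (the matrices are arbitrary examples): for the standard symmetric eigenproblem the two functions are interchangeable, while the generalized problem is SciPy-only.

```python
import numpy as np
from scipy import linalg as sla

np.random.seed(0)
A = np.random.randn(5, 5)
A = (A + A.T) / 2                      # symmetric
B = np.random.randn(5, 5)
B = B.dot(B.T) + 5 * np.eye(5)         # symmetric positive definite

w_np, _ = np.linalg.eigh(A)            # NumPy: standard problem A v = w v
w_sp, _ = sla.eigh(A)                  # SciPy: same standard problem
w_gen, _ = sla.eigh(A, B)              # SciPy only: generalized A v = w B v
```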

I've used Python for a long time, but have only recently returned to
doing serious numerical work with it.  The tools are very much improved,
but sometimes, like now, I feel I'm missing the obvious.  I would
appreciate pointers to any relevant documentation, or just a summary of
conventional wisdom on the topic.

Regards,
Michael


[Numpy-discussion] Fwd: numpy.i and std::complex

2014-10-27 Thread Glen Mabey
Hello,

I was very excited to learn about numpy.i for easy numpy+swigification of C 
code -- it's really handy.

Knowing that swig wraps C code, I wasn't too surprised that there was the issue 
with complex data types (as described at 
http://docs.scipy.org/doc/numpy/reference/swig.interface-file.html#other-common-types-complex),
 but still it was pretty disappointing because most of my data is complex, and 
I'm invoking methods written to use C++'s std::complex class.

After quite a bit of puzzling and not much help from previous mailing list 
posts, I created this very brief but very useful file, which I call 
numpy_std_complex.i --

/* -*- C -*-  (not really, but good for syntax highlighting) */
#ifdef SWIGPYTHON

%include numpy.i

%include std_complex.i

%numpy_typemaps(std::complex<float>,  NPY_CFLOAT , int)
%numpy_typemaps(std::complex<double>, NPY_CDOUBLE, int)

#endif /* SWIGPYTHON */


I'd really like for this to be included alongside numpy.i -- but maybe I 
overestimate the number of numpy users who use complex data (let your voice be 
heard!) and who also end up using std::complex in C++ land.

Or if anyone wants to improve upon this usage I would be very happy to hear 
about what I'm missing.

I'm sure there's a documented way to submit this file to the git repo, but let 
me simultaneously ask whether list subscribers think this is worthwhile and ask 
someone to add+push it for me …

Thanks,
Glen Mabey


Re: [Numpy-discussion] numpy.i and std::complex

2014-10-27 Thread Sturla Molden
Glen Mabey gma...@swri.org wrote:
 
 I'd really like for this to be included alongside numpy.i -- but maybe I
 overestimate the number of numpy users who use complex data (let your
 voice be heard!) and who also end up using std::complex in C++ land.

I don't think you do. But perhaps you overestimate the number of NumPy
users who use Swig?

Cython seems to be the preferred wrapping tool today, and it understands
complex numbers:

cdef double complex J = 0.0 + 1j

If you tell Cython to emit C++, this will result in code that uses
std::complex<double>.

Sturla



[Numpy-discussion] Accept numpy arrays on arguments of numpy.testing.assert_approx_equal()

2014-10-27 Thread Edison Gustavo Muenz
I’ve implemented support for numpy arrays in the arguments of
numpy.testing.assert_approx_equal() and have issued a pull request
(https://github.com/numpy/numpy/pull/5219) on GitHub.

I don’t know if I should be sending the message to the list to notify about
this, but since I’m new to the *numpy-dev* list I think it never hurts to
say hi :)
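For context, the existing scalar behaviour looks like this (example values chosen here for illustration; the array-argument support is what the pull request adds):

```python
from numpy.testing import assert_approx_equal

# Passes: the two values agree to 8 significant digits
assert_approx_equal(1.23456789, 1.23456780, significant=8)

# Raises AssertionError: they differ in the 9th significant digit
try:
    assert_approx_equal(1.23456789, 1.23456780, significant=9)
except AssertionError:
    print("not equal to 9 significant digits")
```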


Re: [Numpy-discussion] numpy.i and std::complex

2014-10-27 Thread Glen Mabey

On Oct 27, 2014, at 10:45 AM, Sturla Molden sturla.mol...@gmail.com
 wrote:

 Glen Mabey gma...@swri.org wrote:
 
 I'd really like for this to be included alongside numpy.i -- but maybe I
 overestimate the number of numpy users who use complex data (let your
 voice be heard!) and who also end up using std::complex in C++ land.
 
 I don't think you do. But perhaps you overestimate the number of NumPy
 users who use Swig?

Likely so.

 Cython seems to be the preferred wrapping tool today, and it understands
 complex numbers:
 
cdef double complex J = 0.0 + 1j
 
 If you tell Cython to emit C++, this will result in code that uses
 std::complex<double>.

I chose swig after reviewing the options listed here, and I didn't see cython 
on the list:

http://docs.scipy.org/doc/numpy/user/c-info.python-as-glue.html

I guess that's because cython is different language, right?  So, if I want to 
interactively call C++ functions from say ipython, then is cython really an 
option?

Thanks for the feedback --
Glen


Re: [Numpy-discussion] [EXTERNAL] Fwd: numpy.i and std::complex

2014-10-27 Thread Bill Spotz
Glen,

Supporting std::complex was just low enough priority for me that I decided to 
wait until someone expressed interest ... and now, many years later, someone 
finally has.

I would be happy to include this into numpy.i, but I would like to see some 
tests in the numpy repository demonstrating that it works.  These could be 
relatively short and simple, and since float and double are the only scalar 
data types that I could foresee supporting, there would not be a need for 
testing the large numbers of data types that the other tests cover.  

I would also want to protect the references to C++ objects with '#ifdef 
__cplusplus', but that is easy enough.

-Bill

On Oct 27, 2014, at 9:06 AM, Glen Mabey gma...@swri.org wrote:

 Hello,
 
 I was very excited to learn about numpy.i for easy numpy+swigification of C 
 code -- it's really handy.
 
 Knowing that swig wraps C code, I wasn't too surprised that there was the 
 issue with complex data types (as described at 
 http://docs.scipy.org/doc/numpy/reference/swig.interface-file.html#other-common-types-complex),
  but still it was pretty disappointing because most of my data is complex, 
 and I'm invoking methods written to use C++'s std::complex class.
 
 After quite a bit of puzzling and not much help from previous mailing list 
 posts, I created this very brief but very useful file, which I call 
 numpy_std_complex.i --
 
 /* -*- C -*-  (not really, but good for syntax highlighting) */
 #ifdef SWIGPYTHON
 
 %include numpy.i
 
 %include std_complex.i
 
 %numpy_typemaps(std::complex<float>,  NPY_CFLOAT , int)
 %numpy_typemaps(std::complex<double>, NPY_CDOUBLE, int)
 
 #endif /* SWIGPYTHON */
 
 
 I'd really like for this to be included alongside numpy.i -- but maybe I 
 overestimate the number of numpy users who use complex data (let your voice 
 be heard!) and who also end up using std::complex in C++ land.
 
 Or if anyone wants to improve upon this usage I would be very happy to hear 
 about what I'm missing.
 
 I'm sure there's a documented way to submit this file to the git repo, but 
 let me simultaneously ask whether list subscribers think this is worthwhile 
 and ask someone to add+push it for me …
 
 Thanks,
 Glen Mabey

** Bill Spotz  **
** Sandia National Laboratories  Voice: (505)845-0170  **
** P.O. Box 5800 Fax:   (505)284-0154  **
** Albuquerque, NM 87185-0370Email: wfsp...@sandia.gov **



Re: [Numpy-discussion] [EXTERNAL] Re: numpy.i and std::complex

2014-10-27 Thread Bill Spotz
Python is its own language, but it allows you to import C and C++ code, thus 
creating an interface to these languages.  Just as with SWIG, you would import 
a module written in Cython that gives you access to underlying C/C++ code.

Cython is very nice for a lot of applications, but it is not the best tool for 
every job of designing an interface.  SWIG is still preferable if you have a 
large existing code base to wrap or if you want to support more target 
languages than just Python.  I have a specific need for cross-language 
polymorphism, and SWIG is much better at that than Cython is.  It all depends.

Looks like somebody needs to update the c-info.python-as-glue.html page.

-Bill

On Oct 27, 2014, at 10:04 AM, Glen Mabey gma...@swri.org wrote:

 On Oct 27, 2014, at 10:45 AM, Sturla Molden sturla.mol...@gmail.com
 wrote:
 
 Glen Mabey gma...@swri.org wrote:
 
 I'd really like for this to be included alongside numpy.i -- but maybe I
 overestimate the number of numpy users who use complex data (let your
 voice be heard!) and who also end up using std::complex in C++ land.
 
 I don't think you do. But perhaps you overestimate the number of NumPy
 users who use Swig?
 
 Likely so.
 
 Cython seems to be the preferred wrapping tool today, and it understands
 complex numbers:
 
   cdef double complex J = 0.0 + 1j
 
  If you tell Cython to emit C++, this will result in code that uses
  std::complex<double>.
 
 I chose swig after reviewing the options listed here, and I didn't see cython 
 on the list:
 
 http://docs.scipy.org/doc/numpy/user/c-info.python-as-glue.html
 
 I guess that's because cython is different language, right?  So, if I want to 
 interactively call C++ functions from say ipython, then is cython really an 
 option?
 
 Thanks for the feedback --
 Glen

** Bill Spotz  **
** Sandia National Laboratories  Voice: (505)845-0170  **
** P.O. Box 5800 Fax:   (505)284-0154  **
** Albuquerque, NM 87185-0370Email: wfsp...@sandia.gov **







Re: [Numpy-discussion] [EXTERNAL] Re: numpy.i and std::complex

2014-10-27 Thread Bill Spotz
Oops, I meant 'Cython is its own language,' not Python (although Python 
qualifies, too, just not in this context).

Also, Pyrex, listed in the c-info.python-as-glue.html page, was the pre-cursor 
to Cython.

-Bill

On Oct 27, 2014, at 10:20 AM, Bill Spotz wfsp...@sandia.gov wrote:

 Python is its own language, but it allows you to import C and C++ code, thus 
 creating an interface to these languages.  Just as with SWIG, you would 
 import a module written in Cython that gives you access to underlying C/C++ 
 code.
 
 Cython is very nice for a lot of applications, but it is not the best tool 
 for every job of designing an interface.  SWIG is still preferable if you 
 have a large existing code base to wrap or if you want to support more target 
 languages than just Python.  I have a specific need for cross-language 
 polymorphism, and SWIG is much better at that than Cython is.  It all depends.
 
 Looks like somebody needs to update the c-info.python-as-glue.html page.
 
 -Bill
 
 On Oct 27, 2014, at 10:04 AM, Glen Mabey gma...@swri.org wrote:
 
 On Oct 27, 2014, at 10:45 AM, Sturla Molden sturla.mol...@gmail.com
 wrote:
 
 Glen Mabey gma...@swri.org wrote:
 
 I'd really like for this to be included alongside numpy.i -- but maybe I
 overestimate the number of numpy users who use complex data (let your
 voice be heard!) and who also end up using std::complex in C++ land.
 
 I don't think you do. But perhaps you overestimate the number of NumPy
 users who use Swig?
 
 Likely so.
 
 Cython seems to be the preferred wrapping tool today, and it understands
 complex numbers:
 
  cdef double complex J = 0.0 + 1j
 
  If you tell Cython to emit C++, this will result in code that uses
  std::complex<double>.
 
 I chose swig after reviewing the options listed here, and I didn't see 
 cython on the list:
 
 http://docs.scipy.org/doc/numpy/user/c-info.python-as-glue.html
 
 I guess that's because cython is different language, right?  So, if I want 
 to interactively call C++ functions from say ipython, then is cython really 
 an option?
 
 Thanks for the feedback --
 Glen
 
 ** Bill Spotz  **
 ** Sandia National Laboratories  Voice: (505)845-0170  **
 ** P.O. Box 5800 Fax:   (505)284-0154  **
 ** Albuquerque, NM 87185-0370Email: wfsp...@sandia.gov **
 
 
 
 
 

** Bill Spotz  **
** Sandia National Laboratories  Voice: (505)845-0170  **
** P.O. Box 5800 Fax:   (505)284-0154  **
** Albuquerque, NM 87185-0370Email: wfsp...@sandia.gov **







Re: [Numpy-discussion] numpy.i and std::complex

2014-10-27 Thread Sturla Molden
Glen Mabey gma...@swri.org wrote:

 I chose swig after reviewing the options listed here, and I didn't see cython 
 on the list:
 
 http://docs.scipy.org/doc/numpy/user/c-info.python-as-glue.html

It's because that list is old and has not been updated. It has the
predecessor to Cython, Pyrex, but they are very different now.

Both SciPy and NumPy have Cython as a build dependency, as do projects
like scikit-learn, scikit-image, and statsmodels.

If you find C++ projects which use Swig (wxPython, PyWin32) or SIP (PyQt)
it is mainly because they are older than Cython. A more recent addition,
PyZMQ, uses Cython to wrap C++.


 I guess that's because cython is different language, right?  So, if I
 want to interactively call C++ functions from say ipython, then is cython 
 really an option?

You can use Cython to call C++ functions in ipython and ipython notebook.

cythonmagic takes care of that :-)


Sturla



Re: [Numpy-discussion] [EXTERNAL] Re: numpy.i and std::complex

2014-10-27 Thread Sturla Molden
Bill Spotz wfsp...@sandia.gov wrote:
 Oops, I meant 'Cython is its own language,' not Python (although
 Python qualifies, too, just not in context).
 
 Also, Pyrex, listed in the c-info.python-as-glue.html page, was the 
 pre-cursor to Cython.

But when it comes to interfacing NumPy, they are really not comparable. :-)

Sturla



[Numpy-discussion] ODE how to?

2014-10-27 Thread Vincent Davis
It's been too long since I have done differential equations, and I am not
sure of the best tools to solve this problem.
I am starting with a basic kinematic equation for the balance of forces:
P/v - ((A*Cw*Rho*v^2)/2 + m*g*Crl + m*g*slope) = m*a
P: power
x: position
v: velocity, x'
a: acceleration, x''
(A*Cw*Rho*v^2)/2 : air resistance
m*g*Crl : rolling resistance
m*g*slope : potential energy (elevation)

I am modifying the above equation so that air velocity and slope are
dependent on location x.
Vair = v + f(x), where f(x) is the weather component and a function of
location x.
The same goes for slope: slope = g(x).

Power is a function I want to optimize/find to minimize time, but for now I
just want to simulate; maybe something like
P = 2500/(v+1).
I will have restrictions on P but am not interested in that now.
The course I want to simulate therefore defines slope and wind speed, and
is of a fixed distance.

I have played with some of the simple scipy.integrate.odeint examples. I
get that I need to define a system of equations but am not really sure of
the rules for doing so. A little help would be greatly appreciated.
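To make the setup concrete, here is one hedged sketch of how the system could be written for scipy.integrate.odeint. The state is y = [x, v], so x' = v and v' = a from the force balance; every constant and the forms of f(x), g(x), and P(v) below are made-up placeholders, not values from the post:

```python
import numpy as np
from scipy.integrate import odeint

# Placeholder constants (illustrative only)
m, g = 75.0, 9.81            # mass [kg], gravity [m/s^2]
A, Cw, rho = 0.5, 0.9, 1.2   # frontal area, drag coefficient, air density
Crl = 0.004                  # rolling-resistance coefficient

def headwind(x):             # f(x): wind component as a function of position
    return 2.0 * np.cos(x / 300.0)

def slope(x):                # g(x): grade as a function of position
    return 0.02 * np.sin(x / 500.0)

def power(v):                # placeholder power model, P = 2500/(v+1)
    return 2500.0 / (v + 1.0)

def rhs(y, t):
    """Right-hand side of the system: y = [x, v], returns [x', v']."""
    x, v = y
    v_air = v + headwind(x)
    force = (power(v) / max(v, 0.1)           # P/v, guarded near v = 0
             - 0.5 * A * Cw * rho * v_air**2  # air resistance
             - m * g * Crl                    # rolling resistance
             - m * g * slope(x))              # grade
    return [v, force / m]

t = np.linspace(0.0, 60.0, 601)
sol = odeint(rhs, [0.0, 1.0], t)   # start at x = 0 m, v = 1 m/s
```

sol[:, 0] is then position over time and sol[:, 1] is velocity; optimizing P(v) over the course would be a separate (optimal-control) problem on top of this simulation.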


Vincent Davis
720-301-3003


Re: [Numpy-discussion] multi-dimensional c++ proposal

2014-10-27 Thread Sturla Molden
On 27/10/14 13:14, Neal Becker wrote:
 The multi-dimensional c++ stuff is interesting (about time!)

 http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2014/n3851.pdf

OMG, that API is about as awful as it gets. Obviously it is written by 
two computer scientists who do not understand what scientific and 
technical computing actually needs. There is a reason why many 
scientists still prefer Fortran to C++, and I think this proposal shows 
us why. An API like that will never be suitable for implementing complex 
numerical algorithms. It will fail horribly because it is *not readable*. 
I have no doubt it will be accepted though, because the C++ standards 
committee tends to accept unusable things.

Sturla










Re: [Numpy-discussion] numpy.i and std::complex

2014-10-27 Thread MinRK

 It's because that list is old and has not been updated. It has the
 predecessor to Cython, Pyrex, but they are very different now.

 Both SciPy and NumPy has Cython as a build dependency, and also projects
 like scikit-learn, scikit-image, statsmodels.

 If you find C++ projects which use Swig (wxPython, PyWin32) or SIP (PyQt)
 it is mainly because they are older than Cython. A more recent addition,
 PyZMQ, use Cython to wrap C++.


Just to clarify that, while PyZMQ wraps a library written in C++, libzmq's
public API is C, not C++. PyZMQ does not use any of Cython's C++
functionality. I have done other similar (unfortunately not public)
projects that wrap actual C++ libraries with Cython, and I have been happy
with Cython's C++ support[1].

-MinRK

[1] At least as happy as I was with the wrapped C++ code, anyway.


Re: [Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-27 Thread Eelco Hoogendoorn
The same occurred to me when reading that question. My personal opinion is
that such functionality should be deprecated from numpy. I don't know who
said this, but it really stuck with me: the power of numpy is first and
foremost in its being a fantastic interface, not in being a library.

There is nothing more annoying than every project having its own array
type. The fact that the whole scientific python stack can so seamlessly
communicate is where all good things begin.

In my opinion, that is what numpy should focus on: basic data structures,
and tools for manipulating them. Linear algebra is way too high level for
numpy imo, and used by only a small subset of its 'matlab-like' users.

When I get serious about linear algebra or FFTs or what have you, I'd rather
import an extra module that wraps a specific library.

On Mon, Oct 27, 2014 at 2:26 PM, D. Michael McFarland dm...@dmmcf.net
wrote:

 A recent post raised a question about differences in results obtained
 with numpy.linalg.eigh() and scipy.linalg.eigh(), documented at

 http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html#numpy.linalg.eigh
 and

 http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigh.html#scipy.linalg.eigh
 ,
 respectively.  It is clear that these functions address different
 mathematical problems (among other things, the SciPy routine can solve
 the generalized as well as standard eigenproblems); I am not concerned
 here with numerical differences in the results for problems both should
 be able to solve (the author of the original post received useful
 replies in that thread).

 What I would like to ask about is the situation this illustrates, where
 both NumPy and SciPy provide similar functionality (sometimes identical,
 to judge by the documentation).  Is there some guidance on which is to
 be preferred?  I could argue that using only NumPy when possible avoids
 unnecessary dependence on SciPy in some code, or that using SciPy
 consistently makes for a single interface and so is less error prone.
 Is there a rule of thumb for cases where SciPy names shadow NumPy names?

 I've used Python for a long time, but have only recently returned to
 doing serious numerical work with it.  The tools are very much improved,
 but sometimes, like now, I feel I'm missing the obvious.  I would
 appreciate pointers to any relevant documentation, or just a summary of
 conventional wisdom on the topic.

 Regards,
 Michael



Re: [Numpy-discussion] numpy.i and std::complex

2014-10-27 Thread Robert Kern
On Mon, Oct 27, 2014 at 4:27 PM, Sturla Molden sturla.mol...@gmail.com wrote:
 Glen Mabey gma...@swri.org wrote:

 I chose swig after reviewing the options listed here, and I didn't see 
 cython on the list:

 http://docs.scipy.org/doc/numpy/user/c-info.python-as-glue.html

 It's because that list is old and has not been updated. It has the
 predecessor to Cython, Pyrex, but they are very different now.

 Both SciPy and NumPy has Cython as a build dependency, and also projects
 like scikit-learn, scikit-image, statsmodels.

 If you find C++ projects which use Swig (wxPython, PyWin32) or SIP (PyQt)
 it is mainly because they are older than Cython. A more recent addition,
 PyZMQ, use Cython to wrap C++.

SWIG is a perfectly reasonable tool that is still used on new
projects, and is a supported way of building extensions against numpy.
Please stop haranguing the new guy for not knowing things that you
know. This thread is about extending that support, a perfectly fine
and decent thing to do.

-- 
Robert Kern


Re: [Numpy-discussion] numpy.i and std::complex

2014-10-27 Thread Sturla Molden
Robert Kern robert.k...@gmail.com wrote:

 Please stop haranguing the new guy for not knowing things that you
 know. 

I am not doing any of that. You are the only one haranguing here. I usually
ignore your frequent impolite comments, but I will make an exception this
time and ask you to shut up.

Sturla



[Numpy-discussion] Why ndarray provides four ways to flatten?

2014-10-27 Thread Alexander Belopolsky
Given an n-dim array x, I can do

1. x.flat
2. x.flatten()
3. x.ravel()
4. x.reshape(-1)

Each of these expressions returns a flat version of x with some
variations.  Why does NumPy implement four different ways to do essentially
the same thing?


Re: [Numpy-discussion] Why ndarray provides four ways to flatten?

2014-10-27 Thread Yuxiang Wang
Hi Alexander,

In my opinion - because they don't do the same thing, especially when
you think in lower-level terms.

ndarray.flat returns an iterator; ndarray.flatten() returns a copy;
ndarray.ravel() only makes copies when necessary; ndarray.reshape() is
more general purpose, even though you can use it to flatten arrays.

They are very distinct in behavior - for example, copies and views may be
laid out in memory very differently, and you would have to pay attention
to the strides if you are passing them down to C/Fortran code. (Correct me
if I am wrong, please.)
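A small sketch of the copy-versus-view distinction described above:

```python
import numpy as np

x = np.arange(6).reshape(2, 3)

flat_copy = x.flatten()    # always a new copy
flat_view = x.ravel()      # a view here, since x is C-contiguous
flat_rs   = x.reshape(-1)  # also a view here
it        = x.flat         # a flatiter over the original data

x[0, 0] = 99

print(flat_copy[0])        # 0  -- the copy is unaffected
print(flat_view[0])        # 99 -- the views see the change
print(flat_rs[0])          # 99
print(it[0])               # 99 -- the iterator reads the original array
```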

-Shawn

On Mon, Oct 27, 2014 at 8:06 PM, Alexander Belopolsky ndar...@mac.com wrote:
 Given an n-dim array x, I can do

 1. x.flat
 2. x.flatten()
 3. x.ravel()
 4. x.reshape(-1)

 Each of these expressions returns a flat version of x with some variations.
 Why does NumPy implement four different ways to do essentially the same
 thing?






-- 
Yuxiang Shawn Wang
Gerling Research Lab
University of Virginia
yw...@virginia.edu
+1 (434) 284-0836
https://sites.google.com/a/virginia.edu/yw5aj/


Re: [Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-27 Thread josef.pktd
On Mon, Oct 27, 2014 at 2:24 PM, Eelco Hoogendoorn 
hoogendoorn.ee...@gmail.com wrote:

 The same occurred to me when reading that question. My personal opinion is
 that such functionality should be deprecated from numpy. I don't know who
 said this, but it really stuck with me: but the power of numpy is first and
 foremost in it being a fantastic interface, not in being a library.

 There is nothing more annoying than every project having its own array
 type. The fact that the whole scientific python stack can so seamlessly
 communicate is where all good things begin.

 In my opinion, that is what numpy should focus on; basic data structures,
 and tools for manipulating them. Linear algebra is way too high level for
 numpy imo, and used by only a small subsets of its 'matlab-like' users.

 When I get serious about linear algebra or ffts or what have you, id
 rather import an extra module that wraps a specific library.


We are not always getting serious about linalg; just a quick call to pinv
or qr or matrix_rank or similar doesn't necessarily mean we need a linalg
library with all the advanced options.

@  matrix operations and linear algebra are basic stuff.
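For instance, a "quick call" of this kind needs nothing beyond numpy (the data here is made up for illustration):

```python
import numpy as np

np.random.seed(0)
X = np.random.randn(20, 3)
y = X.dot([1.0, -2.0, 0.5]) + 0.01 * np.random.randn(20)

beta = np.linalg.pinv(X).dot(y)     # quick least-squares fit via pseudoinverse
rank = np.linalg.matrix_rank(X)     # rank of the design matrix
```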




 On Mon, Oct 27, 2014 at 2:26 PM, D. Michael McFarland dm...@dmmcf.net
 wrote:

 A recent post raised a question about differences in results obtained
 with numpy.linalg.eigh() and scipy.linalg.eigh(), documented at

 http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html#numpy.linalg.eigh
 and

 http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigh.html#scipy.linalg.eigh
 ,
 respectively.  It is clear that these functions address different
 mathematical problems (among other things, the SciPy routine can solve
 the generalized as well as standard eigenproblems); I am not concerned
 here with numerical differences in the results for problems both should
 be able to solve (the author of the original post received useful
 replies in that thread).

 What I would like to ask about is the situation this illustrates, where
 both NumPy and SciPy provide similar functionality (sometimes identical,
 to judge by the documentation).  Is there some guidance on which is to
 be preferred?  I could argue that using only NumPy when possible avoids
 unnecessary dependence on SciPy in some code, or that using SciPy
 consistently makes for a single interface and so is less error prone.
 Is there a rule of thumb for cases where SciPy names shadow NumPy names?

 I've used Python for a long time, but have only recently returned to
 doing serious numerical work with it.  The tools are very much improved,
 but sometimes, like now, I feel I'm missing the obvious.  I would
 appreciate pointers to any relevant documentation, or just a summary of
 conventional wisdom on the topic.



Just as opinion as user:

Most of the time I don't care and treat this just as different versions.
For example in the linalg case, I use by default numpy.linalg and switch to
scipy if I need the extras.
pinv is the only one that I ever seriously compared.
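
One concrete example of such a scipy extra (a small sketch):
scipy.linalg.eigh can solve the generalized symmetric eigenproblem
a v = w b v, which numpy.linalg.eigh cannot:

```python
import numpy as np
from scipy.linalg import eigh

a = np.array([[2., 1.],
              [1., 3.]])
b = np.array([[1., 0.],
              [0., 2.]])      # symmetric positive definite

w, v = eigh(a, b)             # generalized problem: a v = w b v
for i in range(2):
    assert np.allclose(a.dot(v[:, i]), w[i] * b.dot(v[:, i]))
```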

Some details are nicer, np.linalg.qr(x, mode='r') returns the reduced
matrix instead of the full matrix as does scipy.linalg.
np.linalg.pinv is faster but maybe slightly less accurate (or has defaults
that make it less accurate in corner cases). scipy often has more overhead
(and an isfinite check by default).

I just checked, I didn't even know scipy.linalg also has an `inv`. One of
my arguments for np.linalg would have been that it's easy to switch between
inv and pinv.
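
For instance (a quick sketch of the two details mentioned above):

```python
import numpy as np

np.random.seed(0)
a = np.random.rand(5, 3)

# np.linalg.qr with mode='r' returns just the reduced triangular factor
r = np.linalg.qr(a, mode='r')
assert r.shape == (3, 3)

q_full, r_full = np.linalg.qr(a)       # default mode='reduced'
assert np.allclose(q_full.dot(r_full), a)
assert np.allclose(np.abs(r), np.abs(r_full))

# switching between inv and pinv is a one-word change in numpy
square = a.T.dot(a)                    # nonsingular here
assert np.allclose(np.linalg.inv(square), np.linalg.pinv(square))
```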

For fft I use mostly scipy, IIRC.   (scipy's fft imports numpy's fft,
partially?)


Essentially, I don't care most of the time that there are different ways of
doing essentially the same thing, but some better information about the
differences would be useful.

Josef




 Regards,
 Michael







Re: [Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-27 Thread Sturla Molden
josef.p...@gmail.com wrote:

 For fft I use mostly scipy, IIRC.   (scipy's fft imports numpy's fft,
 partially?)

No. SciPy uses the Fortran library FFTPACK (wrapped with f2py) and NumPy
uses a smaller C library called fftpack_lite. Algorithmically they are
similar, but fftpack_lite has fewer features (e.g. no DCT). scipy.fftpack
does not import numpy.fft. Neither of these libraries are very fast, but
usually they are fast enough for practical purposes. If we really need a
kick-ass fast FFT we need to go to libraries like FFTW, Intel MKL or
Apple's Accelerate Framework, or even use tools like CUDA or OpenCL to run
the FFT on the GPU. But using such tools takes more coding (and reading API
specifications) than the convenience of just using the FFTs already in
NumPy or SciPy. So if you count your own time as well, it might not be the
case that FFTW or MKL are the faster FFTs.
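
For example, the DCT mentioned above is only in SciPy (a quick sketch):

```python
import numpy as np
from scipy.fftpack import dct, idct

x = np.array([1., 2., 3., 4.])
y = dct(x, norm='ortho')                      # type-II DCT
assert np.allclose(idct(y, norm='ortho'), x)  # its inverse recovers x

# numpy.fft has no DCT
assert not hasattr(np.fft, 'dct')
```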

Sturla



Re: [Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-27 Thread Sturla Molden
Sturla Molden sturla.mol...@gmail.com wrote:

 If we really need a
 kick-ass fast FFT we need to go to libraries like FFTW, Intel MKL or
 Apple's Accelerate Framework, 

I should perhaps also mention FFTS here, which claim to be faster than FFTW
and has a BSD licence:

http://anthonix.com/ffts/index.html



Re: [Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-27 Thread josef.pktd
On Mon, Oct 27, 2014 at 10:50 PM, Sturla Molden sturla.mol...@gmail.com
wrote:

 josef.p...@gmail.com wrote:

  For fft I use mostly scipy, IIRC.   (scipy's fft imports numpy's fft,
  partially?)

 No. SciPy uses the Fortran library FFTPACK (wrapped with f2py) and NumPy
 uses a smaller C library called fftpack_lite. Algorithmically they are
 similar, but fftpack_lite has fewer features (e.g. no DCT). scipy.fftpack
 does not import numpy.fft. Neither of these libraries are very fast, but
 usually they are fast enough for practical purposes. If we really need a
 kick-ass fast FFT we need to go to libraries like FFTW, Intel MKL or
 Apple's Accelerate Framework, or even use tools like CUDA or OpenCL to run
 the FFT on the GPU. But using such tools takes more coding (and reading API
 specifications) than the convenience of just using the FFTs already in
 NumPy or SciPy. So if you count your own time as well, it might not be the
 case that FFTW or MKL are the faster FFTs.



Ok, I didn't remember correctly.

I didn't use much fft recently, I never used DCT. My favorite fft
function is fftconvolve.
https://github.com/scipy/scipy/blob/e758c482efb8829685dcf494bdf71eeca3dd77f0/scipy/signal/signaltools.py#L13
   doesn't seem to mind mixing numpy and scipy  (quick github search)
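
fftconvolve gives the same answer as direct convolution, and is typically
much faster for long inputs (a quick check):

```python
import numpy as np
from scipy.signal import fftconvolve

np.random.seed(0)
x = np.random.rand(2000)
h = np.random.rand(150)

direct = np.convolve(x, h)     # O(n*m) time-domain convolution
fast = fftconvolve(x, h)       # FFT-based, same 'full' output
assert np.allclose(direct, fast)
assert fast.shape == (2000 + 150 - 1,)
```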


It's sometimes useful to have simplified functions that are good enough
where we don't have to figure out all the extras that the docstring of the
fancy version is mentioning.

Josef




 Sturla




Re: [Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-27 Thread josef.pktd
On Mon, Oct 27, 2014 at 11:31 PM, josef.p...@gmail.com wrote:



 On Mon, Oct 27, 2014 at 10:50 PM, Sturla Molden sturla.mol...@gmail.com
 wrote:

 josef.p...@gmail.com wrote:

  For fft I use mostly scipy, IIRC.   (scipy's fft imports numpy's fft,
  partially?)

 No. SciPy uses the Fortran library FFTPACK (wrapped with f2py) and NumPy
 uses a smaller C library called fftpack_lite. Algorithmically they are
 similar, but fftpack_lite has fewer features (e.g. no DCT). scipy.fftpack
 does not import numpy.fft. Neither of these libraries are very fast, but
 usually they are fast enough for practical purposes. If we really need a
 kick-ass fast FFT we need to go to libraries like FFTW, Intel MKL or
 Apple's Accelerate Framework, or even use tools like CUDA or OpenCL to run
 the FFT on the GPU. But using such tools takes more coding (and reading
 API
 specifications) than the convenience of just using the FFTs already in
 NumPy or SciPy. So if you count your own time as well, it might not be the
 case that FFTW or MKL are the faster FFTs.



 Ok, I didn't remember correctly.

 I didn't use much fft recently, I never used DCT. My favorite fft
 function is fftconvolve.

 https://github.com/scipy/scipy/blob/e758c482efb8829685dcf494bdf71eeca3dd77f0/scipy/signal/signaltools.py#L13
doesn't seem to mind mixing numpy and scipy  (quick github search)


 It's sometimes useful to have simplified functions that are good enough
 where we don't have to figure out all the extras that the docstring of the
 fancy version is mentioning.


I take this back (even if it's true),
because IMO the defaults should work, and I have a tendency to pile on
options in my code that are intended for experts.

Josef





 Josef




 Sturla






Re: [Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-27 Thread Matthew Brett
Hi,

On Mon, Oct 27, 2014 at 8:07 PM, Sturla Molden sturla.mol...@gmail.com wrote:
 Sturla Molden sturla.mol...@gmail.com wrote:

 If we really need a
 kick-ass fast FFT we need to go to libraries like FFTW, Intel MKL or
 Apple's Accelerate Framework,

 I should perhaps also mention FFTS here, which claim to be faster than FFTW
 and has a BSD licence:

 http://anthonix.com/ffts/index.html

Nice.  And a funny New Zealand name too.

Is this an option for us?  Aren't we a little behind the performance
curve on FFT after we lost FFTW?

Matthew


[Numpy-discussion] FFTS for numpy's FFTs (was: Re: Choosing between NumPy and SciPy functions)

2014-10-27 Thread Nathaniel Smith
On 28 Oct 2014 04:07, Matthew Brett matthew.br...@gmail.com wrote:

 Hi,

 On Mon, Oct 27, 2014 at 8:07 PM, Sturla Molden sturla.mol...@gmail.com
wrote:
  Sturla Molden sturla.mol...@gmail.com wrote:
 
  If we really need a
  kick-ass fast FFT we need to go to libraries like FFTW, Intel MKL or
  Apple's Accelerate Framework,
 
  I should perhaps also mention FFTS here, which claim to be faster than
FFTW
  and has a BSD licence:
 
  http://anthonix.com/ffts/index.html

 Nice.  And a funny New Zealand name too.

 Is this an option for us?  Aren't we a little behind the performance
 curve on FFT after we lost FFTW?

It's definitely attractive. Some potential issues that might need dealing
with, based on a quick skim:

- seems to have a hard requirement for a processor supporting SSE, AVX, or
NEON. No fallback for old CPUs or other architectures. (I'm not even sure
whether it has x86-32 support.)

- no runtime CPU detection, e.g. SSE vs AVX appears to be a compile-time
decision

- not sure if it can handle non-power-of-two problems at all, or at all
efficiently. (FFTPACK isn't great here either but major regressions would
be bad.)

- not sure if it supports all the modes we care about (e.g. rfft)

This stuff is all probably solvable though, so if someone has a hankering
to make numpy (or scipy) fft dramatically faster they should get in touch
with the author and see what they think.
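
For reference, the rfft mode mentioned above exploits the conjugate
symmetry of real-input transforms, keeping only half the spectrum (sketch):

```python
import numpy as np

x = np.random.rand(8)
full = np.fft.fft(x)       # length 8, conjugate symmetric for real input
half = np.fft.rfft(x)      # only the n//2 + 1 non-negative frequencies

assert half.shape == (5,)
assert np.allclose(full[:5], half)
assert np.allclose(np.fft.irfft(half, 8), x)   # lossless round trip
```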

-n


Re: [Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-27 Thread Sturla Molden
josef.p...@gmail.com wrote:

https://github.com/scipy/scipy/blob/e758c482efb8829685dcf494bdf71eeca3dd77f0/scipy/signal/signaltools.py#L13
doesn't seem to mind mixing numpy and scipy  (quick github search)

I believe it is because NumPy's FFTs (beginning with 1.9.0) are
thread-safe. But FFTs from numpy.fft and scipy.fftpack should be rather
similar in performance. (Except if you use Enthought, in which case the
former is much faster.)

It seems from the code that fftconvolve does not use overlap-add or
overlap-save. I seem to remember that it did before, but I might be wrong.
Personally I prefer to use overlap-add instead of a very long FFT. 

There is also a scipy.fftpack.convolve module. I have not used it though.
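
A minimal overlap-add sketch (illustrative only; the function name and
block size are arbitrary choices, not scipy API):

```python
import numpy as np

def overlap_add_convolve(x, h, block=256):
    """Linear convolution of x with h using overlap-add on short FFTs."""
    m = len(h)
    nfft = block + m - 1                 # each block's linear conv fits here
    H = np.fft.rfft(h, nfft)             # transform the filter once
    y = np.zeros(len(x) + m - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        conv = np.fft.irfft(np.fft.rfft(seg, nfft) * H, nfft)
        y[start:start + len(seg) + m - 1] += conv[:len(seg) + m - 1]
    return y

np.random.seed(1)
x = np.random.rand(1000)
h = np.random.rand(37)
assert np.allclose(overlap_add_convolve(x, h), np.convolve(x, h))
```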


Sturla



Re: [Numpy-discussion] Choosing between NumPy and SciPy functions

2014-10-27 Thread Sturla Molden
Matthew Brett matthew.br...@gmail.com wrote:

 Is this an option for us?  Aren't we a little behind the performance
 curve on FFT after we lost FFTW?

It does not run on Windows because it uses POSIX to allocate executable
memory for tasklets, as I understand it.

By the way, why did we lose FFTW, apart from the GPL? One thing to mention
here is that MKL supports the FFTW APIs. If we can use MKL for linalg and
numpy.dot, I don't see why we cannot use it for FFT.

On Mac there is also vDSP in Accelerate framework which has an insanely
fast FFT (also claimed to be faster than FFTW). Since it is a system
library there should be no license problems.

There are clearly options if someone wants to work on it and maintain it.

Sturla
