Re: [Numpy-discussion] Cython-based OpenMP-accelerated quartic polynomial solver

2015-09-30 Thread Matthieu Brucher
After, I agree with you.

2015-09-30 18:14 GMT+01:00 Robert Kern <robert.k...@gmail.com>:
> On Wed, Sep 30, 2015 at 10:35 AM, Matthieu Brucher
> <matthieu.bruc...@gmail.com> wrote:
>>
>> Yes, obviously, the code has NR parts, so it can't be licensed as BSD
>> as it is...
>
> It's not obvious to me, especially after Juha's further clarifications.
>
> --
> Robert Kern
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>



-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Cython-based OpenMP-accelerated quartic polynomial solver

2015-09-30 Thread Matthieu Brucher
Yes, obviously, the code has NR parts, so it can't be licensed as BSD
as it is...

Matthieu

2015-09-30 2:37 GMT+01:00 Charles R Harris :
>
>
> On Tue, Sep 29, 2015 at 6:48 PM, Chris Barker - NOAA Federal
>  wrote:
>>
>> This sounds pretty cool -- and I've had a use case. So it would be
>> nice to get into Numpy.
>>
>> But: I doubt we'd want OpenMP dependence in Numpy itself.
>>
>> But maybe a pure Cython non-MP version?
>>
>> Are we putting Cython in Numpy now? I lost track.
>
>
> Yes, but we need to be careful of Numerical Recipes.
>
> 
>
> Chuck
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>



-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Mathematical functions in Numpy

2015-03-17 Thread Matthieu Brucher
Hi,

These functions are defined in the C standard library!
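As a quick illustration (a sketch, not part of the original message): NumPy's
sin ufunc ultimately calls into C-level math routines, which you can also
reach directly via ctypes. The library lookup below is platform-dependent and
assumed to succeed on a typical Linux/macOS system.

import ctypes
import ctypes.util
import numpy as np

# Locate and load the C math library (libm); this step varies per platform.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sin.restype = ctypes.c_double
libm.sin.argtypes = [ctypes.c_double]

x = 0.5
print(libm.sin(x))   # the C standard library's sin
print(np.sin(x))     # NumPy's ufunc; agrees to double precision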

Cheers,

Matthieu

2015-03-17 18:00 GMT+00:00 Shubhankar Mohapatra mshubhan...@yahoo.co.in:
 Hello all,
 I am an undergraduate and I am trying to do a project this time on numpy in
 GSoC. This project is about integrating the vector math libraries Sleef
 and Yeppp into numpy to make the mathematical functions faster. I have
 already studied the new library classes, but I am unable to find the sin and
 cos function definitions in the numpy source code. Can someone please help me
 find the functions in the source code so that I can implement the new
 library class into numpy.
 Thanking you,
 Shubhankar Mohapatra


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] The BLAS problem (was: Re: Wiki page for building numerical stuff on Windows)

2014-05-12 Thread Matthieu Brucher
Yes, they seem to be focused on HPC clusters with sometimes old-fashioned
rules (such as no shared libraries).
Also, they don't use a portable Makefile generator, not even autoconf;
this may also play a role in Windows support.
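(As an aside, a quick way to check which BLAS/LAPACK libraries a given NumPy
build was linked against, shared or not:)

import numpy as np

# Prints the build-time configuration, including the BLAS/LAPACK libraries
# and their locations; useful to verify that a shared OpenBLAS/MKL/BLIS
# is actually the one being used.
np.__config__.show()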


2014-05-12 12:52 GMT+01:00 Olivier Grisel olivier.gri...@ensta.org:
 BLIS looks interesting. Besides threading and runtime configuration,
 adding support for building it as a shared library would also be
 required to be usable by python packages that have several extension
 modules that link against a BLAS implementation.

 https://code.google.com/p/blis/wiki/FAQ#Can_I_build_BLIS_as_a_shared_library?

 
 Can I build BLIS as a shared library?

 The BLIS build system is not yet capable of outputting a shared
 library. Building and using shared libraries requires careful
 attention to various linkage and runtime details that, quite frankly,
 the BLIS developers would rather avoid if possible. If this feature is
 important to you, please speak up on the blis-devel mailing list.
 

 Also Windows support is still considered experimental according to the same 
 FAQ.

 --
 Olivier
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] The BLAS problem (was: Re: Wiki page for building numerical stuff on Windows)

2014-05-12 Thread Matthieu Brucher
There is the issue of installing the shared library at the proper
location as well IIRC?

2014-05-12 13:54 GMT+01:00 Carl Kleffner cmkleff...@gmail.com:
 Neither the numpy ATLAS build nor the MKL build on Windows makes use of
 shared libs. The latter due to licence restrictions.

 Carl


 2014-05-12 14:23 GMT+02:00 Matthieu Brucher matthieu.bruc...@gmail.com:

 Yes, they seem to be focused on HPC clusters with sometimes old-fashioned
 rules (such as no shared libraries).
 Also, they don't use a portable Makefile generator, not even autoconf;
 this may also play a role in Windows support.


 2014-05-12 12:52 GMT+01:00 Olivier Grisel olivier.gri...@ensta.org:
  BLIS looks interesting. Besides threading and runtime configuration,
  adding support for building it as a shared library would also be
  required to be usable by python packages that have several extension
  modules that link against a BLAS implementation.
 
 
  https://code.google.com/p/blis/wiki/FAQ#Can_I_build_BLIS_as_a_shared_library?
 
  
  Can I build BLIS as a shared library?
 
  The BLIS build system is not yet capable of outputting a shared
  library. Building and using shared libraries requires careful
  attention to various linkage and runtime details that, quite frankly,
  the BLIS developers would rather avoid if possible. If this feature is
  important to you, please speak up on the blis-devel mailing list.
  
 
  Also Windows support is still considered experimental according to the
  same FAQ.
 
  --
  Olivier
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion



 --
 Information System Engineer, Ph.D.
 Blog: http://matt.eifelle.com
 LinkedIn: http://www.linkedin.com/in/matthieubrucher
 Music band: http://liliejay.com/
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [Hdf-forum] ANN: HDF5 for Python 2.3.0

2014-04-22 Thread Matthieu Brucher
Good work!
Small question: do you now have the interface to set alignment?

Cheers,

Matthieu

2014-04-22 14:25 GMT+01:00 Andrew Collette andrew.colle...@gmail.com:
 Announcing HDF5 for Python (h5py) 2.3.0
 ===

 The h5py team is happy to announce the availability of h5py 2.3.0 (final).
 Thanks to everyone who provided beta feedback!

 What's h5py?
 

 The h5py package is a Pythonic interface to the HDF5 binary data format.

 It lets you store huge amounts of numerical data, and easily manipulate
 that data from NumPy. For example, you can slice into multi-terabyte
 datasets stored on disk, as if they were real NumPy arrays. Thousands of
 datasets can be stored in a single file, categorized and tagged however
 you want.
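(For illustration only, a minimal sketch of the usage described above; the
file name demo.h5 and the shapes are hypothetical:)

import h5py
import numpy as np

# Write a chunked dataset, then read back a small slice from disk.
with h5py.File("demo.h5", "w") as f:
    dset = f.create_dataset("data", shape=(10000, 100), dtype="f8",
                            chunks=(1000, 100))
    dset[:1000, :] = np.random.rand(1000, 100)

with h5py.File("demo.h5", "r") as f:
    block = f["data"][500:510, :5]  # only the needed chunks are read
    print(block.shape)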

 Changes
 ---

 This release introduces some important new features, including:

 * Support for arbitrary vlen data
 * Improved exception messages
 * Improved setuptools support
 * Multiple additions to the low-level API
 * Improved support for MPI features
 * Single-step build for HDF5 on Windows

 Major fixes since beta:

 * LZF compression crash on Win64
 * Unhelpful error message relating to chunked storage
 * Import error for IPython completer on certain platforms

 A complete description of changes is available online:

 http://docs.h5py.org/en/latest/whatsnew/2.3.html

 Where to get it
 ---

 Downloads, documentation, and more are available at the h5py website:

 http://www.h5py.org

 Acknowledgements
 

 The h5py package relies on third-party testing and contributions.  For the
 2.3 release, thanks especially to:

 * Martin Teichmann
 * Florian Rathgerber
 * Pierre de Buyl
 * Thomas Caswell
 * Andy Salnikov
 * Darren Dale
 * Robert David Grant
 * Toon Verstraelen
 * Many others who contributed bug reports and testing

 ___
 Hdf-forum is for HDF software users discussion.
 hdf-fo...@lists.hdfgroup.org
 http://mail.lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org



-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [Hdf-forum] ANN: HDF5 for Python 2.3.0

2014-04-22 Thread Matthieu Brucher
OK, I may end up doing it, as it can be quite interesting!

Cheers,

Matthieu

2014-04-22 15:45 GMT+01:00 Andrew Collette andrew.colle...@gmail.com:
 Hi,

 Good work!
 Small question : do you now have the interface to set alignment?

 Unfortunately this didn't make it in to 2.3.  Pull requests are
 welcome for this and other MPI features!

 Andrew
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Suggestion: Port Theano RNG implementation to NumPy

2014-02-18 Thread Matthieu Brucher
I won't dive into the discussion either, except to say that parallel
RNGs have to have specific characteristics, mainly the ability to initialize
many RNGs at the same time. I don't know how MRG31k3p handles this, as the
publication was not very clear on this aspect. I guess it falls in the same
category as the others from that time
(http://dl.acm.org/citation.cfm?id=1276928).
BTW, Random123 also works on GPUs and can use intrinsics to be even
faster than the usual congruential RNGs (see
http://www.thesalmons.org/john/random123/papers/random123sc11.pdf,
table 2).
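For reference, much later NumPy versions address exactly this initialization
problem with SeedSequence; a minimal sketch (this API postdates the thread):

import numpy as np
from numpy.random import SeedSequence, default_rng

# Spawn statistically independent child seeds from one root seed,
# one per parallel worker, instead of ad hoc seeding of many generators.
root = SeedSequence(12345)
streams = [default_rng(s) for s in root.spawn(4)]

print(np.array([rng.standard_normal(3) for rng in streams]))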

Matthieu

2014-02-18 16:00 GMT+00:00 Frédéric Bastien no...@nouiz.org:
 I won't go in the discussion of which RNG is better for some problems.
 I'll just tell why we pick this one.

 We needed a parallel RNG and we wanted to use the same RNG on CPU and
 on GPU. We discussed with a professor in our department who is well
 known in that field (Pierre L'Ecuyer) and he recommended this one for
 our problem. For the GPU, we also don't want an RNG that uses too many
 registers.

 Robert K. commented that this would need refactoring of numpy.random,
 and then it would be easy to have many RNGs.

 Fred

 On Tue, Feb 18, 2014 at 10:56 AM, Matthieu Brucher
 matthieu.bruc...@gmail.com wrote:
 Hi,

 The main issue with PRNGs and MT is that you don't know how to
 initialize all MT generators properly. A hash-based PRNG is much more
 efficient in that regard (see Random123 for a more detailed
 explanation).
 From what I heard, while MT is indeed the RNG of choice in the numerical
 world, in the parallel world the choice is not as obvious because of this
 pitfall.

 Cheers,

 Matthieu

 2014-02-18 15:50 GMT+00:00 Sturla Molden sturla.mol...@gmail.com:
 AFAIK, CMRG (MRG31k3p) is more equidistributed than Mersenne Twister, but
 the period is much shorter. However, MT is getting acceptance as the PRNG
 of choice for numerical work. And when we are doing stochastic simulations
 in Python, the speed of the PRNG is unlikely to be the bottleneck.

 Sturla


 Frédéric Bastien no...@nouiz.org wrote:
 Hi,

 In a ticket I made a comment and Charles suggested that I post it here:

 In Theano we have a C implementation of a faster RNG: MRG31k3p. It is
 faster on CPU, and we have a GPU implementation. It would be
 relatively easy to parallelize on the CPU with OpenMP.

 If someone is interested in porting this to numpy, there wouldn't be any
 dependency problem. No license problem either, as Theano has the same
 license as NumPy.

 The speed difference is significant, but I don't recall numbers.

 Fred

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



 --
 Information System Engineer, Ph.D.
 Blog: http://matt.eifelle.com
 LinkedIn: http://www.linkedin.com/in/matthieubrucher
 Music band: http://liliejay.com/
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] deprecate numpy.matrix

2014-02-10 Thread Matthieu Brucher
Yes, but these will be scipy.sparse matrices, nothing to do with numpy
(dense) matrices.
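A minimal sketch of the distinction (both types are assumed available in the
installed numpy/scipy):

import numpy as np
import scipy.sparse as sp

d = np.matrix([[1, 0], [0, 2]])      # numpy's dense matrix class,
                                     # the one discussed for deprecation
s = sp.csr_matrix([[1, 0], [0, 2]])  # scipy's sparse matrix, a separate type

print(type(d), type(s))
print(s.dot(s).toarray())            # sparse matrices keep their own API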

Cheers,

Matthieu

2014-02-10 Dinesh Vadhia dineshbvad...@hotmail.com:
 Scipy sparse uses matrices - I was under the impression that scipy sparse
 only works with matrices or have things moved on?



 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] MKL and OpenBLAS

2014-02-06 Thread Matthieu Brucher
According to the discussions on the ML, they switched from GPL to MPL
to enable the kind of distribution numpy/scipy is looking for. They
had some hesitations between BSD and MPL, but IIRC their official
stance is to allow inclusion inside BSD-licensed code.

Cheers,

Matthieu

2014-02-06 20:09 GMT+00:00 Charles R Harris charlesr.har...@gmail.com:



 On Thu, Feb 6, 2014 at 5:27 AM, Julian Taylor
 jtaylor.deb...@googlemail.com wrote:


 On Thu, Feb 6, 2014 at 1:11 PM, Thomas Unterthiner
 thomas_unterthi...@web.de wrote:

 On 2014-02-06 11:10, Sturla Molden wrote:
  BTW: The performance of OpenBLAS is far behind Eigen, MKL and ACML, but
  better than ATLAS and Accelerate.
 Hi there!

 Sorry for going a bit off-topic, but:  do you have any links to the
 benchmarks?  I googled around, but I haven't found anything. FWIW, on my
 own machines OpenBLAS is on par with MKL (on an i5 laptop and an older
 Xeon server) and actually slightly faster than ACML (on an FX8150) for
 my use cases (I mainly tested DGEMM/SGEMM, and a few LAPACK calls). So
 your claim is very surprising for me.

 Also, I'd be highly surprised if OpenBLAS were slower than Eigen,
 given that the developers themselves say that Eigen is nearly as fast
 as GotoBLAS[1], and that OpenBLAS was originally forked from GotoBLAS.


 I'm also a little sceptical about the benchmarks, e.g. according to the
 FAQ eigen does not seem to support AVX which is relatively important for
 blas level 3 performance.
 The lazy evaluation is probably eigens main selling point, which is
 something we cannot make use of in numpy currently.

 But nevertheless eigen could be an interesting alternative for our binary
 releases on windows. Having the stuff as headers makes it probably easier to
 build than ATLAS we are currently using.


 The Eigen license is MPL-2. That doesn't look to be incompatible with BSD,
 but it may complicate things.

 Q8: I want to distribute (outside my organization) executable programs or
 libraries that I have compiled from someone else's unchanged MPL-licensed
 source code, either standalone or part of a larger work. What do I have to
 do?

 You must inform the recipients where they can get the source for the
 executable program you are distributing (i.e., you must comply with Section
 3.2). You may also distribute any executables you create under a license of
 your choosing, as long as that license does not interfere with the
 recipients' rights to the source under the terms of the MPL.


 Chuck


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] runtime warning for where

2013-11-16 Thread Matthieu Brucher
Hi,

Don't forget that np.where is not lazy. First np.sin(x)/x is computed
for the full array, which is why you see the warning, and only then does
np.where select the proper final results.
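If the warning bothers you, two possible workarounds (a sketch):

import numpy as np

x = np.linspace(0., 10., 11)

# Option 1: silence the expected warning around the full-array evaluation.
with np.errstate(invalid="ignore", divide="ignore"):
    y = np.where(x == 0., 1., np.sin(x) / x)

# Option 2: divide only where it is safe, so no invalid value is produced.
y2 = np.ones_like(x)
mask = x != 0.
y2[mask] = np.sin(x[mask]) / x[mask]

assert np.allclose(y, y2)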

Cheers,

Matthieu

2013/11/16 David Pine djp...@gmail.com:
 The program at the bottom of this message returns the following runtime 
 warning:

 python test.py
 test.py:5: RuntimeWarning: invalid value encountered in divide
  return np.where(x==0., 1., np.sin(x)/x)

 The function works correctly returning
 x = np.array([  0.,   1.,   2.,   3.,   4.,   5.,   6.,   7.,   8.,   9.,  
 10.])
 y = np.array([ 1.,  0.84147098,  0.45464871,  0.04704   , -0.18920062,
   -0.19178485, -0.04656925,  0.09385523,  0.12366978,  0.04579094,
   -0.05440211])

 The runtime warning suggests that np.where evaluates np.sin(x)/x at all x, 
 including x=0, even though the np.where function returns the correct value of 
 1. when x is 0.  This seems odd to me.  Why issue a runtime warning? Nothing 
 is wrong.  Moreover, I don't recall numpy issuing such warnings in earlier 
 versions.

 import numpy as np
 import matplotlib.pyplot as plt

 def sinc(x):
return np.where(x==0., 1., np.sin(x)/x)

 x = np.linspace(0., 10., 11)
 y = sinc(x)

 plt.plot(x, y)
 plt.show()
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] np.dot and 'out' bug

2013-05-23 Thread Matthieu Brucher
Hi,

It's to be expected. You are overwriting one of your input vectors while it
is still being used.
So not a numpy bug ;)
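A sketch of the aliasing problem (whether the result is actually clobbered
depends on the BLAS path of your build, hence the hedged comment):

import numpy as np

a = np.array([[1., 2.], [3., 4.]])
b = a.copy()

expected = np.dot(a.copy(), b)  # computed from an untouched copy of a
np.dot(a, b, out=a)             # writes into a while a is still being read

print(np.array_equal(a, expected))  # may be False: rows of a get clobbered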

Matthieu


2013/5/23 Pierre Haessig pierre.haes...@crans.org

 Hi Nicolas,

 Le 23/05/2013 15:45, Nicolas Rougier a écrit :
  if I use either a or b as output, results are wrong (and nothing in the
 dot documentation prevents me from doing this):
 
  a = np.array([[1, 2], [3, 4]])
  b = np.array([[1, 2], [3, 4]])
  np.dot(a,b,out=a)
 
  - array([[ 6, 20],
[15, 46]])
 
 
  Can anyone confirm this behavior ? (tested using numpy 1.7.1)
 I just reproduced the same weird results with numpy 1.6.2

 best,
 Pierre


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] np.dot and 'out' bug

2013-05-23 Thread Matthieu Brucher
From my point of view, you should never use an output argument equal to an
input argument. It can impede a lot of optimizations.
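A sketch of the safe pattern: give dot a distinct, preallocated output
buffer, and rebind afterwards if in-place semantics were the goal.

import numpy as np

a = np.random.rand(500, 500)
out = np.empty_like(a)

np.dot(a, a, out=out)  # fine: the output buffer is distinct from the inputs
a = out                # rebind the name if "in-place" semantics were wanted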

Matthieu


2013/5/23 Nicolas Rougier nicolas.roug...@inria.fr


 
  Sure, that's clearly what's going on, but numpy shouldn't let you
  silently shoot yourself in the foot like that. Re-using input as
  output is a very common operation, and usually supported fine.
  Probably we should silently make a copy of any input(s) that overlap
  with the output? For high-dimensional dot, buffering temporary
  subspaces would still be more memory efficient than anything users
  could reasonably accomplish by hand.



 Also, from a user point of view it is difficult to sort out which
 functions currently allow 'out=a' or 'out=b', since nothing in the 'dot'
 documentation warned me about such a problem.


 Nicolas



 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] what do I get if I build with MKL?

2013-04-19 Thread Matthieu Brucher
Hi,

I think you get at least linear algebra (LAPACK) and dot. Basic
arithmetic will not benefit; for expm, logm... I don't know.

Matthieu


2013/4/19 Neal Becker ndbeck...@gmail.com

 What sorts of functions take advantage of MKL?

 Linear Algebra (equation solving)?

 Something like dot product?

 exp, log, trig of matrix?

 basic numpy arithmetic? (add matrixes)

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] what do I get if I build with MKL?

2013-04-19 Thread Matthieu Brucher
For the matrix multiplication or array dot, you use BLAS3 functions, as they
are more or less the same. For the rest, nothing inside Numpy uses BLAS or
LAPACK explicitly IIRC. You have to do the calls yourself.
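If you do want to make such calls yourself, scipy exposes the wrapped
routines; a minimal sketch (assuming scipy is installed):

import numpy as np
from scipy.linalg import blas, lapack

a = np.asfortranarray(np.random.rand(3, 3))
b = np.asfortranarray(np.random.rand(3, 3))

c = blas.dgemm(alpha=1.0, a=a, b=b)  # explicit BLAS3 call (dgemm)
lu, piv, info = lapack.dgetrf(a)     # explicit LAPACK call (LU factorization)

print(np.allclose(c, a.dot(b)), info)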


2013/4/19 Neal Becker ndbeck...@gmail.com

 KACVINSKY Tom wrote:

  You also get highly optimized BLAS routines, like dgemm and dgemv.

 And does numpy/scipy just then automatically use them?  When I do a matrix
 multiply, for example?

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] what do I get if I build with MKL?

2013-04-19 Thread Matthieu Brucher
The graph is a comparison of the dot calls; of course they are better with
MKL than with the default BLAS version ;)
For the rest, Numpy doesn't benefit from MKL; scipy may, if it calls LAPACK
functions wrapped by Numpy or Scipy (I don't remember which does the
wrapping).

Matthieu


2013/4/19 KACVINSKY Tom tom.kacvin...@3ds.com

  Looks like the *lapack_lite files have internal calls to dgemm.  I also
 found this:



 http://software.intel.com/en-us/articles/numpyscipy-with-intel-mkl



 So it looks like numpy/scipy performs better with MKL, regardless of how
 the MKL routines are called (directly, or via a numpy/scipy interface).



 Tom



 *From:* numpy-discussion-boun...@scipy.org [mailto:
 numpy-discussion-boun...@scipy.org] *On Behalf Of *Matthieu Brucher
 *Sent:* Friday, April 19, 2013 9:50 AM

 *To:* Discussion of Numerical Python
 *Subject:* Re: [Numpy-discussion] what do I get if I build with MKL?



 For the matrix multiplication or array dot, you use BLAS3 functions as
 they are more or less the same. For the rest, nothing inside Numpy uses
 BLAS or LAPACK explicitelly IIRC. You have to do the calls yourself.



 2013/4/19 Neal Becker ndbeck...@gmail.com

 KACVINSKY Tom wrote:

  You also get highly optimized BLAS routines, like dgemm and dgemv.

 And does numpy/scipy just then automatically use them?  When I do a matrix
 multiply, for example?


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion





 --
 Information System Engineer, Ph.D.
 Blog: http://matt.eifelle.com
 LinkedIn: http://www.linkedin.com/in/matthieubrucher
 Music band: http://liliejay.com/


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] OpenOpt Suite release 0.45

2013-03-16 Thread Matthieu Brucher
Hi,

Different objects can have the same hash, so the dict also compares keys to
find the actual correct object.
Usually, when you store something in a dict and later you can't find it
anymore, it is because the key's internal state changed and its hash is not
the same anymore.
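A small pure-Python sketch of both points (the Key classes are hypothetical,
for illustration only):

class Key:
    def __init__(self, ident):
        self.ident = ident
    def __hash__(self):
        return 42                 # every instance collides on purpose
    def __eq__(self, other):
        return self.ident == other.ident

d = {Key(1): "a", Key(2): "b"}
print(d[Key(1)], d[Key(2)])       # lookup still works: equality decides

class MutableKey(Key):
    def __hash__(self):
        return self.ident         # hash depends on mutable state

k = MutableKey(1)
d2 = {k: "a"}
k.ident = 2                       # internal state changes after insertion...
print(k in d2)                    # False: the key is now unfindable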

Matthieu


2013/3/16 Dmitrey tm...@ukr.net



 --- Original message ---
 From: Alan G Isaac alan.is...@gmail.com
 Date: 15 March 2013, 22:54:21

 On 3/15/2013 3:34 PM, Dmitrey wrote:
  the suspected bugs are not documented yet


 I'm going to guess that the state of the F_i changes
 when you use them as keys (i.e., when you call __le__).

 no, their state doesn't change for operations like __le__ . AFAIK
 searching a Python dict doesn't call __le__ on the object keys at all; it
 operates with the method .__hash__(), and the latter returns fixed integer
 numbers assigned to the objects earlier (at least in my case).


  It is very hard to imagine that this is a Python or NumPy bug.

 Cheers,
 Alan

 ___
 NumPy-Discussion mailing 
 listNumPy-Discussion@scipy.orghttp://mail.scipy.org/mailman/listinfo/numpy-discussion


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] OpenOpt Suite release 0.45

2013-03-16 Thread Matthieu Brucher
Even if they have different hashes, they can be stored in the same
underlying bucket before they are retrieved. Then, an actual comparison is
done to check whether the given key (i.e. the object instance, not its hash)
is the same as one of the stored keys.


2013/3/16 Dmitrey tm...@ukr.net



 --- Original message ---
 From: Matthieu Brucher matthieu.bruc...@gmail.com
 Date: 16 March 2013, 11:33:39

 Hi,

 Different objects can have the same hash, so it compares to find the
 actual correct object.
 Usually when you store something in a dict and later you can't find it
 anymore, it is that the internal state changed and that the hash is not the
 same anymore.


 my objects (oofuns) definitely have different __hash__() results - it's
 just integers 1,2,3 etc assigned to the oofuns (stored in oofun._id field)
 when they are created.


 D.



 Matthieu


 2013/3/16 Dmitrey tm...@ukr.net



 --- Original message ---
 From: Alan G Isaac alan.is...@gmail.com
 Date: 15 March 2013, 22:54:21

 On 3/15/2013 3:34 PM, Dmitrey wrote:
  the suspected bugs are not documented yet


 I'm going to guess that the state of the F_i changes
 when you use them as keys (i.e., when you call __le__).

 no, their state doesn't change for operations like __le__ . AFAIK
 searching a Python dict doesn't call __le__ on the object keys at all; it
 operates with the method .__hash__(), and the latter returns fixed integer
 numbers assigned to the objects earlier (at least in my case).


  It is very hard to imagine that this is a Python or NumPy bug.

 Cheers,
 Alan

 ___
 NumPy-Discussion mailing 
 listNumPy-Discussion@scipy.orghttp://mail.scipy.org/mailman/listinfo/numpy-discussion


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




 --
 Information System Engineer, Ph.D.
 Blog: http://matt.eifelle.com
 LinkedIn: http://www.linkedin.com/in/matthieubrucher
 Music band: http://liliejay.com/

 ___
 NumPy-Discussion mailing 
 listNumPy-Discussion@scipy.orghttp://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Casting Bug or a Feature?

2013-01-16 Thread Matthieu Brucher
Hi,

Actually, this behavior is already present in other languages, so I'm -1 on
additional verbosity.
Of course a += b is not the same as a = a + b. The first one modifies the
object a in place; the second one creates a new object and rebinds the name a
to it. The behavior IS consistent.
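A minimal sketch of the distinction (note that recent NumPy versions refuse
the silent in-place downcast, so e.g. a += 1.0 on an int array raises a
casting error rather than truncating):

import numpy as np

a = np.eye(2, dtype=int)
before = id(a)

a += 1       # in place: same object, dtype stays integer
assert id(a) == before and a.dtype.kind == 'i'

a = a + 1.0  # new object: result is upcast to float, then rebound to a
assert id(a) != before and a.dtype.kind == 'f'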

Cheers,

Matthieu


2013/1/17 Paul Anton Letnes paul.anton.let...@gmail.com

 On 17.01.2013 04:43, Patrick Marsh wrote:
  Thanks, everyone for chiming in.  Now that I know this behavior
  exists, I can explicitly prevent it in my code. However, it would be
  nice if a warning or something was generated to alert users about the
  inconsistency between var += ... and var = var + ...
 
 
  Patrick
 

 I agree wholeheartedly. I actually, for a long time, used to believe
 that python would translate
 a += b
 to
 a = a + b
 and was bitten several times by this bug. A warning (which can be
 silenced if you desperately want to) would be really nice, imho.

 Keep up the good work,
 Paul
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] www.numpy.org home page

2012-12-16 Thread Matthieu Brucher
 Does anyone have an informed opinion on the quality of these books:

 NumPy 1.5 Beginner's Guide, Ivan Idris,
 http://www.packtpub.com/numpy-1-5-using-real-world-examples-beginners-guide/book

 NumPy Cookbook, Ivan Idris,
 http://www.packtpub.com/numpy-for-python-cookbook/book


Packt is looking for reviewers for this (new) book. I will do one in the
next few weeks.

Cheers,


-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Scipy dot

2012-11-09 Thread Matthieu Brucher
Oh, about the differences: if there is something like cache blocking inside
ATLAS (which would make sense), the multiply-adds are not executed in exactly
the same order, and this would lead to some differences. You need to compare
the MSE over the sum of squared values to the machine precision.
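A sketch of that comparison, relative RMS error against the float32 machine
epsilon:

import numpy as np

a = np.random.rand(500, 500).astype(np.float32)
left = a.T.dot(a)
ref = np.dot(a.T.copy(), a)  # force the plain GEMM path via an explicit copy

# Relative error: RMS of the difference over the RMS of the reference.
rel = np.sqrt(((left - ref) ** 2).sum() / (ref ** 2).sum())
print(rel, np.finfo(np.float32).eps)  # rel should be zero or within a small
                                      # multiple of eps if only the summation
                                      # order differs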

Cheers,


2012/11/9 Matthieu Brucher matthieu.bruc...@gmail.com

 Hi,

 A.A slower than A.A' is not a surprise for me. The latter is far more
 cache friendly than the former. Everything follows cache lines, so it is
 faster than something that uses one element from each cache line. In
 fact, it is exactly what proves that the new version is correct.
 Good job (if all the tests were run and still pass ;) )

 Cheers,

 Matthieu


 2012/11/9 Nicolas SCHEFFER scheffer.nico...@gmail.com

 Ok: comparing apples to apples. I'm clueless on my observations and
 would need input from you guys.

 Using ATLAS 3.10, numpy with and without my changes, I'm getting these
 timings and comparisons.

 #
 #I. Generate matrices using regular dot:
 #
 big = np.array(np.random.randn(2000, 2000), 'f');
 np.savez('out', big=big, none=big.dot(big), both=big.T.dot(big.T),
 left=big.T.dot(big), right=big.dot(big.T))

 #
 #II. Timings with regular dot
 #
 In [3]: %timeit np.dot(big, big)
 10 loops, best of 3: 138 ms per loop

 In [4]: %timeit np.dot(big, big.T)
 10 loops, best of 3: 166 ms per loop

 In [5]: %timeit np.dot(big.T, big.T)
 10 loops, best of 3: 193 ms per loop

 In [6]: %timeit np.dot(big.T, big)
 10 loops, best of 3: 165 ms per loop

 #
 #III. I load these arrays and time again with the fast dot
 #
 In [21]: %timeit np.dot(big, big)
 10 loops, best of 3: 138 ms per loop

 In [22]: %timeit np.dot(big.T, big)
 10 loops, best of 3: 104 ms per loop

 In [23]: %timeit np.dot(big.T, big.T)
 10 loops, best of 3: 138 ms per loop

 In [24]: %timeit np.dot(big, big.T)
 10 loops, best of 3: 102 ms per loop

 1. A'.A': great!
 2. A.A' becomes faster than A.A !?!

 #
 #IV. MSE on differences
 #
 In [25]: np.sqrt(((arr['none'] - none)**2).sum())
 Out[25]: 0.0

 In [26]: np.sqrt(((arr['both'] - both)**2).sum())
 Out[26]: 0.0

 In [27]: np.sqrt(((arr['left'] - left)**2).sum())
 Out[27]: 0.015900515

 In [28]: np.sqrt(((arr['right'] - right)**2).sum())
 Out[28]: 0.015331409

 #
 # CCl
 #
 While the MSE are small, I'm wondering whether:
 - It's a bug: it should be exactly the same
 - It's a feature: BLAS is taking shortcuts when you have A.A'. The
 difference is not significant. Quick: PR that asap!

 I don't have enough expertise to answer that...

 Thanks much!

 -nicolas
 On Fri, Nov 9, 2012 at 2:13 PM, Nicolas SCHEFFER
 scheffer.nico...@gmail.com wrote:
  I too encourage users to use scipy.linalg for speed and robustness
  (hence calling this scipy.dot), but it just brings so much confusion!
  When using the scipy + numpy ecosystem, you'd almost want everything
  be done with scipy so that you get the best implementation in all
  cases: scipy.zeros(), scipy.array(), scipy.dot(), scipy.linalg.inv().
 
  Anyway this is indeed for another thread, the confusion we'd like to
  fix here is that users shouldn't have to understand the C/F contiguous
  concepts to get the maximum speed for np.dot()
 
  To summarize:
  - The python snippet I posted is still valid and can speed up your
  code if you can change all your dot() calls.
  - The change in dotblas.c is a bit more problematic because it's very
  core. I'm having issues right now to replicate the timings, I've got
  better timing for a.dot(a.T) than for a.dot(a). There might be a bug.
 
  It's a pain to test because I cannot do the test in a single python
 session.
  I'm going to try to integrate most of your suggestions, I cannot
  guarantee I'll have time to do them all though.
 
  -nicolas
  On Fri, Nov 9, 2012 at 8:56 AM, Nathaniel Smith n...@pobox.com wrote:
  On Fri, Nov 9, 2012 at 4:25 PM, Gael Varoquaux
  gael.varoqu...@normalesup.org wrote:
  On Fri, Nov 09, 2012 at 03:12:42PM +, Nathaniel Smith wrote:
  But what if someone compiles numpy against an optimized blas (mkl,
  say) and then compiles SciPy against the reference blas? What do you
  do then!? ;-)
 
  This could happen. But the converse happens very often. What happens
 is
  that users (eg on shared computing resource) ask for a scientific
 python
  environment. The administrator than installs the package starting from
  the most basic one, to the most advanced one, thus starting with numpy
  that can very well build without any external blas. When he gets to
 scipy
  he hits the problem that the build system does not detect properly the
  blas, and he solves that problem.
 
  Also, it used to be that on the major linux distributions, numpy
 would not
  be build with an optimize lapack because numpy was in the 'base' set
 of
  packages, but not lapack. On the contrary, scipy being in the
 'contrib'
  set, it could depend on lapack. I just checked, and this has been
 fixed
  in the major distributions (Fedora, Debian, Ubuntu).
 
  Now we can discuss

Re: [Numpy-discussion] Problems when using ACML with numpy

2012-05-12 Thread Matthieu Brucher
Does ACML now provide a CBLAS interface?

Matthieu

2012/5/12 Thomas Unterthiner thomas_unterthi...@web.de


 On 05/12/2012 03:27 PM, numpy-discussion-requ...@scipy.org wrote:
  12.05.2012 00:54, Thomas Unterthiner kirjoitti:
  [clip]
The process will have 100% CPU usage and will not show any activity
under strace. A gdb backtrace looks as follows:
  
(gdb) bt
#0  0x7fdcc000e524 in ?? ()
 from /usr/lib/python2.7/dist-packages/numpy/core/multiarray.so
  [clip]
 
  The backtrace looks like it does not use ACML. Does
 
from numpy.core._dotblas import dot
 
  work?
 

 Thanks for having a look at this.  The following was tried with the
 numpy that comes from the Ubuntu repo and symlinked ACML:


 $ python
 Python 2.7.3 (default, Apr 20 2012, 22:39:59)
 [GCC 4.6.3] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
 >>> from numpy.core._dotblas import dot
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 ImportError: /usr/lib/python2.7/dist-packages/numpy/core/_dotblas.so:
 undefined symbol: cblas_cdotc_sub
 >>>


 Following up:

 $ ldd /usr/lib/python2.7/dist-packages/numpy/core/_dotblas.so
 linux-vdso.so.1 => (0x7fff3de0)
 libblas.so.3gf => /usr/lib/libblas.so.3gf (0x7f10965f8000)
 libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f1096238000)
 librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f109603)
 libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3
 (0x7f1095d18000)
 libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f1095a18000)
 /lib64/ld-linux-x86-64.so.2 (0x7f1098a88000)
 libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
 (0x7f10957f8000)
 libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0
 (0x7f10955c)
 $ ls -lh /usr/lib/libblas.so.3gf
 lrwxrwxrwx 1 root root 32 May 11 22:27 /usr/lib/libblas.so.3gf ->
 /etc/alternatives/libblas.so.3gf
 $ ls -lh  /etc/alternatives/libblas.so.3gf
 lrwxrwxrwx 1 root root 45 May 11 22:36 /etc/alternatives/libblas.so.3gf
 -> /opt/acml5.1.0/gfortran64_fma4/lib/libacml.so



 Cheers
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] C++ Example

2012-03-06 Thread Matthieu Brucher
Using either for
 numerical programming is usually a mistake.


This is your opinion, but there is a lot of numerical code now in C++, and
it is far more maintainable than in Fortran. And it is faster for
exactly this reason.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] C++ Example

2012-03-06 Thread Matthieu Brucher
2012/3/6 Sturla Molden stu...@molden.no

 On 06.03.2012 21:45, Matthieu Brucher wrote:

  This is your opinion, but there are a lot of numerical code now in C++
  and they are far more maintainable than in Fortran. And they are faster
  for exactly this reason.

 That is mostly because C++ makes tasks that are non-numerical easier.


I talk about numerical code, and you talk about non-numerical code. I stand
by my words. It is efficient and more robust than Fortran for everything,
including numerical code.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Matthieu Brucher
 C++11 has this option:

 for (auto item : container) {
 // iterate over the container object,
 // get a reference to each item
 //
 // container can be an STL class or
 // A C-style array with known size.
 }

 Which does this:

 for item in container:
 pass


It is even better than the macro-based approach, because the compiler knows
everything is constant (start and end), so it can optimize better.


  Using C++ templates to generate ufunc loops is an obvious application,
  but again, in the simple examples

 Template metaprogramming?

 Don't even think about it. It is brain dead to try to outsmart the
 compiler.


It is really easy to outsmart the compiler. Really. I use metaprogramming
for loop creation to optimize cache behavior, communication in parallel
environments, and there is no way the compiler would have done things as
efficiently (and there is a lot of leeway to enhance my code).

-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Matthieu Brucher
 Would it be fair to say then, that you are expecting the discussion
 about C++ will mainly arise after the Mark has written the code?   I
 can see that it will be easier to specific at that point, but there
 must be a serious risk that it will be too late to seriously consider
 an alternative approach.


 We will need to see examples of what Mark is talking about and clarify
 some of the compiler issues.   Certainly there is some risk that once code
 is written that it will be tempting to just use it.   Other approaches are
 certainly worth exploring in the mean-time, but C++ has some strong
 arguments for it.


Compilers for C++98 are now stable enough (except on BlueGene; see the
Boost distribution with xlc++).
C++ helps a lot to enhance robustness.


 From my perspective having a standalone core NumPy is still a goal.   The
 primary advantages of having a NumPy library (call it NumLib for the sake
 of argument) are

 1) Ability for projects like PyPy, IronPython, and Jython to use it more
 easily
 2) Ability for Ruby, Perl, Node.JS, and other new languages to use the
 code for their technical computing projects.
 3) increasing the number of users who can help make it more solid
 4) being able to build the user-base (and corresponding performance with
 eye-balls from Intel, NVidia, AMD, Microsoft, Google, etc. looking at the
 code).

 The disadvantages I can think of:
  1) More users also means we might risk lowest-common-denominator
 problems --- i.e. trying to be too much to too many may make it not useful
 for anyone. Also, more users means more people with opinions that might be
 difficult to reconcile.
 2) The work of doing the re-write is not small:  probably at least 6
 person-months
 3) Not being able to rely on Python objects (dictionaries, lists, and
 tuples are currently used in the code-base quite a bit --- though the
 re-factor did show some examples of how to remove this usage).
 4) Handling of Object arrays requires some re-design.

 I'm sure there are other factors that could be added to both lists.

 -Travis



 Thanks a lot for the reply,

 Matthew
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Matthieu Brucher
2012/2/19 Matthew Brett matthew.br...@gmail.com

 Hi,

 On Sat, Feb 18, 2012 at 8:38 PM, Travis Oliphant tra...@continuum.io
 wrote:

  We will need to see examples of what Mark is talking about and clarify
 some
  of the compiler issues.   Certainly there is some risk that once code is
  written that it will be tempting to just use it.   Other approaches are
  certainly worth exploring in the mean-time, but C++ has some strong
  arguments for it.

 The worry as I understand it is that a C++ rewrite might make the
 numpy core effectively a read-only project for anyone but Mark.  Do
 you have any feeling for whether that is likely?


Some of us are C developers, others are C++ developers. It will depend on
the background of each of us.


 How would numpylib compare to libraries like eigen?  How likely do you
 think it would be that unrelated projects would use numpylib rather
 than eigen or other numerical libraries?  Do you think the choice of
 C++ rather than C will influence whether other projects will take it
 up?


I guess that the C++ port may open a door to changing the back-end, and
perhaps using Eigen or ArBB. As those guys (ArBB) wanted to provide a
Python interface compatible with Numpy to their VM, it may be interesting
to be able to change back-ends (although ArBB is limited to one platform
and two OSes).

-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Matthieu Brucher
2012/2/19 Nathaniel Smith n...@pobox.com

 On Sun, Feb 19, 2012 at 9:16 AM, David Cournapeau courn...@gmail.com
 wrote:
  On Sun, Feb 19, 2012 at 8:08 AM, Mark Wiebe mwwi...@gmail.com wrote:
  Is there a specific
  target platform/compiler combination you're thinking of where we can do
  tests on this? I don't believe the compile times are as bad as many
 people
  suspect, can you give some simple examples of things we might do in
 NumPy
  you expect to compile slower in C++ vs C?
 
  Switching from gcc to g++ on the same codebase should not change much
  compilation times. We should test, but that's not what worries me.
  What worries me is when we start using C++ specific code, STL and co.
  Today, scipy.sparse.sparsetools takes half of the build time  of the
  whole scipy, and it does not even use fancy features. It also takes Gb
  of ram when building in parallel.

 I like C++ but it definitely does have issues with compilation times.

 IIRC the main problem is very simple: STL and friends (e.g. Boost) are
 huge libraries, and because they use templates, the entire source code
 is in the header files. That means that as soon as you #include a few
 standard C++ headers, your innocent little source file has suddenly
 become hundreds of thousands of lines long, and it just takes the
 compiler a while to churn through megabytes of source code, no matter
 what it is. (Effectively you recompile some significant fraction of
 STL from scratch on every file, and then throw it away.)


In fact Boost tries to be clean about this. Up until a few minor GCC
releases ago, their headers were a mess: when you included something, a lot
of additional code was pulled in, and the compile time exploded. But this is
no longer the case. If we restrict the core to a few includes, even with
templates, it should not take long to compile.

-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Matthieu Brucher
2012/2/19 Sturla Molden stu...@molden.no

 Den 19.02.2012 10:28, skrev Mark Wiebe:
 
  Particular styles of using templates can cause this, yes. To properly
  do this kind of advanced C++ library work, it's important to think
  about the big-O notation behavior of your template instantiations, not
  just the big-O notation of run-time. C++ templates have a
  turing-complete language (which is said to be quite similar to
  haskell, but spelled vastly different) running at compile time in
  them. This is what gives template meta-programming in C++ great power,
  but since templates weren't designed for this style of programming
  originally, template meta-programming is not very easy.
 
 

 The problem with metaprogramming is that we are doing manually the work
 that belongs to the compiler. Blitz++ was supposed to be a library that
 thought like a compiler. But then compilers just got better. Today, it
 is no longer possible for a numerical library programmer to outsmart an
 optimizing C++ compiler. All metaprogramming can do today is produce
 error messages noone can understand. And the resulting code will often
 be slower because the compiler has less opportunities to do its work.


As I've said, the compiler is pretty much stupid. It cannot do what
Blitz++ did, or what Eigen is currently doing, mainly because of the
difference in the base languages (C vs C++).

-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-20 Thread Matthieu Brucher
2012/2/20 Daniele Nicolodi dani...@grinta.net

 On 18/02/12 04:54, Sturla Molden wrote:
  This is not true. C++ can be much easier, particularly for those who
  already know Python. The problem: C++ textbooks teach C++ as a subset
  of C. Writing C in C++ just adds the complexity of C++ on top of C,
  for no good reason. I can write FORTRAN in any language, it does not
  mean it is a good idea. We would have to start by teaching people to
  write good C++.  E.g., always use the STL like Python built-in types
  if possible. Dynamic memory should be std::vector, not new or malloc.
  Pointers should be replaced with references. We would have to write a
  C++ programming tutorial that is based on Python knowledge instead of
  C knowledge.

 Hello Sturla,

 unrelated to the numpy rewrite debate, can you please suggest some
 resources you think can be used to learn how to program C++ the proper
 way?


One of the best books may be Accelerated C++, or Stroustrup's new book
(not The C++ Programming Language).

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] strange conversion integer to float

2011-12-17 Thread Matthieu Brucher
Hi,

If I remember correctly, Python's float is a double-precision float. The
precision is higher in doubles (float64) than in single-precision floats
(float32), and 20091231 cannot be represented exactly in a 32-bit float.
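A quick check (a sketch):

import numpy as np

x = np.float32(20091231)
print(int(x))            # 20091232: the nearest representable float32

# float32 has a 24-bit significand, so integers are exact only up to 2**24.
print(2**24)             # 16777216 < 20091231
print(np.spacing(x))     # 2.0: adjacent float32 values are 2 apart here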

Matthieu

2011/12/17 Alex van Houten sparrow2...@yahoo.com

 Try this:
 $ python
 Python 2.7.1 (r271:86832, Apr 12 2011, 16:15:16)
 [GCC 4.6.0 20110331 (Red Hat 4.6.0-2)] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import numpy as np
 >>> np.__version__
 '1.5.1'
 >>> time=[]
 >>> time.append(20091231)
 >>> time_array=np.array(time,'f')
 >>> time_array
 array([ 20091232.], dtype=float32)
 20091231 -> 20091232. Why?
 Note:
 >>> float(20091231)
 20091231.0
 Thanks,
 Alex.

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is distributing GPL + exception dll in the windows installer ok

2011-10-30 Thread Matthieu Brucher
Hi David,

Is every GPL part GCC-related? If yes, GCC has a licence exception that
allows redistributing its runtime in any program (meaning the program's
licence is not relevant).

Cheers,

Matthieu

2011/10/30 David Cournapeau courn...@gmail.com

 Hi,

 While testing the mingw gcc 3.x - 4.x migration, I realized that some
 technical requirements in gcc 4.x have potential license implications.
 In short, it is more difficult now than before to statically link
 gcc-related runtimes into numpy/scipy. I think using the DLL is safer
 and better, but it means the windows installers will contain GPL code.
 My understanding is that this is OK because the code in question is
 GPL + exception, meaning the usual GPL requirements only apply to
 those runtimes, and that's ok ?

 cheers,

 David
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy - MKL - build error

2011-09-14 Thread Matthieu Brucher
It seems you are missing libiomp5.so, which makes sense if you are using the
whole Composer package: the needed libs are split across two different
locations, and unfortunately Numpy could not cope with this the last time I
checked (I think it was one of the reasons David Cournapeau created numscons
and bento).

Matthieu

2011/9/14 Igor Ying igor.y...@yahoo.com

 Yes, they all are present in that directory.  Also, I tried with root as
 login.

 -r-xr-xr-x 1 root root  26342559 Aug  9 22:19 libmkl_avx.so
 -r--r--r-- 1 root root   1190224 Aug  9 22:26 libmkl_blacs_ilp64.a
 -r--r--r-- 1 root root   1191496 Aug  9 22:25 libmkl_blacs_intelmpi_ilp64.a
 -r-xr-xr-x 1 root root497597 Aug  9 22:25
 libmkl_blacs_intelmpi_ilp64.so
 -r--r--r-- 1 root root676206 Aug  9 22:21 libmkl_blacs_intelmpi_lp64.a
 -r-xr-xr-x 1 root root267010 Aug  9 22:21 libmkl_blacs_intelmpi_lp64.so
 -r--r--r-- 1 root root674926 Aug  9 22:22 libmkl_blacs_lp64.a
 -r--r--r-- 1 root root   1218290 Aug  9 22:28 libmkl_blacs_openmpi_ilp64.a
 -r--r--r-- 1 root root703042 Aug  9 22:23 libmkl_blacs_openmpi_lp64.a
 -r--r--r-- 1 root root   1191152 Aug  9 22:29 libmkl_blacs_sgimpt_ilp64.a
 -r--r--r-- 1 root root675854 Aug  9 22:23 libmkl_blacs_sgimpt_lp64.a
 -r--r--r-- 1 root root425802 Aug  9 20:44 libmkl_blas95_ilp64.a
 -r--r--r-- 1 root root421410 Aug  9 20:44 libmkl_blas95_lp64.a
 -r--r--r-- 1 root root144354 Aug  9 22:29 libmkl_cdft_core.a
 -r-xr-xr-x 1 root root115588 Aug  9 22:29 libmkl_cdft_core.so
 -r--r--r-- 1 root root 231886824 Aug  9 22:07 libmkl_core.a
 -r-xr-xr-x 1 root root  16730033 Aug  9 22:18 libmkl_core.so
 -r-xr-xr-x 1 root root  21474555 Aug  9 22:18 libmkl_def.so
 -r--r--r-- 1 root root  14974574 Aug  9 22:06 libmkl_gf_ilp64.a
 -r-xr-xr-x 1 root root   7008828 Aug  9 22:48 libmkl_gf_ilp64.so
 -r--r--r-- 1 root root  15140998 Aug  9 22:06 libmkl_gf_lp64.a
 -r-xr-xr-x 1 root root   7055304 Aug  9 22:48 libmkl_gf_lp64.so
 -r--r--r-- 1 root root  16435120 Aug  9 22:07 libmkl_gnu_thread.a
 -r-xr-xr-x 1 root root   9816940 Aug  9 22:49 libmkl_gnu_thread.so
 -r--r--r-- 1 root root  14968130 Aug  9 22:06 libmkl_intel_ilp64.a
 -r-xr-xr-x 1 root root   7008368 Aug  9 22:48 libmkl_intel_ilp64.so
 -r--r--r-- 1 root root  15134406 Aug  9 22:06 libmkl_intel_lp64.a
 -r-xr-xr-x 1 root root   7053588 Aug  9 22:48 libmkl_intel_lp64.so
 -r--r--r-- 1 root root   2472940 Aug  9 22:07 libmkl_intel_sp2dp.a
 -r-xr-xr-x 1 root root   1191479 Aug  9 22:20 libmkl_intel_sp2dp.so
 -r--r--r-- 1 root root  27642508 Aug  9 22:07 libmkl_intel_thread.a
 -r-xr-xr-x 1 root root  17516608 Aug  9 22:49 libmkl_intel_thread.so
 -r--r--r-- 1 root root   5350948 Aug  9 20:44 libmkl_lapack95_ilp64.a
 -r--r--r-- 1 root root   5413476 Aug  9 20:44 libmkl_lapack95_lp64.a
 -r-xr-xr-x 1 root root  29543829 Aug  9 22:19 libmkl_mc3.so
 -r-xr-xr-x 1 root root  25428037 Aug  9 22:19 libmkl_mc.so
 -r-xr-xr-x 1 root root  22888659 Aug  9 22:18 libmkl_p4n.so
 -r--r--r-- 1 root root  19232716 Aug  9 22:07 libmkl_pgi_thread.a
 -r-xr-xr-x 1 root root  12243062 Aug  9 22:49 libmkl_pgi_thread.so
 -r-xr-xr-x 1 root root   4984870 Aug  9 22:49 libmkl_rt.so
 -r--r--r-- 1 root root  10367758 Aug  9 22:49 libmkl_scalapack_ilp64.a
 -r-xr-xr-x 1 root root   6574928 Aug  9 22:50 libmkl_scalapack_ilp64.so
 -r--r--r-- 1 root root  10292432 Aug  9 22:49 libmkl_scalapack_lp64.a
 -r-xr-xr-x 1 root root   6452627 Aug  9 22:50 libmkl_scalapack_lp64.so
 -r--r--r-- 1 root root   9958444 Aug  9 22:07 libmkl_sequential.a
 -r-xr-xr-x 1 root root   5926347 Aug  9 22:48 libmkl_sequential.so
 -r--r--r-- 1 root root  1048 Aug  9 16:50 libmkl_solver_ilp64.a
 -r--r--r-- 1 root root  1048 Aug  9 16:50
 libmkl_solver_ilp64_sequential.a
 -r--r--r-- 1 root root  1048 Aug  9 16:50 libmkl_solver_lp64.a
 -r--r--r-- 1 root root  1048 Aug  9 16:50
 libmkl_solver_lp64_sequential.a
 -r-xr-xr-x 1 root root   6711968 Aug  9 22:48 libmkl_vml_avx.so
 -r-xr-xr-x 1 root root   2795928 Aug  9 22:47 libmkl_vml_def.so
 -r-xr-xr-x 1 root root   5476786 Aug  9 22:48 libmkl_vml_mc2.so
 -r-xr-xr-x 1 root root   5778052 Aug  9 22:48 libmkl_vml_mc3.so
 -r-xr-xr-x 1 root root   5382511 Aug  9 22:48 libmkl_vml_mc.so
 -r-xr-xr-x 1 root root   4235841 Aug  9 22:48 libmkl_vml_p4n.so
 drwxr-xr-x 3 root root  4096 Aug 18 11:43 locale

 Date: Tue, 13 Sep 2011 09:58:27 -0400
 From: Olivier Delalleau sh...@keba.be
 Subject: Re: [Numpy-discussion] Numpy - MKL - build error


 Sorry if it sounds like a stupid question, but are the files listed in the
 error message present in that directory?
 If yes, maybe try running the command with sudo, just in case it would be
 some weird permission issue.

 -=- Olivier

 

Re: [Numpy-discussion] Strange behavior of operator *=

2011-04-05 Thread Matthieu Brucher
Indeed, it is not. In the first case, you keep your original object and each
(integer) element is multiplied by 1.0, the result being cast back to the
array's integer dtype. In the second example, you are creating a temporary
object a*x, and as x is a float and a an array of integers, the result will
be an array of floats, which is then bound to the name a.
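
A quick illustration (a hedged sketch, not from the original exchange) of the
two behaviours, plus the explicit conversion to use if a float result is
wanted in place:

    import numpy as np

    a = np.eye(2, dtype='int')
    a *= 1.0              # in-place: the float result is cast back to int64
                          # (recent NumPy versions refuse this cast and raise)
    print(a.dtype)        # int64

    b = np.eye(2, dtype='int')
    b = b * 1.0           # temporary float array, rebound to the name b
    print(b.dtype)        # float64

    c = np.eye(2, dtype='int').astype(float)  # convert first if floats are wanted
    c *= 1.0              # now stays float64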

Matthieu

2011/4/5 François Steinmetz francois.steinm...@gmail.com

 Hi all,

 I have encountered the following strangeness :
  from numpy import *
  __version__
 '1.5.1'
  a = eye(2, dtype='int')
  a *= 1.0
  a ; a.dtype
 array([[1, 0],
[0, 1]])
 dtype('int64')
  a = a * 1.0
  a ; a.dtype
 array([[ 1.,  0.],
[ 0.,  1.]])
 dtype('float64')

 So, in this case a *= x is not equivalent to a = a*x ?

 François

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] bug with numpy 2 ** N

2011-03-23 Thread Matthieu Brucher
Hi,

I don't think this is a bug. You are playing with C integers, not Python
integers, and the former have a fixed width. It's a common feature of all
processors (even DSPs).
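
A short illustration (a sketch, assuming the default 64-bit integer dtype) of
where the fixed-width arithmetic breaks down, and the object-dtype escape
hatch that falls back to Python's arbitrary-precision integers:

    import numpy as np

    print(2 ** 64)                          # Python int: exact
    print(2 ** np.array(64))                # C int64 arithmetic: overflows
    print(2 ** np.array(64, dtype=object))  # object dtype: exact again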

Matthieu

2011/3/23 Dmitrey tm...@ukr.net

   2**64
 18446744073709551616L
  2**array(64)
 -9223372036854775808
  2**100
 1267650600228229401496703205376L
  2**array(100)
 -9223372036854775808

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Nonzero behaving strangely?

2011-03-17 Thread Matthieu Brucher
Hi,

Did you try np.where(res[:,4]==2) ?
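
If the equality test fails because the stored values are only approximately 2
(a classic float-precision pitfall), a tolerance-based test is safer; a
hedged sketch with a made-up array:

    import numpy as np

    res = np.array([[33.35, 49.46, 44.28, 1.0, 2.0],
                    [32.84, 50.25, 43.92, 1.0, 0.0]])
    rows = np.where(np.abs(res[:, 4] - 2.0) < 1e-8)[0]
    print(rows)  # -> [0]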

Matthieu

2011/3/17 santhu kumar mesan...@gmail.com

 Hello all,

 I am new to Numpy. I used to program before in matlab and am getting used
 to Numpy.

 I have a array like:
 res
 array([[ 33.35053669,  49.4615004 ,  44.27631299,   1.,   2.
 ],
[ 32.84263059,  50.24752036,  43.92291659,   1.,   0.
 ],
[ 33.68999668,  48.90554673,  43.51746687,   1.,   0.
 ],
[ 34.11564931,  49.77487763,  44.83843076,   1.,   0.
 ],
[ 32.4641859 ,  48.65469145,  45.09300791,   1.,   3.
 ],
[ 32.15428526,  49.26922262,  45.92959026,   1.,   0.
 ],
[ 31.23860825,  48.21824628,  44.30816331,   1.,   0.
 ],
[ 30.71171138,  47.45600573,  44.9282456 ,   1.,   0.
 ],
[ 30.53843426,  49.07713258,  44.20899822,   1.,   0.
 ],
[ 31.54722284,  47.61953925,  42.95235178,   1.,   0.
 ],
[ 32.44334635,  48.10500653,  42.51103537,   1.,   0.
 ],
[ 31.77269609,  46.53603145,  43.06468455,   1.,   0.
 ],
[ 30.1820843 ,  47.80819604,  41.77667819,   1.,   0.
 ],
[ 30.78652668,  46.82907769,  40.38586451,   1.,   0.
 ],
[ 30.05963091,  46.84268609,  39.54583693,   1.,   0.
 ],
[ 31.75239177,  47.22768463,  40.00717713,   1.,   0.
 ],
[ 30.94617127,  45.76986265,  40.68226643,   1.,   0.
 ],
[ 33.20069679,  47.42127403,  45.66738249,   1.,   0.
 ],
[ 34.39608116,  47.25481126,  45.4438599 ,   1.,   0.
 ]])

 The array is 19X5.
 When I do:
 nid = (res[:,4]==2).nonzero()
 nid tuple turns out to be empty. But the very first row satisfies the
 criteria.

 nid = (res[:,4]==3).nonzero(), works out and finds the 5th row.

 Am i doing something wrong?
  I basically want to find the rows whose fifth column (index 4 in numpy's
  0-based indexing) is 2.

 Any suggestions?
 Thanks
 Santhosh

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Fortran was dead ... [was Re: rewriting NumPy code in C or C++ or similar]

2011-03-15 Thread Matthieu Brucher

 C++ templates make binaries almost impossible to debug.


Never had an issue with this, and all my number-crunching code is done
through metaprogramming (with vectorization, cache blocking...). So I have a
lot of complex template structures, and debugging them is easy.
Then, if someone doesn't want to do fancy stuff and uses templates a la
Fortran, it's _very_ easy to debug.

BTW, instead of Blitz++, you have Vigra and Eigen as the newer
equivalent libraries, and you may want to keep an eye on Intel's ArBB.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Fortran was dead ... [was Re: rewriting NumPy code in C or C++ or similar]

2011-03-14 Thread Matthieu Brucher
Hi,

Intel Fortran is an excellent Fortran compiler. Why is Fortran still better
than C and C++?
- some rules are different: arrays passed to functions are ALWAYS
assumed to be independent (non-aliasing) in Fortran, whereas in C you
have to add a restrict keyword
- due to this, Fortran is a language where the compiler can do
more work (vectorization, autoparallelization...)
- Fortran 95 has excellent array support, which is not currently
available in C/C++ (perhaps with ArBB?)

Nevertheless, when you know C++ correctly and you want to do something
really efficient, you don't need Fortran: you can be as efficient in C++,
and you can do fancy stuff (I/O, network...). Classes and templates are also
better supported in C++.

Matthieu

2011/3/14 Sebastian Haase seb.ha...@gmail.com

 On Mon, Mar 14, 2011 at 9:24 PM, Ondrej Certik ond...@certik.cz wrote:
  Hi Sturla,
 
  On Tue, Mar 8, 2011 at 6:25 AM, Sturla Molden stu...@molden.no wrote:
  Den 08.03.2011 05:05, skrev Dan Halbert:
  Thanks, that's a good suggestion. I have not written Fortran since
 1971,
  but it's come a long way. I was a little worried about the row-major vs
  column-major issue, but perhaps that can be handled just by remembering
  to reverse the subscript order between C and Fortran.
 
  In practice this is not a problem. Most numerical libraries for C assume
  Fortran-ordering, even OpenGL assumes Fortran-ordering. People program
  MEX files for Matlab in C all the time. Fortran-ordering is assumed in
  MEX files too.
 
  In ANSI C, array bounds must be known at compile time, so a Fortran
  routine with the interface
 
  subroutine foobar( lda, A )
  integer lda
  double precision A(lda,*)
  end subroutine
 
  will usually be written like
 
  void foobar( int lda, double A[]);
 
  in C, ignoring different calling convention for lda.
 
  Now we would index A(row,col) in Fortran and A[row + col*lda] in C. Is
  that too difficult to remember?
 
  In ANSI C the issue actually only arises with small array of arrays
  having static shape, or convoluted contructs like pointer to an array
  of pointers to arrays. Just avoid those and stay with 1D arrays in C --
  do the 1D to 2D mapping in you mind.
 
  In C99 arrays are allowed to have dynamic size, which mean we can do
 
 void foobar( int lda, double *pA )
 {
typedef double row_t [lda];
row_t *A = (row_t*)pA;
 
  Here we must index A[k][i] to match A(i,k) in Fortran. I still have not
  seen anyone use C99 like this, so I think it is merely theoretical.
 
  Chances are if you know how to do this with C99, you also know how to
  get the subscripts right. If you are afraid to forget to reverse the
  subscript order between C and Fortran, it just tells me you don't
  really know what you are doing when using C, and should probably use
  something else.
 
  Why not Cython? It has native support for NumPy arrays.
 
  Personally I prefer Fortran 95, but that is merely a matter of taste.
 
  +1 to all that you wrote about Fortran. I am pretty much switching to
  it from C/C++ for all my numerical work, and then I use Cython to call
  it from Python, as well as cmake for the build system.
 
  Ondrej


 Hi,
 this is quite amazing...
 Sturla has been writing so much about Fortran recently, and Ondrej now
 says he has done the move from C/C++ to Fortran -- I thought Fortran
 was dead ... !?   ;-)
 What am I missing here ?
 Apparently (from what I was able to read-up so far) there is a BIG
 difference between FORTRAN 77 and F95.
 But isn't gcc or gfortran still only supporting F77 ?
 How about IntelCC's Fortran ?  Is that superior?
 Do you guys have any info / blogs / docs where one could get an
 up-to-date picture?
 Like:
 1. How about debugging - does gdb work or is there somthing better ?
 2. How is the move of the F77 community to F95 in general ?   How many
 people / projects are switching.
 3. Or is the move rather going like Fortran 77 - C - Python -
 Fortran 95   !?  ;-)

 Thanks,
 Sebastian Haase
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Small typo in fromregex

2011-02-28 Thread Matthieu Brucher
Hi,

I'm sorry I didn't file a bug, I have some troubles getting my old trac
account back :|

In lib/npyio.py, there is a mistake line 1029.
Instead on fh.close(), it should have been file.close(). If fromregex opens
the file, it will crash because the name of the file is not correct.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] OT: performance in C extension; OpenMP, or SSE ?

2011-02-17 Thread Matthieu Brucher
 Do you think, one could get even better ?
 And, where does the 7% slow-down (for single thread) come from ?
 Is it possible to have the OpenMP option in a code, without _any_
 penalty for 1 core machines ?


There will always be a penalty for parallel code that runs on one core. You
have at least the overhead for splitting the data.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] OT: performance in C extension; OpenMP, or SSE ?

2011-02-17 Thread Matthieu Brucher
 Then, where does the overhead come from ? --
 The call toomp_set_dynamic(dynamic);
 Or the
 #pragma omp parallel for private(j, i,ax,ay, dif_x, dif_y)


It may be this. You initialize a thread pool, even if it has only one
thread, and there is the dynamic part, so OpenMP may create several chunks
instead of one big chunk.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] OT: performance in C extension; OpenMP, or SSE ?

2011-02-17 Thread Matthieu Brucher
It may also be the size of the chunks OpenMP uses. You can/should specify
it in the OMP pragma (the schedule clause) so that each chunk is a multiple
of the cache line size or something close.

Matthieu

2011/2/17 Sebastian Haase seb.ha...@gmail.com

 Hi,
 More surprises:
 shaase@iris:~/code/SwiggedDistOMP: gcc -O3 -c the_lib.c -fPIC -fopenmp
 -ffast-math
 shaase@iris:~/code/SwiggedDistOMP: gcc -shared -o the_lib.so the_lib.o
 -lgomp -lm
 shaase@iris:~/code/SwiggedDistOMP: priithon the_python_prog.py
 c_threads 0  time  0.000437839031219# this is now, without
 #pragma omp parallel for ...
 c_threads 1  time  0.000865449905396
 c_threads 2  time  0.000520548820496
 c_threads 3  time  0.00033704996109
 c_threads 4  time  0.000620169639587
 c_threads 5  time  0.000465350151062
 c_threads 6  time  0.000696349143982

 This now corrects the timing: max OpenMP speed (3 threads) vs. no
 OpenMP gives a speedup of (only!) 1.3x,
 not 2.33x (which was the number I got when comparing OpenMP to the
 cdist function).
 The c code is now:

 the_lib.c

 --
 #include stdio.h
 #include time.h
 #include omp.h
 #include math.h

 void dists2d(  double *a_ps, int na,
  double *b_ps, int nb,
  double *dist, int num_threads)
 {

   int i, j;
double ax,ay, dif_x, dif_y;
   int nx1=2;
   int nx2=2;

if(num_threads0)
  {
   int dynamic=0;
   omp_set_dynamic(dynamic);
omp_set_num_threads(num_threads);


 #pragma omp parallel for private(j, i,ax,ay, dif_x, dif_y)
for(i=0;ina;i++)
 {
   ax=a_ps[i*nx1];
ay=a_ps[i*nx1+1];
   for(j=0;jnb;j++)
 { dif_x = ax - b_ps[j*nx2];
dif_y = ay - b_ps[j*nx2+1];
dist[i*nb+j]  = sqrt(dif_x*dif_x+dif_y*dif_y);
 }
 }
  } else {
for(i=0;ina;i++)
 {
   ax=a_ps[i*nx1];
ay=a_ps[i*nx1+1];
   for(j=0;jnb;j++)
 { dif_x = ax - b_ps[j*nx2];
dif_y = ay - b_ps[j*nx2+1];
dist[i*nb+j]  = sqrt(dif_x*dif_x+dif_y*dif_y);
 }
 }
   }
 }
 --
 $ gcc -O3 -c the_lib.c -fPIC -fopenmp -ffast-math
 $ gcc -shared -o the_lib.so the_lib.o -lgomp -lm

 So, I guess I found a way of getting rid of the OpenMP overhead when
 run with 1 thread,
 and found that - if measured correctly, using same compiler settings
 and so on - the speedup is so small that there no point in doing
 OpenMP - again.
 (For my case, having (only) 4 cores)


 Cheers,
 Sebastian.



 On Thu, Feb 17, 2011 at 10:57 AM, Matthieu Brucher
 matthieu.bruc...@gmail.com wrote:
 
  Then, where does the overhead come from ? --
  The call toomp_set_dynamic(dynamic);
  Or the
  #pragma omp parallel for private(j, i,ax,ay, dif_x, dif_y)
 
  It may be this. You initialize a thread pool, even if it has only one
  thread, and there is the dynamic part, so OpenMP may create several
 chunks
  instead of one big chunk.
 
  Matthieu
  --
  Information System Engineer, Ph.D.
  Blog: http://matt.eifelle.com
  LinkedIn: http://www.linkedin.com/in/matthieubrucher
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] OT: performance in C extension; OpenMP, or SSE ?

2011-02-15 Thread Matthieu Brucher
Hi,

My first move would be to add a restrict keyword to dist (i.e. dist is the
only pointer to the specific memory location), and then declare dist_ inside
the first loop also with a restrict.
Then, I would run valgrind or a PAPI profile on your code to see what causes
the issue (false sharing, ...).
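
As a hedged aside (the array sizes are just the ones mentioned in the
thread), the same pairwise distances can also be computed with scipy's cdist
or with plain NumPy broadcasting, which gives a handy correctness and speed
baseline:

    import numpy as np
    from scipy.spatial.distance import cdist

    a = np.random.rand(329, 2)
    b = np.random.rand(340, 2)

    d1 = cdist(a, b)                      # optimized C implementation
    diff = a[:, None, :] - b[None, :, :]  # shape (329, 340, 2) via broadcasting
    d2 = np.sqrt((diff ** 2).sum(axis=-1))
    print(np.allclose(d1, d2))            # True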

Matthieu

2011/2/15 Sebastian Haase seb.ha...@gmail.com

 Hi,
 I assume that someone here could maybe help me, and I'm hoping it's
 not too much off topic.
 I have 2 arrays of 2d point coordinates and would like to calculate
 all pairwise distances as fast as possible.
 Going from Python/Numpy to a (Swigged) C extension already gave me a
 55x speedup.
 (.9ms vs. 50ms for arrays of length 329 and 340).
 I'm using gcc on Linux.
 Now I'm wondering if I could go even faster !?
 My hope that the compiler might automagically do some SSE2
 optimization got disappointed.
 Since I have a 4 core CPU I thought OpenMP might be an option;
 I never used that, and after some playing around I managed to get
 (only) 50% slowdown(!) :-(

 My code in short is this:
 (My SWIG typemaps use obj_to_array_no_conversion() from numpy.i)
 ---Ccode --
 void dists2d(
   double *a_ps, int nx1, int na,
   double *b_ps, int nx2, int nb,
   double *dist, int nx3, int ny3)  throw (char*)
 {
  if(nx1 != 2)  throw (char*) "a must be of shape (n,2)";
  if(nx2 != 2)  throw (char*) "b must be of shape (n,2)";
  if(nx3 != nb || ny3 != na) throw (char*) "c must be of shape (na,nb)";

  double *dist_;
  int i, j;

 #pragma omp parallel private(dist_, j, i)
  {
 #pragma omp for nowait
for(i=0;ina;i++)
  {
//num_threads=omp_get_num_threads();  -- 4
dist_ = dist+i*nb; // dists_  is  only
 introduced for OpenMP
for(j=0;jnb;j++)
  {
*dist_++  = sqrt( sq(a_ps[i*nx1]   - b_ps[j*nx2]) +
  sq(a_ps[i*nx1+1] -
 b_ps[j*nx2+1]) );
  }
  }
  }
 }
 ---/Ccode --
 There is probably a simple mistake in this code - as I said I never
 used OpenMP before.
 It should not be too difficult to use OpenMP correctly here
  or -  maybe better -
 is there a simple SSE(2,3,4) version that might be even better than
 OpenMP... !?

 I supposed, that I did not get the #pragma omp lines right - any idea ?
 Or is it in general not possible to speed this kind of code up using OpenMP
 !?

 Thanks,
 Sebastian Haase
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] OT: performance in C extension; OpenMP, or SSE ?

2011-02-15 Thread Matthieu Brucher
Use restrict directly in C99 mode (__restrict does not have exactly the same
semantics).

For a valgrind profil, you can check my blog (
http://matt.eifelle.com/2009/04/07/profiling-with-valgrind/)
Basically, if you have a python script, you can valgrind --optionsinmyblog
python myscript.py

For PAPI, you have to install several packages (the perf module for the
kernel, for instance) and a GUI to analyze the results (in Eclipse, it
should be possible).

Matthieu

2011/2/15 Sebastian Haase seb.ha...@gmail.com

 Thanks Matthieu,
 using __restrict__ with g++ did not change anything. How do I use
 valgrind with C extensions?
 I don't know what a PAPI profile is ...?
 -Sebastian


 On Tue, Feb 15, 2011 at 4:54 PM, Matthieu Brucher
 matthieu.bruc...@gmail.com wrote:
  Hi,
  My first move would be to add a restrict keyword to dist (i.e. dist is
 the
  only pointer to the specific memory location), and then declare dist_
 inside
  the first loop also with a restrict.
  Then, I would run valgrind or a PAPI profile on your code to see what
 causes
  the issue (false sharing, ...)
  Matthieu
 
  2011/2/15 Sebastian Haase seb.ha...@gmail.com
 
  Hi,
  I assume that someone here could maybe help me, and I'm hoping it's
  not too much off topic.
  I have 2 arrays of 2d point coordinates and would like to calculate
  all pairwise distances as fast as possible.
  Going from Python/Numpy to a (Swigged) C extension already gave me a
  55x speedup.
  (.9ms vs. 50ms for arrays of length 329 and 340).
  I'm using gcc on Linux.
  Now I'm wondering if I could go even faster !?
  My hope that the compiler might automagically do some SSE2
  optimization got disappointed.
  Since I have a 4 core CPU I thought OpenMP might be an option;
  I never used that, and after some playing around I managed to get
  (only) 50% slowdown(!) :-(
 
  My code in short is this:
  (My SWIG typemaps use obj_to_array_no_conversion() from numpy.i)
  ---Ccode --
  void dists2d(
double *a_ps, int nx1, int na,
double *b_ps, int nx2, int nb,
double *dist, int nx3, int ny3)  throw (char*)
  {
   if(nx1 != 2)  throw (char*) "a must be of shape (n,2)";
   if(nx2 != 2)  throw (char*) "b must be of shape (n,2)";
   if(nx3 != nb || ny3 != na) throw (char*) "c must be of shape
  (na,nb)";
 
   double *dist_;
   int i, j;
 
  #pragma omp parallel private(dist_, j, i)
   {
  #pragma omp for nowait
 for(i=0;ina;i++)
   {
 //num_threads=omp_get_num_threads();  -- 4
 dist_ = dist+i*nb; // dists_  is  only
  introduced for OpenMP
 for(j=0;jnb;j++)
   {
 *dist_++  = sqrt( sq(a_ps[i*nx1]   - b_ps[j*nx2])
 +
 
  sq(a_ps[i*nx1+1]
  - b_ps[j*nx2+1]) );
   }
   }
   }
  }
  ---/Ccode --
  There is probably a simple mistake in this code - as I said I never
  used OpenMP before.
  It should not be too difficult to use OpenMP correctly here
   or -  maybe better -
  is there a simple SSE(2,3,4) version that might be even better than
  OpenMP... !?
 
  I supposed, that I did not get the #pragma omp lines right - any idea ?
  Or is it in general not possible to speed this kind of code up using
  OpenMP !?
 
  Thanks,
  Sebastian Haase
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
 
  --
  Information System Engineer, Ph.D.
  Blog: http://matt.eifelle.com
  LinkedIn: http://www.linkedin.com/in/matthieubrucher
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] mkl 10.3 linking - undefined symbol: i_free?

2011-02-13 Thread Matthieu Brucher
Hi,

This pops up regularly here, you can search with Google and find this page:
http://matt.eifelle.com/2008/11/03/i-used-the-latest-mkl-with-numpy-and/

Matthieu

2011/2/13 Andrzej Giniewicz ggi...@gmail.com

 Hello,

 I'd like to ask if anyone got around the undefined symbol i_free
 issue? What I did was that I used link advisor from
 file:///home/giniu/Downloads/MKL_Linking_Adviser-1.03.htm and it told
 me to use

 -L$MKLPATH $MKLPATH/libmkl_solver_lp64.a -Wl,--start-group
 -lmkl_gf_lp64 -lmkl_gnu_thread -lmkl_core -Wl,--end-group -fopenmp
 -lpthread

 So I created site.cfg like:

 library_dirs = /opt/intel/composerxe-2011.2.137/mkl/lib/intel64
 include_dirs = /opt/intel/composerxe-2011.2.137/mkl/include
 mkl_libs = mkl_gf_lp64, mkl_gnu_thread, mkl_core
 lapack_libs = mkl_lapack95_lp64

 and added -fopenmp to LDFLAGS. Numpy built, then I was able to
 install. I started python and imported numpy, and till this point it
 worked. When I tried to run numpy.test() and got:

 Running unit tests for numpy
 NumPy version 1.5.1
 NumPy is installed in /usr/lib/python2.7/site-packages/numpy
 Python version 2.7.1 (r271:86832, Feb  7 2011, 19:39:54) [GCC 4.5.2
 20110127 (prerelease)]
 nose version 1.0.0
 .

 *** libmkl_def.so *** failed with error :
 /opt/intel/composerxe-2011.2.137/mkl/lib/intel64/libmkl_def.so:
 undefined symbol: i_free
 MKL FATAL ERROR: Cannot load libmkl_def.so

 Could anyone who got this working give me a hint about how to get
 around it? I would be grateful.

 Cheers,
 Andrzej.
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is numpy/scipy linux apt or PYPI installation linked with ACML?

2011-01-23 Thread Matthieu Brucher
I think the main issue is that ACML didn't have an official CBLAS interface,
so you have to check if they provide one now. If they do, it should be
fairly straightforward to link against it.

Matthieu

2011/1/23 David Cournapeau courn...@gmail.com

 2011/1/23 Dmitrey tm...@ukr.net:
  Hi all,
  I have AMD processor and I would like to get to know what's the easiest
 way
  to install numpy/scipy linked with ACML.
  Is it possible to link linux apt or PYPI installation linked with ACML?
  Answer for the same question about MKL also would be useful, however,
 AFAIK
  it has commercial license and thus can't be handled in the ways.

 For the MKL, the easiest solution is to get EPD, or to build
 numpy/scipy by yourself, although the later is not that easy. For
 ACML, I don't know how difficult it is, but I would be surprised if it
 worked out of the box.

 cheers,

 David
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Why arange has no stop-point opt-in?

2010-12-30 Thread Matthieu Brucher
2010/12/30 K.-Michael Aye kmichael@gmail.com:
 On 2010-12-30 16:43:12 +0200, josef.p...@gmail.com said:


 Since linspace exists, I don't see much point in adding the stop point
 in arange. I use arange mainly for integers as numpy equivalent of
 python's range. And I often need arange(n+1) which is less writing
 than arange(n, include_end_point=True)

 I agree with the point of writing gets more in some cases.
 But arange(a, n+1, 0.1) would of course fail in this case.
 And the big difference is, that I need to calculate first how many
 steps it is for linspace to achieve what I believe is a frequent user
 case.
 As we already have the 'convenience' of both linspace and arange, which
 in principle could be done by one function alone if we'd precalculate
 all required information ourselves, why not go the full way, and take
 all overhead away from the user?

I think arange() should really be seen as just the numpy version of range().
The issue with including the stop point is that it may already show up
when you do arange(0, 1, 0.1): it is just a matter of float
precision. In this case, I think the safest course of action is to let
the user decide how to handle this.
If the step can be expressed as a rational fraction, then using arange
with floats and a step of one may be the simplest way to achieve
what you want,
i.e.: np.arange(90., 150.+1) / 10
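
A sketch of that trick next to the linspace equivalent (for linspace the
point count has to be worked out by hand, which is exactly the inconvenience
being discussed):

    import numpy as np

    a = np.arange(90., 150. + 1) / 10   # 9.0, 9.1, ..., 15.0 (61 exact steps)
    b = np.linspace(9.0, 15.0, 61)      # same grid, end point included
    print(np.allclose(a, b))            # True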

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Matthieu Brucher
2010/11/23 Zachary Pincus zachary.pin...@yale.edu:

 On Nov 23, 2010, at 10:57 AM, Gael Varoquaux wrote:

 On Tue, Nov 23, 2010 at 04:33:00PM +0100, Sebastian Walter wrote:
 At first glance it looks as if a relaxation is simply not possible:
 either there are additional rows or not.
 But with some technical transformations it is possible to reformulate
 the problem into a form that allows the relaxation of the integer
 constraint in a natural way.

 Maybe this is also possible in your case?

 Well, given that it is a cross-validation score that I am optimizing,
 there is not simple algorithm giving this score, so it's not obvious
 at
 all that there is a possible relaxation. A road to follow would be to
 find an oracle giving empirical risk after estimation of the penalized
 problem, and try to relax this oracle. That's two steps further than
 I am
 (I apologize if the above paragraph is incomprehensible, I am
 getting too
 much in the technivalities of my problem.

 Otherwise, well, let me know if you find a working solution ;)

 Nelder-Mead seems to be working fine, so far. It will take a few weeks
 (or more) to have a real insight on what works and what doesn't.

 Jumping in a little late, but it seems that simulated annealing might
 be a decent method here: take random steps (drawing from a
 distribution of integer step sizes), reject steps that fall outside
 the fitting range, and accept steps according to the standard
 annealing formula.

There is also a simulated-annealing modification of Nelder Mead that
can be of use.
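
For what it's worth, the acceptance rule described above is tiny to write
down; a minimal sketch over integer parameters (the objective f, temperature
T, and bounds lo/hi are all hypothetical arguments, not from this thread):

    import numpy as np

    def anneal_step(x, cost, f, T, lo, hi):
        step = np.random.randint(-2, 3, size=x.shape)  # random integer move
        y = x + step
        if np.any(y < lo) or np.any(y > hi):
            return x, cost            # reject steps outside the fitting range
        c = f(y)
        # standard annealing rule: always accept improvements, accept a
        # worse point with probability exp(-(c - cost) / T)
        if c < cost or np.random.rand() < np.exp(-(c - cost) / T):
            return y, c
        return x, cost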

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-23 Thread Matthieu Brucher
2010/11/24 Gael Varoquaux gael.varoqu...@normalesup.org:
 On Tue, Nov 23, 2010 at 07:14:56PM +0100, Matthieu Brucher wrote:
  Jumping in a little late, but it seems that simulated annealing might
  be a decent method here: take random steps (drawing from a
  distribution of integer step sizes), reject steps that fall outside
  the fitting range, and accept steps according to the standard
  annealing formula.

 There is also a simulated-annealing modification of Nelder Mead that
 can be of use.

 Sounds interesting. Any reference?

Not right away, I have to check. The main difference is the possible
acceptance of a contraction that doesn't lower the cost, and this is
done with a temperature like simulated annealing.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-22 Thread Matthieu Brucher
2010/11/22 Gael Varoquaux gael.varoqu...@normalesup.org:
 Hi list,

Hi ;)

 does anybody have, or knows where I can find some N dimensional dichotomy 
 optimization code in Python (BSD licensed, or equivalent)?

I don't know of any code, but it shouldn't be too difficult, e.g. by going
through a KdTree.

 Worst case, it does not look too bad to code, but I am interested by any 
 advice. I haven't done my reading yet, and I don't know how ill-posed a 
 problem it is. I had in mind starting from a set of points and iterating the 
 computation of the objective function's value at the barycenters of these 
 points, and updating this list of points. This does raise a few questions on 
 what are the best possible updates.

In this case, you may want to check the Nelder-Mead algorithm (also known
as down-hill simplex or polytope), which is available in
scikits.optimization, but there are other implementations out there.
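
For instance, with the SciPy implementation (a throwaway quadratic as the
objective, just to show the call):

    from scipy.optimize import fmin

    xopt = fmin(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2, x0=[0.0, 0.0])
    print(xopt)  # close to [1, -2]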

Cheers ;)

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-22 Thread Matthieu Brucher
2010/11/22 Gael Varoquaux gael.varoqu...@normalesup.org:
 On Mon, Nov 22, 2010 at 09:12:45PM +0100, Matthieu Brucher wrote:
 Hi ;)

 Hi bro

  does anybody have, or knows where I can find some N dimensional
  dichotomy optimization code in Python (BSD licensed, or equivalent)?

 I don't know of any code, but it shouldn't be too difficult, e.g. by going
 through a KdTree.

 I am not in terribly high-dimensional spaces, so I don't really need to
 use a KdTree (but we do happen to have a nice BallTree available in the
 scikit-learn, so I could use it just to  play :).

:D

 In this case, you may want to check the Nelder-Mead algorithm (also known
 as down-hill simplex or polytope), which is available in
 scikits.optimization, but there are other implementations out there.

 Interesting reference. I had never looked at the Nelder-Mead algorithm.
 I am wondering if it does what I want, thought.

 The reason I am looking at dichotomy optimization is that the objective
 function that I want to optimize has local roughness, but is globally
 pretty much a bell-shaped curve. Dichotomy looks like it will get quite
 close to the top of the curve (and I have been told that it works well on
 such problems). One thing that is nice with dichotomy for my needs is
 that it is not based on a gradient, and it works in a convex of the
 parameter space.

It seems that a simplex is what you need. It uses the barycenter (more
or less) to find a new point in the simplex. And it works well only on
convex functions (but in fact almost all functions have an issue with
this :D)

 Will the Nelder-Mead display such properties? It seems so to me, but I
 don't trust my quick read through of Wikipedia.

Yes: it does not need a gradient, and if the function is convex, it works
within a convex region of the parameter space.

 I realize that maybe I should rephrase my question to try and draw more
 out of the common wealth of knowledge on this mailing list: what do
 people suggest to tackle this problem? Guided by Matthieu's suggestion, I
 have started looking at Powell's algorithm, and given the introduction of
 www.damtp.cam.ac.uk/user/na/NA_papers/NA2007_03.pdf I am wondering
 whether I should not investigate it. Can people provide any insights on
 these problems.

Indeed, Powell may also be a solution. A simplex is just the closest thing
to what you hinted at as an optimization algorithm.

 Many thanks,

You're welcome ;)

 Gael

 PS: The reason I am looking at this optimization problem is that I got
 tired of looking at grid searches optimize the cross-validation score on
 my 3-parameter estimator (not in the scikit-learn, because it is way too
 specific to my problems).

Perhaps you may want to combine it with genetic algorithms. We also
kind of combine grid search with a simplex-based optimizer and
simulated annealing in some of our global optimization problems, and I
think I'll try at some point to introduce genetic algorithms instead of
the grid search. Your problem is simpler, though, if it displays some
convexity.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-22 Thread Matthieu Brucher
2010/11/22 Gael Varoquaux gael.varoqu...@normalesup.org:
 On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
 It seems that a simplex is what you need.

 Ha! I am learning new fancy words. Now I can start looking clever.

  I realize that maybe I should rephrase my question to try and draw more
  out of the common wealth of knowledge on this mailing list: what do
  people suggest to tackle this problem? Guided by Matthieu's suggestion, I
  have started looking at Powell's algorithm, and given the introduction of
  www.damtp.cam.ac.uk/user/na/NA_papers/NA2007_03.pdf I am wondering
  whether I should not investigate it. Can people provide any insights on
  these problems.

 Indeed, Powell may also be a solution. A simplex is just the closest thing
 to what you hinted at as an optimization algorithm.

 I'll do a bit more reading.

  PS: The reason I am looking at this optimization problem is that I got
  tired of looking at grid searches optimize the cross-validation score on
  my 3-parameter estimator (not in the scikit-learn, because it is way too
  specific to my problems).

 Perhaps you may want to combine it with genetic algorithms. We also
 kind of combine grid search with simplex-based optimizer with
 simulated annealing in some of our global optimization problems, and I
 think I'll try at one point to introduce genetic algorithms instead of
 the grid search.

 Well, in the scikit, in the long run (it will take a little while) I'd
 like to expose other optimization methods then the GridSearchCV, so if
 you have code or advice to give us, we'd certainly be interested.

 Gael

There is scikits.optimization partly in the externals :D But I don't
think they should be in scikits.learn directly. Of course, the scikit
may need access to some global optimization methods, but the most used
one is already there (the grid search).
Then for genetic algorithms, pyevolve is pretty much all you want (I
still have to check the multiprocessing part)

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] N dimensional dichotomy optimization

2010-11-22 Thread Matthieu Brucher
2010/11/22 Gael Varoquaux gael.varoqu...@normalesup.org:
 On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
 It seems that a simplex is what you need. It uses the barycenter (more
 or less) to find a new point in the simplex. And it works well only in
 convex functions (but in fact almost all functions have an issue with
 this :D)

 One last question, now that I know that what I am looking for is a
 simplex algorithm (it indeed corresponds to what I was after), is there a
 reason not to use optimize.fmin? It implements a Nelder-Mead. I must
 admit that I don't see how I can use it to specify the convex hull of the
 parameters in which it operates, or restrict it to work only on integers,
 which are two things that I may want to do.

optimize.fmin may be enough; I don't know it well enough to say. Nelder-Mead
is not a constrained optimization algorithm, so you can't specify an
outer hull. As for the integer part, I don't know if optimize.fmin is
type consistent, and I don't know if scikits.optimization is either, but I
can check it. It should be, as there is nothing impeding it.
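
One common workaround (a hedged sketch, not advice from this thread) is to
round and box-penalize inside the objective, so that a plain Nelder-Mead
effectively searches integers within a box; my_cost, lo and hi are
hypothetical names:

    import numpy as np
    from scipy.optimize import fmin

    def make_integer_objective(f, lo, hi):
        def wrapped(p):
            q = np.round(p)                  # snap parameters to integers
            if np.any(q < lo) or np.any(q > hi):
                return 1e12                  # huge penalty outside the box
            return f(q)
        return wrapped

    # usage: xopt = fmin(make_integer_objective(my_cost, lo, hi), x0)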

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Precision difference between dot and sum

2010-11-02 Thread Matthieu Brucher
 It would be great if someone could let me know why this happens.

 They don't use the same implementation, so such tiny differences are
 expected - having exactly the same solution would have been surprising,
 actually. You may be surprised about the difference for such a trivial
 operation, but keep in mind that dot is implemented with highly
 optimized CPU instructions (that is if you use ATLAS or similar library).

Also, keep in mind that floating-point addition is not associative and most
decimal fractions are not exactly representable, so the dot way may simply
round differently from the sum way.
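
A tiny demonstration (a sketch with random single-precision data) that the
two reductions need not agree to the last bit:

    import numpy as np

    x = np.random.rand(1000000).astype(np.float32)
    s_sum = x.sum()
    s_dot = np.dot(x, np.ones_like(x))
    print(s_sum - s_dot)  # usually a small nonzero difference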

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] dot() performance depends on data?

2010-09-11 Thread Matthieu Brucher
Denormal numbers are a tricky beast. You may have to change the clip
threshold or the shift depending on the processor you have.
It's no wonder that processors, and thus compilers, have options to
flush denormals to zero.
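
A rough way to see the effect from Python (a sketch; the size of the
slowdown is very processor-dependent, and disappears if FTZ/DAZ is enabled):

    import numpy as np
    import timeit

    denorm = np.full(100000, 1e-310)   # well inside the float64 denormal range
    normal = np.full(100000, 1.0)

    t_d = timeit.timeit(lambda: denorm * 0.5, number=200)  # denormal results
    t_n = timeit.timeit(lambda: normal * 0.5, number=200)  # normal results
    print(t_d / t_n)  # often much larger than 1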

Matthieu

2010/9/11 Hagen Fürstenau ha...@zhuliguan.net:
 Anyway, seems it is indeed a denormal issue, as adding a small (1e-10)
 constant gives same speed for both timings.

 With adding 1e-10 or clipping to 0 at 1e-150, I still get a slowdown of
 about 30% compared with the random arrays. Any explanation for that?

 Cheers,
 Hagen


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion





-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Equivalent Matlab function

2010-09-02 Thread Matthieu Brucher
Hi,

I'm looking for a Numpy equivalent of convmtx
(http://www.mathworks.in/access/helpdesk/help/toolbox/signal/convmtx.html).
Is there something inside Numpy directly? or perhaps Scipy?

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Equivalent Matlab function

2010-09-02 Thread Matthieu Brucher
Thanks Josef, I'll wrap this inside my code ;)
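
For the record, the wrapper could look like this (a hedged sketch
generalizing the toeplitz construction quoted below; the name convmtx_valid
is mine, not a NumPy/SciPy API):

    import numpy as np
    from scipy import linalg

    def convmtx_valid(h, nc):
        # rows are shifted copies of h acting on length-nc signals
        h = np.asarray(h, dtype=float)
        nr = nc - len(h) + 1
        c = np.r_[[h[0]], np.zeros(nr - 1)]   # first column
        r = np.r_[h, np.zeros(nc - len(h))]   # first row
        return linalg.toeplitz(c, r)

    print(convmtx_valid([1, 3, 1], 10).shape)  # (8, 10)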

Matthieu

2010/9/2  josef.p...@gmail.com:
 On Thu, Sep 2, 2010 at 3:56 AM, Matthieu Brucher
 matthieu.bruc...@gmail.com wrote:
 Hi,

 I'm looking for a Numpy equivalent of convmtx
 (http://www.mathworks.in/access/helpdesk/help/toolbox/signal/convmtx.html).
 Is there something inside Numpy directly? or perhaps Scipy?

 I haven't seen it in numpy or scipy, but I have seen something similar
 in a PDE or FEM package.

 Using the toeplitz hint from the matlab help, the following seems to
 work, at least in the examples

 from scipy import linalg

 h = np.array([1,3,1])
 nc = 10; nr = nc-len(h)+1
 linalg.toeplitz(np.r_[[1],np.zeros(nr-1)], np.r_[h,np.zeros(nc-len(h))])
 array([[ 1.,  3.,  1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  1.,  3.,  1.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  3.,  1.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  1.,  3.,  1.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  1.,  3.,  1.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  1.,  3.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  1.,  3.,  1.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  1.,  3.,  1.]])

 h = np.array([1,2,3,2,1])
 nc = 10; nr = nc-len(h)+1
 linalg.toeplitz(np.r_[[1],np.zeros(nr-1)], np.r_[h,np.zeros(nc-len(h))])
 array([[ 1.,  2.,  3.,  2.,  1.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  1.,  2.,  3.,  2.,  1.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  2.,  3.,  2.,  1.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  1.,  2.,  3.,  2.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  1.,  2.,  3.,  2.,  1.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  1.,  2.,  3.,  2.,  1.]])

 maybe worth adding to the other special matrix improvements that Warren did.

 Josef


 Matthieu
 --
 Information System Engineer, Ph.D.
 Blog: http://matt.eifelle.com
 LinkedIn: http://www.linkedin.com/in/matthieubrucher
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] longdouble (float96) literals

2010-08-18 Thread Matthieu Brucher
I don't think there is a true longdouble on Windows, is there? (MSVC maps
long double to plain double.)
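
A way to check what a given build offers (a sketch; on most x86 Linux builds
longdouble is the 80-bit extended type, while MSVC builds map it to plain
double):

    import numpy as np

    print(np.finfo(np.longdouble).eps)   # smaller than float64 eps if truly wider
    print(np.finfo(np.float64).eps)
    # np.pi is a Python float (i.e. a double), so widening it recovers nothing:
    print(np.longdouble(np.pi) - np.pi)  # exactly 0.0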

Matthieu

2010/8/18  josef.p...@gmail.com:
 On Wed, Aug 18, 2010 at 10:36 AM, Charles R Harris
 charlesr.har...@gmail.com wrote:


 On Wed, Aug 18, 2010 at 8:25 AM, Colin Macdonald macdon...@maths.ox.ac.uk
 wrote:

 On 08/18/10 15:14, Charles R Harris wrote:
  However, the various constants supplied by numpy, pi and such, are
  full precision.

 no, they are not.  My example demonstrated that numpy.pi is only
 double precision.


 Hmm, the full precision values are available internally but it looks like
 they aren't available to the public. I wonder what the easiest way to
 provide them would be? Maybe they should be long double types by default?

 playing with some examples, I don't seem to be able to do anything
 with longdouble on win32, py2.5

 np.array([3141592653589793238L], np.int64).astype(np.longdouble)[0]
 3141592653589793300.0
 np.array([3141592653589793238L], np.int64).astype(float)[0]
 3.1415926535897933e+018
 1./np.array([np.pi],np.longdouble)[0] - 1/np.pi
 0.0
 1./np.array([np.pi],np.longdouble)[0]
 0.31830988618379069

 and it doesn't look like it's the print precision
 1./np.array([np.pi],np.longdouble)[0]*1e18
 318309886183790720.0
 1./np.array([np.pi],float)[0]*1e18
 3.1830988618379072e+017


 type conversion and calculations seem to go through float

 Josef


 Chuck


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Installing numpy with MKL

2010-08-05 Thread Matthieu Brucher
 I've been having a similar problem compiling NumPy with MKL on a cluster with 
 a site-wide license. Dag's site.cfg fails to config if I use 'iomp5' in it, 
 since (at least with this version, 11.1) libiomp5 is located in

        /scinet/gpc/intel/Compiler/11.1/072/lib/intel64/

 whereas the actual proper MKL

        /scinet/gpc/intel/Compiler/11.1/072/mkl/lib/em64t/

 I've tried putting both in my library_dirs separated by a colon as is 
 suggested by the docs, but python setup.py config fails to find MKL in this 
 case. Has anyone else run into this issue?

Indeed, this is an issue I also faced. I had to copy all the libs into
one folder. If one has only MKL, iomp5 is provided (as well as guide,
IIRC), but with the Compiler pack, they are not in the MKL lib folder.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Installing numpy with MKL

2010-08-04 Thread Matthieu Brucher
2010/8/4 Søren Gammelmark gammelm...@phys.au.dk:


 I wouldn't know for sure, but could this be related to changes to the
 gcc compiler in Fedora 13 (with respect to implicit DSO linking) or
 would that only be an issue at build-time?

 http://fedoraproject.org/w/index.php?title=UnderstandingDSOLinkChange

 I'm not entirely sure I understand the link, but if it has anything to
 do with the compiler it seems to me that it should be the Intel
 compiler. The python I use is compiled with GCC but everything in numpy
 is done with the Intel compilers. Shouldn't it then be something with
 the Intel compilers?

 /Søren

Unfortunately, I think you'll have to use Dag's patch. MKL has had a
specific loading procedure for a few releases now, and you have to abide by
it.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Problem with importing numpy in Ubuntu

2010-07-27 Thread Matthieu Brucher
Which version of Python are you actually using in this example?

Matthieu

2010/7/27 Robert Faryabi robert.fary...@gmail.com:
 I am new to numpy. Hopefully this is a correct forum to post my question.

 I have an Ubuntu Lucid system. I installed Python 2.6.5 and Python 3.0 as well
 as python-numpy using the Ubuntu repository.
 When I import the numpy into python, I get the following error.

 import numpy
 Traceback (most recent call last):
   File stdin, line 1, in module
 ImportError: No module named numpy

 The package cannot be located.

 Then I tried to point the interpreter to the numpy

 sys.path.append('/usr/lib/
 python2.6/dist-packages')

 and import it

 import numpy

 I get the following error

 import numpy
 Traceback (most recent call last):
   File stdin, line 1, in module
   File /usr/lib/python2.6/dist-packages/numpy/__init__.py, line 130, in
 module
     import add_newdocs
   File /usr/lib/python2.6/dist-packages/numpy/add_newdocs.py, line 9, in
 module
     from lib import add_newdoc
   File /usr/lib/python2.6/dist-packages/numpy/lib/__init__.py, line 4, in
 module
     from type_check import *
   File /usr/lib/python2.6/dist-packages/numpy/lib/type_check.py, line 8,
 in module
     import numpy.core.numeric as _nx
   File /usr/lib/python2.6/dist-packages/numpy/core/__init__.py, line 5, in
 module
     import multiarray
 ImportError: /usr/lib/python2.6/dist-packages/numpy/core/multiarray.so:
 undefined symbol: _PyUnicodeUCS4_IsWhitespace



 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion





-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Problem with importing numpy in Ubuntu

2010-07-27 Thread Matthieu Brucher
Python 2.6.5 from Ubuntu?
I tried the same yesterday evening, and it worked like a charm.

Matthieu

2010/7/27 Robert Faryabi robert.fary...@gmail.com:
 I am using 2.6.5

 Python 2.6.5 (r265:79063, Jun 28 2010, 20:31:28)
 [GCC 4.4.3] on linux2



 On Tue, Jul 27, 2010 at 9:51 AM, Matthieu Brucher
 matthieu.bruc...@gmail.com wrote:

 Which version of Python are you actually using in this example?

 Matthieu

 2010/7/27 Robert Faryabi robert.fary...@gmail.com:
  I am new to numpy. Hopefully this is a correct forum to post my
  question.
 
  I have an Ubuntu Lucid system. I installed Python 2.6.5 and Python 3.0 as
  well
  as python-numpy using the Ubuntu repository.
  When I import the numpy into python, I get the following error.
 
  import numpy
  Traceback (most recent call last):
    File stdin, line 1, in module
  ImportError: No module named numpy
 
  The package cannot be located.
 
  Then I tried to point the interpreter to the numpy
 
  sys.path.append('/usr/lib/
  python2.6/dist-packages')
 
  and import it
 
  import numpy
 
  I get the following error
 
  import numpy
  Traceback (most recent call last):
    File stdin, line 1, in module
    File /usr/lib/python2.6/dist-packages/numpy/__init__.py, line 130,
  in
  module
      import add_newdocs
    File /usr/lib/python2.6/dist-packages/numpy/add_newdocs.py, line 9,
  in
  module
      from lib import add_newdoc
    File /usr/lib/python2.6/dist-packages/numpy/lib/__init__.py, line 4,
  in
  module
      from type_check import *
    File /usr/lib/python2.6/dist-packages/numpy/lib/type_check.py, line
  8,
  in module
      import numpy.core.numeric as _nx
    File /usr/lib/python2.6/dist-packages/numpy/core/__init__.py, line
  5, in
  module
      import multiarray
  ImportError: /usr/lib/python2.6/dist-packages/numpy/core/multiarray.so:
  undefined symbol: _PyUnicodeUCS4_IsWhitespace
 
 
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 



 --
 Information System Engineer, Ph.D.
 Blog: http://matt.eifelle.com
 LinkedIn: http://www.linkedin.com/in/matthieubrucher
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion





-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Problem with importing numpy in Ubuntu

2010-07-27 Thread Matthieu Brucher
It's a problem of Python and numpy having been compiled with different
parameters (UCS2 vs. UCS4 unicode). But I've tried the same yesterday, and
the Ubuntu repositories are OK in that respect, so there is something not
quite right with your configuration.
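
A quick check for the mismatch (interpreter side only; the extension must
have been built against an interpreter with the same setting):

    import sys

    # 1114111 (0x10FFFF) means a UCS4 build, 65535 (0xFFFF) means UCS2;
    # a multiarray.so built for one fails to import on the other
    print(sys.maxunicode)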

Matthieu

2010/7/27 Robert Faryabi robert.fary...@gmail.com:
 I can see the numpy now, but I have the problem with a shared library.
 Here is the error

 import numpy
 Traceback (most recent call last):
   File stdin, line 1, in module
   File /usr/lib/python2.6/dist-packages/numpy/__init__.py, line 130, in
 module
     import add_newdocs
   File /usr/lib/python2.6/dist-packages/numpy/add_newdocs.py, line 9, in
 module
     from lib import add_newdoc
   File /usr/lib/python2.6/dist-packages/numpy/lib/__init__.py, line 4, in
 module
     from type_check import *
   File /usr/lib/python2.6/dist-packages/numpy/lib/type_check.py, line 8,
 in module
     import numpy.core.numeric as _nx
   File /usr/lib/python2.6/dist-packages/numpy/core/__init__.py, line 5, in
 module
     import multiarray
 ImportError: /usr/lib/python2.6/dist-packages/numpy/core/multiarray.so:
 undefined symbol: _PyUnicodeUCS4_IsWhitespace


 Do you have any idea? It seems that the UCS4 and UCS2 are related to 16 and
 8 bit unicode.


 On Tue, Jul 27, 2010 at 10:16 AM, Charles R Harris
 charlesr.har...@gmail.com wrote:


 On Tue, Jul 27, 2010 at 7:46 AM, Robert Faryabi robert.fary...@gmail.com
 wrote:

 I am new to numpy. Hopefully this is a correct forum to post my question.

 I have an Ubuntu Lucid system. I installed Python 2.6.5 and Python 3.0 as
 well as python-numpy using the Ubuntu repository.
 When I import the numpy into python, I get the following error.

  import numpy
 Traceback (most recent call last):
   File stdin, line 1, in module
 ImportError: No module named numpy

 The package cannot be located.

 Then I tried to point the interpreter to the numpy

  sys.path.append('/usr/lib/
 python2.6/dist-packages')


 I use an install.pth file

 $char...@ubuntu ~$ cat ~/.local/lib/python2.6/site-packages/install.pth
 /usr/local/lib/python2.6/dist-packages

 You will need to create the .local directory and its subdirectories. Don't
 use Python 3.0, use 3.1 or greater if you want to experiment.

 snip

 Chuck


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion





-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Problem with importing numpy in Ubuntu

2010-07-27 Thread Matthieu Brucher
What does "which python" return?

2010/7/27 Robert Faryabi robert.fary...@gmail.com:
 I'm getting the same

 sys.maxunicode
 65535

 I might have some hand-compiled Python somewhere; I compiled Biopython
 once, long ago.

 The problem is I do not know how to clean up all the Python versions that I
 have. I tried the reinstall option; it does not work. I cannot remove
 Python, as that would wipe out my operating system.

 Any suggestion?




 On Tue, Jul 27, 2010 at 10:42 AM, Sebastian Haase seb.ha...@gmail.com
 wrote:

 The origin of this problem is the fact that Python supports (at least)
 2 types of Unicode:
 2 bytes and/or 4 bytes per character.

 Additionally, for some incomprehensible reason the Python source code
 (as downloaded from python.org) defaults to 2-byte Unicode, whereas
 all (major) Linux distributions default to 4-byte Unicode.

 ( check sys.maxunicode to see what you have; I get 1114111, i.e. greater
 than 65535, so I have 4-byte Unicode (on Debian) )

 So, most likely you have some hand-compiled Python somewhere ...

 - Sebastian Haase



 On Tue, Jul 27, 2010 at 4:33 PM, Matthieu Brucher
 matthieu.bruc...@gmail.com wrote:
  It's a problem of Python and numpy being compiled with different
  parameters. But I tried the same thing yesterday, and the Ubuntu
  repositories are OK in that respect, so there is something not quite
  right with your configuration.
 
  Matthieu
 
  2010/7/27 Robert Faryabi robert.fary...@gmail.com:
  I can see the numpy now, but I have the problem with a shared library.
  Here is the error
 
  >>> import numpy
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/usr/lib/python2.6/dist-packages/numpy/__init__.py", line 130, in <module>
      import add_newdocs
    File "/usr/lib/python2.6/dist-packages/numpy/add_newdocs.py", line 9, in <module>
      from lib import add_newdoc
    File "/usr/lib/python2.6/dist-packages/numpy/lib/__init__.py", line 4, in <module>
      from type_check import *
    File "/usr/lib/python2.6/dist-packages/numpy/lib/type_check.py", line 8, in <module>
      import numpy.core.numeric as _nx
    File "/usr/lib/python2.6/dist-packages/numpy/core/__init__.py", line 5, in <module>
      import multiarray
  ImportError: /usr/lib/python2.6/dist-packages/numpy/core/multiarray.so:
  undefined symbol: _PyUnicodeUCS4_IsWhitespace
 
 
  Do you have any idea? It seems that the UCS4 and UCS2 are related to 16
  and
  8 bit unicode.
 
 
  On Tue, Jul 27, 2010 at 10:16 AM, Charles R Harris
  charlesr.har...@gmail.com wrote:
 
 
  On Tue, Jul 27, 2010 at 7:46 AM, Robert Faryabi
  robert.fary...@gmail.com
  wrote:
 
  I am new to numpy. Hopefully this is a correct forum to post my
  question.
 
  I have an Ubuntu Lucid system. I installed Python 2.6.5 and Python 3.0,
  as well as python-numpy, using the Ubuntu repository.
  When I import numpy into Python, I get the following error:
 
  >>> import numpy
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ImportError: No module named numpy
 
  The package cannot be located.
 
  Then I tried to point the interpreter to numpy:

   >>> sys.path.append('/usr/lib/python2.6/dist-packages')
 
 
  I use an install.pth file
 
  $char...@ubuntu ~$ cat
  ~/.local/lib/python2.6/site-packages/install.pth
  /usr/local/lib/python2.6/dist-packages
 
  You will need to create the .local directory and its subdirectories.
  Don't
  use Python 3.0, use 3.1 or greater if you want to experiment.
 
  snip
 
  Chuck
 
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
 
 
 
  --
  Information System Engineer, Ph.D.
  Blog: http://matt.eifelle.com
  LinkedIn: http://www.linkedin.com/in/matthieubrucher
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion





-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy for Python 3?

2010-07-19 Thread Matthieu Brucher
 Dave, I got:
 c:\SVNRepository\numpy>C:\Python31>python setup.py bdist_wininst
 'C:\Python31' is not recognized as an internal or external command,
 operable program or batch file.

 Or didn't I do exactly what you suggested?

python setup.py bdist_wininst

 Assuming you have a C compiler on your system (and in your path)

 I'm afraid I have no idea, nor how to find out.

I'm afraid that if you don't know if you have a compiler, you don't
have one. This also means you will not be able to compile Numpy, as
the official compiler is no longer available.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Release candidate 3 for NumPy 1.4.1 and SciPy 0.7.2

2010-04-19 Thread Matthieu Brucher
Hi,

I'm trying to compile scipy with ICC (numpy got through correctly),
but I have issue with infinites in cephes:

icc: scipy/special/cephes/const.c
scipy/special/cephes/const.c(94): error: floating-point operation
result is out of range
  double INFINITY = 1.0/0.0;  /* 99e999; */
   ^

scipy/special/cephes/const.c(99): error: floating-point operation
result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;
  ^

scipy/special/cephes/const.c(99): error: floating-point operation
result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;
^

compilation aborted for scipy/special/cephes/const.c (code 2)
scipy/special/cephes/const.c(94): error: floating-point operation
result is out of range
  double INFINITY = 1.0/0.0;  /* 99e999; */
   ^

scipy/special/cephes/const.c(99): error: floating-point operation
result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;
  ^

scipy/special/cephes/const.c(99): error: floating-point operation
result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;
^

compilation aborted for scipy/special/cephes/const.c (code 2)

Matthieu

2010/4/19 Sebastian Haase seb.ha...@gmail.com:
 Hi,
 Congratulations. I might be unnecessarily dense - but what SciPy am I
 supposed to use with the new numpy 1.4.1 for Python 2.5? I'm surprised
 that there are no SciPy 0.7.2 binaries for Python 2.5 - is that
 technically not possible ?

 Thanks,
 Sebastian Haase

 On Mon, Apr 19, 2010 at 6:25 AM, Ralf Gommers
 ralf.gomm...@googlemail.com wrote:
 Hi,

 I am pleased to announce the third release candidate of both Scipy 0.7.2 and
 NumPy 1.4.1. Please test, and report any problems on the NumPy or SciPy
 list.

 Binaries, sources and release notes can be found at
 https://sourceforge.net/projects/numpy/files/
 https://sourceforge.net/projects/scipy/files/


 Changes from RC2
 ==
 SciPy: warnings about possible binary incompatibilities with numpy have been
 suppressed
 NumPy: - fixed compatibility with Python 2.7b1
    - marked test for complex log as a known failure


 NumPy 1.4.1
 ==
 The main change over 1.4.0 is that datetime support has been removed. This
 fixes the binary incompatibility issues between NumPy and other libraries
 like SciPy and Matplotlib.

 There are also a number of other bug fixes, and no new features.

 Binaries for Python 2.5 and 2.6 are available for both Windows and OS X.


 SciPy 0.7.2
 =
 The only change compared to 0.7.1 is that the C sources for Cython code have
 been regenerated with Cython 0.12.1. This ensures that SciPy 0.7.2 will work
 with NumPy 1.4.1, while also retaining backwards compatibility with NumPy
 1.3.0.

 Note that the 0.7.x branch was created in January 2009, so a lot of fixes
 and new functionality in current trunk is not present in this release.

 Binaries for Python 2.6 are available for both Windows and OS X. Due to the
 age of the code no binaries for Python 2.5 are available.


 On behalf of the NumPy and SciPy developers,
 Ralf

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Release candidate 3 for NumPy 1.4.1 and SciPy 0.7.2

2010-04-19 Thread Matthieu Brucher
BTW, there still is an error with ifort, so scipy is still
incompatible with the Intel compilers (which is at least very sad...)

Matthieu

2010/4/19 Matthieu Brucher matthieu.bruc...@gmail.com:
 Hi,

 I'm trying to compile scipy with ICC (numpy got through correctly),
 but I have issue with infinites in cephes:

 icc: scipy/special/cephes/const.c
 scipy/special/cephes/const.c(94): error: floating-point operation
 result is out of range
  double INFINITY = 1.0/0.0;  /* 99e999; */
                       ^

 scipy/special/cephes/const.c(99): error: floating-point operation
 result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;
                  ^

 scipy/special/cephes/const.c(99): error: floating-point operation
 result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;
                            ^

 compilation aborted for scipy/special/cephes/const.c (code 2)
 scipy/special/cephes/const.c(94): error: floating-point operation
 result is out of range
  double INFINITY = 1.0/0.0;  /* 99e999; */
                       ^

 scipy/special/cephes/const.c(99): error: floating-point operation
 result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;
                  ^

 scipy/special/cephes/const.c(99): error: floating-point operation
 result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;
                            ^

 compilation aborted for scipy/special/cephes/const.c (code 2)

 Matthieu

 2010/4/19 Sebastian Haase seb.ha...@gmail.com:
 Hi,
 Congratulations. I might be unnecessarily dense - but what SciPy am I
 supposed to use with the new numpy 1.4.1 for Python 2.5? I'm surprised
 that there are no SciPy 0.7.2 binaries for Python 2.5 - is that
 technically not possible ?

 Thanks,
 Sebastian Haase

 On Mon, Apr 19, 2010 at 6:25 AM, Ralf Gommers
 ralf.gomm...@googlemail.com wrote:
 Hi,

 I am pleased to announce the third release candidate of both Scipy 0.7.2 and
 NumPy 1.4.1. Please test, and report any problems on the NumPy or SciPy
 list.

 Binaries, sources and release notes can be found at
 https://sourceforge.net/projects/numpy/files/
 https://sourceforge.net/projects/scipy/files/


 Changes from RC2
 ==
 SciPy: warnings about possible binary incompatibilities with numpy have been
 suppressed
 NumPy: - fixed compatibility with Python 2.7b1
    - marked test for complex log as a known failure


 NumPy 1.4.1
 ==
 The main change over 1.4.0 is that datetime support has been removed. This
 fixes the binary incompatibility issues between NumPy and other libraries
 like SciPy and Matplotlib.

 There are also a number of other bug fixes, and no new features.

 Binaries for Python 2.5 and 2.6 are available for both Windows and OS X.


 SciPy 0.7.2
 =
 The only change compared to 0.7.1 is that the C sources for Cython code have
 been regenerated with Cython 0.12.1. This ensures that SciPy 0.7.2 will work
 with NumPy 1.4.1, while also retaining backwards compatibility with NumPy
 1.3.0.

 Note that the 0.7.x branch was created in January 2009, so a lot of fixes
 and new functionality in current trunk is not present in this release.

 Binaries for Python 2.6 are available for both Windows and OS X. Due to the
 age of the code no binaries for Python 2.5 are available.


 On behalf of the NumPy and SciPy developers,
 Ralf

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




 --
 Information System Engineer, Ph.D.
 Blog: http://matt.eifelle.com
 LinkedIn: http://www.linkedin.com/in/matthieubrucher




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] size of a specific dimension of a numpy array

2010-03-17 Thread Matthieu Brucher
Hi,

A.shape[1]
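
For instance, with the array from your example:

>>> import numpy
>>> A = numpy.zeros((10, 20, 30), float)
>>> A.shape[1]
20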

2010/3/17 gerardo.berbeglia gberbeg...@gmail.com:

 I would like to know a simple way to get the size of a given dimension of a
 numpy array.

 Example
 A = numpy.zeros((10,20,30),float)
 The size of the second dimension of the array A is 20.

 Thanks.




 --
 View this message in context: 
 http://old.nabble.com/size-of-a-specific-dimension-of-a-numpy-array-tp27933090p27933090.html
 Sent from the Numpy-discussion mailing list archive at Nabble.com.

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Calling routines from a Fortran library using python

2010-02-18 Thread Matthieu Brucher
 You may have to convert the .a library to a .so library.

And this is where I hope that the library was compiled with -fPIC (which
is generally not the case for static libraries). If it was not, you will
not be able to turn it into a shared library, and thus not be able to
use it from Python :|

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Calling routines from a Fortran library using python

2010-02-18 Thread Matthieu Brucher
 Ok I have extracted the *.o files from the static library.

 Applying the file command to the object files yields

 ELF 64-bit LSB relocatable, AMD x86-64, version 1 (SYSV),
 not stripped

 What's that supposed to mean ?

It means that each object file was compiled with -fPIC, so you just
have to make a shared library out of them (gfortran -shared *.o -o
libmysharedlibrary.so).

Then, you can try to open the library with ctypes. If something is
lacking, you may have to add -lsome_library to the gfortran line.
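
For instance, a minimal ctypes sketch (the library name comes from the
gfortran line above; the routine name and its signature are hypothetical,
just to show the calling convention - gfortran lower-cases symbols and
appends a trailing underscore, and Fortran passes every argument by
reference):

import ctypes

lib = ctypes.CDLL("./libmysharedlibrary.so")
# hypothetical routine: SUBROUTINE SCALE(N, X) with INTEGER N, REAL*8 X(N)
n = ctypes.c_int(3)
x = (ctypes.c_double * 3)(1.0, 2.0, 3.0)
lib.scale_(ctypes.byref(n), x)  # note the trailing underscore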

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Calling routines from a Fortran library using python

2010-02-18 Thread Matthieu Brucher
If header files are provided, the work done by f2py is almost done.
But you don't know the real Fortran interface, so you still have to
use ctypes over f2py.

Matthieu

2010/2/18 George Nurser gnur...@googlemail.com:
 Hi Nils,
 I've not tried it, but you might be able to interface with f2py your
 own fortran subroutine that calls the library.
 Then issue the f2py command with extra arguments -llibname
 -Ldirectory with lib.

 See section 5 of
 http://cens.ioc.ee/projects/f2py2e/usersguide/index.html#command-f2py

 --George.


 On 18 February 2010 09:18, Nils Wagner nwag...@iam.uni-stuttgart.de wrote:
 Hi all,

 I have a static  library (*.a) compiled by gfortran but no
 source files.
 How can I call routines from that library using python ?

 Any pointer would be appreciated.

 Thanks in advance.

                                                  Nils
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Calling routines from a Fortran library using python

2010-02-18 Thread Matthieu Brucher
If Nils has no access to the Fortran interface (and I don't think he
has, unless there is some .mod file somewhere?), he shouldn't use
f2py. Even if you know that the Fortran routine is named XXX, you
don't know how the arguments must be given. Addressing the C interface
directly is much safer.

Matthieu

2010/2/18 George Nurser gnur...@googlemail.com:
 I'm suggesting writing a *new* Fortran interface, coupled with f2py.
 The original library just needs to be linked to the new .so generated
 by f2py. I am hoping (perhaps optimistically) that can be done in the
 Fortran compilation...

 --George.

 On 18 February 2010 10:56, Matthieu Brucher matthieu.bruc...@gmail.com 
 wrote:
 If header files are provided, the work done by f2py is almost done.
 But you don't know the real Fortran interface, so you still have to
 use ctypes over f2py.

 Matthieu

 2010/2/18 George Nurser gnur...@googlemail.com:
 Hi Nils,
 I've not tried it, but you might be able to interface with f2py your
 own fortran subroutine that calls the library.
 Then issue the f2py command with extra arguments -llibname
 -Ldirectory with lib.

 See section 5 of
 http://cens.ioc.ee/projects/f2py2e/usersguide/index.html#command-f2py

 --George.


 On 18 February 2010 09:18, Nils Wagner nwag...@iam.uni-stuttgart.de wrote:
 Hi all,

 I have a static  library (*.a) compiled by gfortran but no
 source files.
 How can I call routines from that library using python ?

 Any pointer would be appreciated.

 Thanks in advance.

                                                  Nils
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




 --
 Information System Engineer, Ph.D.
 Blog: http://matt.eifelle.com
 LinkedIn: http://www.linkedin.com/in/matthieubrucher
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Calling routines from a Fortran library using python

2010-02-18 Thread Matthieu Brucher
2010/2/18 Christopher Barker chris.bar...@noaa.gov:
 Dag Sverre Seljebotn wrote:
 If it is not compiled with -fPIC, you can't statically link it into any
 shared library, it has to be statically linked into the final executable
 (so the standard /usr/bin/python will never work).

 Shows you what I (don't) know!

 The joys of closed-source software!

 On a similar topic -- is it possible to convert a *.so to a static lib?
 (on OS-X)? I did a bunch a googling a while back, and couldn't figure it
 out.

I don't think you can. A static library is nothing more than an
archive of object files (a Fortran module file is the same, BTW),
whereas a dynamic library is one big object with all its links already
resolved. Going from the latter to the former cannot be easily done.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Proposed fix for MKL and dynamic loading

2010-01-22 Thread Matthieu Brucher
 [1] BTW, I could not figure out how to link statically if I wanted -- is
 search_static_first = 1 supposed to work? Perhaps MKL will insist on
 loading some parts dynamically even then *shrug*.

 search_static_first is inherently fragile - using the linker to do this
 is much better (with -Wl,-Bshared/-Wl,-Bstatic flags).

How do you write the site.cfg accordingly?

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Proposed fix for MKL and dynamic loading

2010-01-21 Thread Matthieu Brucher
 try:
    import sys
    import ctypes
    _old_rtld = sys.getdlopenflags()
    sys.setdlopenflags(_old_rtld|ctypes.RTLD_GLOBAL)
    from numpy.linalg import lapack_lite
 finally:
    sys.setdlopenflags(_old_rtld)
    del sys; del ctypes; del _old_rtld

This also applies to scipy code that relies on BLAS. Lisandro
Dalcin gave me a tip that is close to this one some months ago
(http://matt.eifelle.com/2008/11/03/i-used-the-latest-mkl-with-numpy-and.../).
The best official solution is to statically link against the MKL with
Python.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Proposed fix for MKL and dynamic loading

2010-01-21 Thread Matthieu Brucher
2010/1/21 Dag Sverre Seljebotn da...@student.matnat.uio.no:
 Matthieu Brucher wrote:
 try:
    import sys
    import ctypes
    _old_rtld = sys.getdlopenflags()
    sys.setdlopenflags(_old_rtld|ctypes.RTLD_GLOBAL)
    from numpy.linalg import lapack_lite
 finally:
    sys.setdlopenflags(_old_rtld)
    del sys; del ctypes; del _old_rtld


 This also applies to scipy code that relies on BLAS. Lisandro
 Dalcin gave me a tip that is close to this one some months ago
 (http://matt.eifelle.com/2008/11/03/i-used-the-latest-mkl-with-numpy-and.../).
 The best official solution is to statically link against the MKL with
 Python.


 IIUC, it should be enough to load the .so-s in GLOBAL mode once. So it
 is probably enough to ensure NumPy is patched in a way so that SciPy
 loads NumPy which loads the .so-s in GLOBAL mode, so that a seperate
 patch for SciPy is not necesarry. (Remains to be tried, I'm moving on to
 building SciPy now.)

Indeed, it should be enough.

 As for static linking, do you mean linking MKL into the Python
 interpreter itself? Or statically linking with NumPy?

statically linking with numpy. This is what was advised to me by Intel.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Waf or scons/numscons for a C/Fortran/Cython/Python project -- what's your recommendation?

2010-01-16 Thread Matthieu Brucher
Hi,

SCons can also do configuration and installation steps. David made it
possible to use SCons capabilities from distutils, but you can still
make a C/Fortran/Cython/Python project with SCons.
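
For instance, a minimal SConstruct sketch (the file names here are
hypothetical; SCons provides Environment and friends as globals in an
SConstruct file, so nothing needs to be imported):

env = Environment()  # picks up the default C and Fortran tool chains
lib = env.SharedLibrary('mycore', ['wrap.c', 'core.f90'])
env.Alias('install', env.Install('/usr/local/lib', lib))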

Matthieu

2010/1/16 Kurt Smith kwmsm...@gmail.com:
 My questions here concern those familiar with configure/build/install
 systems such as distutils, setuptools, scons/numscons or waf
 (particularly David Cournapeau).

 I'm creating a tool known as 'fwrap' that has a component that needs
 to do essentially what f2py does now -- take fortran source code and
 compile it into a python extension module.  It uses Cython to create
 the extension module, and the current configure/build/install system
 is a very kludgy monkeypatched Cython.distutils and numpy.distutils
 setup.py script.  The setup.py script works for testing on my system
 here, but for going prime time, I dread using it.  David has made his
 critiques of distutils known for scientific software, and I agree.
 What's the best alternative?

 More specifically: what are the pros/cons between waf and
 scons/numscons for configure/build/install of a
 Fortran-C-Cython-Python project?

 Is scons capable of handling the configure and install stages, or is
 it only a build system?  As I understand it, numscons is called from
 distutils; distutils handles the configure/install stages.
 Scons/numscons have more fortran support than waf, from what I can
 see.  The main downside of using scons is that I'd still have to mess
 around with distutils.

 It looks like waf has explicit support for all three stages, and could
 be just what I'm looking for.  David has a few threads on the
 waf-users list about getting fortran working with waf.  Has that
 progressed much?  I want to contribute to this, for the benefit of
 scipy and my project, and to limit duplicated work.  From what I
 gather, the fortran configuration stuff in numscons is separated
 nicely from the scons-specific stuff :-)  Would it be a matter of
 porting the numscons fortran stuff into waf?

 Any comments you have on using waf/scons for numerical projects would
 be welcome!

 Kurt
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Objected-oriented SIMD API for Numpy

2009-10-22 Thread Matthieu Brucher
 OK, I should have said Object-oriented SIMD API that is implemented
 using hardware SIMD instructions.

 No, I think you're right. Using SIMD to refer to numpy-like
 operations is an abuse of the term not supported by any outside
 community that I am aware of. Everyone else uses SIMD to describe
 hardware instructions, not the application of a single syntactical
 element of a high level language to a non-trivial data structure
 containing lots of atomic data elements.

I agree with Sturla. For instance, nVidia GPUs do SIMD computations
on blocks of 16 values at a time, but the hardware behind them can't
compute on that much data at once. It's SIMD from our point of view,
just like Numpy ;)

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Objected-oriented SIMD API for Numpy

2009-10-21 Thread Matthieu Brucher
 Is it general, or just for simple operations in numpy and ufunc ? I
 remember that for music softwares, SIMD used to matter a lot, even for
 simple bus mixing (which is basically a ax+by with a, b scalars and x
 y the input arrays).

Indeed, it shouldn't :| I think the main reason might not be SIMD
itself, but the additional hypothesis you put on the arrays (no
aliasing). This way, today's compilers may not even need the actual
SIMD instructions.
I have the same opinion as Francesc: it would only be useful for
operations that need more computation than load/store.

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] MKL with 64bit crashes

2009-10-15 Thread Matthieu Brucher
Hi,

You need to use the static libraries, are you sure you currently do?

Matthieu

2009/10/15 Kashyap Ashwin ashwin.kash...@thomson.net:
 I followed the advice given by the Intel MKL link adviser
 (http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/)

 This is my new site.cfg:
 mkl_libs = mkl_intel_ilp64, mkl_gnu_thread, mkl_core

 I also exported CFLAGS=-fopenmp and built with the --fcompiler=gnu95.
 Now I get these errors on import:
 Running unit tests for numpy
 NumPy version 1.3.0
 NumPy is installed in
 /opt/Personalization/lib/python2.5/site-packages/numpy
 Python version 2.5.2 (r252:60911, Jul 22 2009, 15:33:10) [GCC 4.2.4
 (Ubuntu 4.2.4-1ubuntu3)]
 nose version 0.11.0

 *** libmkl_mc.so *** failed with error : libmkl_mc.so: undefined symbol:
 mkl_dft_commit_descriptor_s_c2c_md_omp
 *** libmkl_def.so *** failed with error : libmkl_def.so: undefined
 symbol: mkl_dft_commit_descriptor_s_c2c_md_omp
 MKL FATAL ERROR: Cannot load neither libmkl_mc.so nor libmkl_def.so


 Any hints?

 Thanks,
 Ashwin



 Your message:

 On Thu, Oct 15, 2009 at 8:04 AM, Kashyap Ashwin
 ashwin.kash...@thomson.net wrote:
 Hello,
 I compiled numpy-1.3.0 from sources on Ubuntu-hardy, x86-64 (Intel)
 with
 MKL.
 This is my site.cfg:
 [mkl]
 # library_dirs = /opt/intel/mkl/10.0.1.014/lib/32/
 library_dirs = /opt/intel/mkl/10.2.2.025/lib/em64t
 include_dirs = /opt/intel/mkl/10.2.2.025/include
 lapack_libs = mkl_lapack
 #mkl_libs = mkl_core, guide, mkl_gf_ilp64, mkl_def, mkl_gnu_thread,
 iomp5, mkl_vml_mc3
 mkl_libs = guide, mkl_core, mkl_gnu_thread, iomp5, mkl_gf_ilp64,
 mkl_mc3, mkl_def

 The order does not look right - I don't know the exact order (each
 version of the MKL changes the libraries), but you should respect the
 order as given in the MKL manual.

 MKL ERROR: Parameter 4 was incorrect on entry to DGESV

 This suggests an error when passing argument to MKL - I believe your
 version of MKL uses the gfortran ABI by default, and hardy uses g77 as
 the default fortran compiler. You should either recompile everything
 with gfortran, or regenerate the MKL interface libraries with g77 (as
 indicated in the manual).

 cheers,

 David
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Fwd: GPU Numpy

2009-09-10 Thread Matthieu Brucher
 Sure. Especially because NumPy is all about embarrassingly parallel problems
 (after all, this is how an ufunc works, doing operations
 element-by-element).

 The point is: are GPUs prepared to compete with general-purpose CPUs in
 all-road operations, like evaluating transcendental functions, conditionals
 all of this with a rich set of data types? I would like to believe that this
 is the case, but I don't think so (at least not yet).

A lot of nVidia's SDK functions are not done on the GPU. There are some
functions they provide where the actual computation is done on the CPU,
not on the GPU (I don't have an example at hand, but nVidia's forum is
full of them ;))

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] snow leopard and Numeric

2009-09-01 Thread Matthieu Brucher
Use Numpy instead of Numeric (which is no longer supported, I think)?

Matthieu

2009/9/1 Stefano Covino stefano_cov...@yahoo.it:
 Hello everybody,

 I have just upgraded my Mac laptop to snow leopard.
 However, I can no longer compile Numeric 24.2.

 Here is my output:

 [MacBook-Pro-di-Stefano:~/Pacchetti/Numeric-24.2] covino% python setup.py build
 running build
 running build_py
 running build_ext
 building 'RNG.RNG' extension
 gcc-4.2 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -arch i386 -arch ppc -arch x86_64 -pipe -IInclude -IPackages/FFT/Include -IPackages/RNG/Include -I/System/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c Packages/RNG/Src/ranf.c -o build/temp.macosx-10.6-universal-2.6/Packages/RNG/Src/ranf.o
 Packages/RNG/Src/ranf.c: In function ‘Mixranf’:
 Packages/RNG/Src/ranf.c:153: error: conflicting types for ‘gettimeofday’
 /usr/include/sys/time.h:210: error: previous declaration of
 ‘gettimeofday’ was here
 Packages/RNG/Src/ranf.c: In function ‘Mixranf’:
 Packages/RNG/Src/ranf.c:153: error: conflicting types for ‘gettimeofday’
 /usr/include/sys/time.h:210: error: previous declaration of
 ‘gettimeofday’ was here
 Packages/RNG/Src/ranf.c: In function ‘Mixranf’:
 Packages/RNG/Src/ranf.c:153: error: conflicting types for ‘gettimeofday’
 /usr/include/sys/time.h:210: error: previous declaration of
 ‘gettimeofday’ was here
 lipo: can't open input file: /var/folders/x4/x4lrvHJWH68+aWExBjO5Gk+++TI/-Tmp-//ccDCDxtF.out (No such file or directory)
 error: command 'gcc-4.2' failed with exit status 1


 Is there anything I could do?


 Thanks a lot,
        Stefano
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Accelerating NumPy computations [Was: GPU Numpy]

2009-08-21 Thread Matthieu Brucher
 I personally think that, in general, exposing GPU capabilities directly
 to NumPy would provide little service for most NumPy users.  I rather
 see letting this task to specialized libraries (like PyCUDA, or special
 versions of ATLAS, for example) that can be used from NumPy.

 a specialized library can be a good start, as currently there is too much
 uncertainty in the language (opencl vs nvidia api driver (pycuda, but not
 cublas, cufft, ...) vs c-cuda (cublas, cufft))

Indeed. In the future, if OpenCL is the way to go, it may even be
helpful to have Numpy using OpenCL directly, as AMD provides an SDK
for OpenCL, and with Larrabee approaching, Intel will surely provide
one of its own.

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Fwd: GPU Numpy

2009-08-06 Thread Matthieu Brucher
2009/8/6 Erik Tollerud erik.tolle...@gmail.com:
 Note that this is from a user perspective, as I have no particular plan of
 developing the details of this implementation, but I've thought for a long
 time that GPU support could be great for numpy (I would also vote for OpenCL
 support over cuda, although conceptually they seem quite similar)...
 But  what exactly would the large-scale plan be?  One of the advantages of
 GPGPUs is that they are particularly suited to rather complicated
 parallelizable algorithms,

You mean simple parallelizable algorithms, I suppose?

 and the numpy-level basic operations are just the
 simple arithmetic operations.  So while I'd love to see it working, it's
 unclear to me exactly how much is gained at the core numpy level, especially
 given that it's limited to single-precision on most GPUs.
 Now linear algebra or FFTs on a GPU would probably be a huge boon, I'll
 admit - especially if it's in the form of a drop-in replacement for the
 numpy or scipy versions.
 By the way, I noticed no one mentioned the GPUArray class in pycuda (and it
 looks like there's something similar in the pyopencl) - seems like that's
 already done a fair amount of the work...
 http://documen.tician.de/pycuda/array.html#pycuda.gpuarray.GPUArray
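
 For reference, basic GPUArray usage looks roughly like the following
 sketch (it needs pycuda installed and a CUDA-capable device):

 import numpy as np
 import pycuda.autoinit          # creates a CUDA context on import
 import pycuda.gpuarray as gpuarray

 x = gpuarray.to_gpu(np.random.randn(400).astype(np.float32))
 y = (2 * x).get()               # elementwise work on the GPU, copied back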


 On Thu, Aug 6, 2009 at 10:41 AM, James Bergstra bergs...@iro.umontreal.ca
 wrote:

 On Thu, Aug 6, 2009 at 1:19 PM, Charles R
  Harris charlesr.har...@gmail.com wrote:
  I almost looks like you are reimplementing numpy, in c++ no less. Is
  there
  any reason why you aren't working with a numpy branch and just adding
  ufuncs?

 I don't know how that would work.  The Ufuncs need a datatype to work
 with, and AFAIK, it would break everything if a numpy ndarray pointed
 to memory on the GPU.  Could you explain what you mean a little more?

  I'm also curious if you have thoughts about how to use the GPU
  pipelines in parallel.

 Current thinking for ufunc type computations:
 1) divide up the tensors into subtensors whose dimensions have
 power-of-two sizes (this permits a fast integer - ndarray coordinate
 computation using bit shifting),
 2) launch a kernel for each subtensor in it's own stream to use
 parallel pipelines.
 3) sync and return.
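
 A minimal pure-Python sketch of the coordinate trick from step 1 (an
 illustration only, not the actual theano/cuda-ndarray code; on the GPU
 the same shifts and masks would live in the generated kernel):

 def flat_to_coords(flat, shape):
     # assumes every entry of shape is a power of two
     coords = []
     for n in reversed(shape):          # innermost dimension first
         coords.append(flat & (n - 1))  # flat % n, via bit mask
         flat >>= n.bit_length() - 1    # flat // n, via shift
     return tuple(reversed(coords))

 assert flat_to_coords(5, (4, 2)) == (2, 1)  # row-major: 5 == 2*2 + 1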

 This is a pain to do without automatic code generation though.
 Currently we're using macros, but that's not pretty.
 C++ has templates, which we don't really use yet, but were planning on
 using.  These have some power to generate code.
 The 'theano' project (www.pylearn.org/theano) for which cuda-ndarray
 was created has a more powerful code generation mechanism similar to
 weave.   This algorithm is used in theano-cuda-ndarray.
 Scipy.weave could be very useful for generating code for specific
 shapes/ndims on demand, if weave could use nvcc.

 James
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion





-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] (newbie) How can I use NumPy to wrap my C++ class with 2-dimensional arrays?

2009-07-30 Thread Matthieu Brucher
Hi,

In fact, it's not that complicated. You may already know how to copy a
vector, and this is all you need
(http://matt.eifelle.com/2008/01/04/transforming-a-c-vector-into-a-numpy-array/).
You will have to copy your data; it is the safest way to ensure that
the data is always valid.
For the std::vector<float>[], I suggest you convert it to a single
vector, as the data inside this array is not contiguous and it can thus
be cumbersome to create a Numpy array from it.

Once the structure is prepared, you still have to allocate the
dimensions, but examples of this should be available online.

In case you know that the C++ data won't go away before the Python
array, you can always wrap the container
(http://matt.eifelle.com/2008/11/04/exposing-an-array-interface-with-swig-for-a-cc-structure/)
with SWIG.

Matthieu

2009/7/30 Raymond de Vries ree...@zonnet.nl:
 Hi everyone,

 (I sent this message yesterday as well but somehow it didn't come
 through...)

 I would like to ask your advice how I can use NumPy to wrap my existing
 C++ library with 2-dimensional arrays. I am wrapping the library with
 swig and I have the typemap function declaration. But now I am
 struggling a lot to find an implementation for getting access to the
 2-dimensional arrays in my C++ classes. I have wrapped most of the
 library but the multi-dimensional arrays are problematic. From what I
 have read, I can use NumPy for this..

 There is so much information about NumPy, Numeric etc that it is not
 clear anymore which example I can use best. It's clear that NumPy is
 probably the way to go.

 This is a small example of the class that I would like to wrap:
 struct MyClass {
  float array1[2][2];
  std::vector<float> array2[2];
 };

 I've also read that I need to call import_array() but where should I put
 this?

 Is there a small example with NumPy c-api to wrap/convert my
 2-dimensional arrays to python?

 Thanks a lot already
 Raymond
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] (newbie) How can I use NumPy to wrap my C++ class with 2-dimensional arrays?

2009-07-30 Thread Matthieu Brucher
2009/7/30 Raymond de Vries ree...@zonnet.nl:
 Hi,

 I'm sorry, I guess I did not search properly before For the record,
 I solved my import_array() question: just need to add
 %init %{
 import_array();
 %}

Indeed, the solution is as simple as this ;) The trouble is to find
the information!

 and the typemap for the std::vectordouble works ok. Thanks for that!

You're welcome.

 Now the rest...

 Thanks
 Raymond


 Raymond de Vries wrote:
 Hi Matthieu,

 Thanks for your quick reply! I did not find this page earlier. I have
 studied the first page and now I realize again that all my efforts so
 far (last few days) have crashed on the lack of calling import_array().
 Do you have a suggestion for calling import_array() as well? Everywhere
 I see *that* import_array() needs to be called, but not how...

 I will take a look at the conversion to a single vector. I hope this can
 be avoided because I cannot simply change the library.

 regards
 Raymond


 Matthieu Brucher wrote:

 Hi,

 In fact, it's not that complicated. You may already know how to copy a
 vector, and this is all you need
 (http://matt.eifelle.com/2008/01/04/transforming-a-c-vector-into-a-numpy-array/).
 You will have to copy your data; it is the safest way to ensure that
 the data is always valid.
 For the std::vector<float>[], I suggest you convert it to a single
 vector, as the data inside this array is not contiguous and it can thus
 be cumbersome to create a Numpy array from it.

 Once the structure is prepared, you have to allocate the dimensions,
 but this may be available online.

 In case you know that the C++ data won't go away before the Python
 array, you can always wrap the container
 (http://matt.eifelle.com/2008/11/04/exposing-an-array-interface-with-swig-for-a-cc-structure/)
 with SWIG.

 Matthieu

 2009/7/30 Raymond de Vries ree...@zonnet.nl:


 Hi everyone,

 (I sent this message yesterday as well but somehow it didn't come
 through...)

 I would like to ask your advice how I can use NumPy to wrap my existing
 C++ library with 2-dimensional arrays. I am wrapping the library with
 swig and I have the typemap function declaration. But now I am
 struggling a lot to find an implementation for getting access to the
 2-dimensional arrays in my C++ classes. I have wrapped most of the
 library but the multi-dimensional arrays are problematic. From what I
 have read, I can use NumPy for this..

 There is so much information about NumPy, Numeric etc that it is not
 clear anymore which example I can use best. It's clear that NumPy is
 probably the way to go.

 This is a small example of the class that I would like to wrap:
 struct MyClass {
  float array1[2][2];
  std::vector<float> array2[2];
 };

 I've also read that I need to call import_array() but where should I put
 this?

 Is there a small example with NumPy c-api to wrap/convert my
 2-dimensional arrays to python?

 Thanks a lot already
 Raymond
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion







 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] (newbie) How can I use NumPy to wrap my C++ class with 2-dimensional arrays?

2009-07-30 Thread Matthieu Brucher
2009/7/30 Raymond de Vries ree...@zonnet.nl:
 Hi

 Indeed, the solution is as simple as this ;) The trouble is to find
 the information!

 Yes, there is so much information everywhere. And it's hard to make the
 first steps.
 For the std::vector<float>[], I suggest you convert it to a single
 vector, as the data inside this array is not contiguous and it can thus
 be cumbersome to create a Numpy array from it.

 I am now ready to do this. To be certain, 'contiguous' means that the
 std::vector's are not the same length, right? Would that mean that I'd
 better use a tuple of lists or so? (or list of lists or so).

 thanks for your time
 Raymond

Contiguous means that the whole data is in one big chunk. If it is
not, you have to rely on other Numpy functions (I don't know all of
them, perhaps you will find one that satisfies your need), and the
data may then be copied (not sure though).
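
One such function, for instance, is numpy.ascontiguousarray, which
copies only when the input is not already contiguous:

import numpy as np

a = np.arange(6).reshape(2, 3)[:, ::2]  # a non-contiguous view
print(a.flags['C_CONTIGUOUS'])          # False
b = np.ascontiguousarray(a)             # contiguous copy of the data
print(b.flags['C_CONTIGUOUS'])          # True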

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Optimizing reduction loops (sum(), prod(), et al.)

2009-07-09 Thread Matthieu Brucher
2009/7/9 Pauli Virtanen pav...@iki.fi:
 On 2009-07-08, Stéfan van der Walt ste...@sun.ac.za wrote:
 I know very little about cache optimality, so excuse the triviality of
 this question: Is it possible to design this loop optimally (taking
 into account certain build-time measurable parameters), or is it the
 kind of thing that can only be discovered by tuning at compile-time?
 ATNumPy... scary :-)

 I'm still kind of hoping that it's possible to make some minimal
 assumptions about CPU caches in general, and have a rule that
 decides a code path that is good enough, if not optimal.

Unfortunately, this is not possible. We've been playing with blocking
loops for a long time in finite difference schemes, and it is always
compiler dependent (that is, the optimal size of the block is
bandwidth dependent and even operation dependent).
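
To illustrate what blocking means here, a toy NumPy-level sketch (the
block size is precisely the machine-dependent tunable in question; a
real implementation would pick it per CPU, and do this in C):

import numpy as np

def blocked_sum(a, block=4096):
    # reduce cache-sized chunks so the working set stays in cache
    a = a.ravel()
    return sum(a[i:i + block].sum() for i in range(0, a.size, block))

x = np.arange(100000, dtype=float)
assert np.allclose(blocked_sum(x), x.sum())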

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] performing operations in-place in numpy

2009-07-09 Thread Matthieu Brucher
2009/7/9 Citi, Luca lc...@essex.ac.uk:
 Hello

 The problem is not PyArray_Conjugate itself.
 The problem is that whenever you call a function from the C side
 and one of the inputs has ref_count 1, it can be overwritten.
 This is not a problem from the python side because if the
 ufunc sees a ref_count=1 it means that no python object is referencing to it.

Does this also hold if you are using the Numpy API directly? Say I've
decided to write some operation with the Numpy API: will I never have
one of my arrays with ref_count == 1?

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Optimizing reduction loops (sum(), prod(), et al.)

2009-07-09 Thread Matthieu Brucher
2009/7/9 David Cournapeau da...@ar.media.kyoto-u.ac.jp:
 Matthieu Brucher wrote:

 Unfortunately, this is not possible. We've been playing with blocking
 loops for a long time in finite difference schemes, and it is always
 compiler dependent

 You mean CPU dependent, right ? I can't see how a reasonable optimizing
 compiler could make a big difference on cache effects ?

Yes, of course, CPU dependent...

 @ Pauli: if (optionally) knowing a few cache info would help you, I
 could implement it. It should not be too difficult for most cases we
 care about,

 David

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] performance matrix multiplication vs. matlab

2009-06-09 Thread Matthieu Brucher
2009/6/9 Robin robi...@gmail.com:
 On Mon, Jun 8, 2009 at 7:14 PM, David Warde-Farley d...@cs.toronto.edu wrote:

 On 8-Jun-09, at 8:33 AM, Jason Rennie wrote:

 Note that EM can be very slow to converge:

 That's absolutely true, but EM for PCA can be a life saver in cases where
 diagonalizing (or even computing) the full covariance matrix is not a
 realistic option. Diagonalization can be a lot of wasted effort if all you
 care about are a few leading eigenvectors. EM also lets you deal with
 missing values in a principled way, which I don't think you can do with
 standard SVD.

 EM certainly isn't a magic bullet but there are circumstances where it's
 appropriate. I'm a big fan of the ECG paper too. :)

 Hi,

 I've been following this with interest... although I'm not really
 familiar with the area. At the risk of drifting further off topic I
 wondered if anyone could recommend an accessible review of these kind
 of dimensionality reduction techniques... I am familiar with PCA and
 know of diffusion maps and ICA and others, but I'd never heard of EM
 and I don't really have any idea how they relate to each other and
 which might be better for one job or the other... so some sort of
 primer would be really handy.

Hi,

Check Chris Bishop's publication on Probabilistic Principal Component
Analysis; it gives the parallel between the two (EM is in fact just a
way of computing PPCA, and with some Gaussian assumptions, you get
PCA).
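
For the curious, a minimal sketch of the EM iteration for PCA (in the
spirit of Roweis' "EM algorithms for PCA and SPCA"; X is assumed to be
centered, with one sample per row, and k is the number of components):

import numpy as np

def em_pca(X, k, n_iter=50):
    n, d = X.shape
    W = np.random.randn(d, k)  # random initial subspace
    for _ in range(n_iter):
        # E-step: project the data onto the current subspace
        Z = np.dot(X, np.dot(W, np.linalg.inv(np.dot(W.T, W))))
        # M-step: re-estimate the subspace from the projections
        W = np.dot(X.T, np.dot(Z, np.linalg.inv(np.dot(Z.T, Z))))
    Q, _ = np.linalg.qr(W)  # orthonormal basis of the span
    return Q                # columns approximate the leading PCs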

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] second 2d fft gives the same result as fft+ifft

2009-06-09 Thread Matthieu Brucher
Hi,

Is it really?
You only show the real part of the FFT, so you can't be sure of what
you are saying.
Don't forget that the only difference between the FFT and the iFFT is
(besides the scaling factor) a minus sign in the exponent.
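
A quick numerical check of that last point: for a real-valued array g,
ifft2(g) equals conj(fft2(g)) divided by the number of samples, so the
real parts of fft2(g) and ifft2(g) differ only by that scale factor,
which is exactly why FFfx and IFfx look identical in your plots:

import numpy as np

g = np.random.rand(8, 8)           # any real-valued array
a = np.fft.fft2(g).real
b = np.fft.ifft2(g).real * g.size  # undo the 1/(N*M) normalization
print(np.allclose(a, b))           # True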

Matthieu

2009/6/9 bela bela.miha...@gmail.com:

 I tried to calculate the second fourier transformation of an image with the
 following code below:

 ---
 import pylab
 import numpy

 ### Create a simple image

 fx = numpy.zeros( 128**2 ).reshape(128,128).astype( numpy.float )

 for i in xrange(8):
        for j in xrange(8):
                fx[i*8+16][j*8+16] = 1.0

 ### Fourier Transformations

 Ffx = numpy.copy( numpy.fft.fft2( fx ).real )   # 1st fourier
 FFfx = numpy.copy( numpy.fft.fft2( Ffx ).real )  # 2nd fourier
 IFfx = numpy.copy( numpy.fft.ifft2( Ffx ).real )   # inverse fourier

 ### Display result

 pylab.figure( 1, figsize=(8,8), dpi=125 )

 pylab.subplot(221)
 pylab.imshow( fx, cmap=pylab.cm.gray )
 pylab.colorbar()
 pylab.title( fx )

 pylab.subplot(222)
 pylab.imshow( Ffx, cmap=pylab.cm.gray )
 pylab.colorbar()
 pylab.title( Ffx )

 pylab.subplot(223)
 pylab.imshow( FFfx, cmap=pylab.cm.gray )
 pylab.colorbar()
 pylab.title( FFfx )

 pylab.subplot(224)
 pylab.imshow( IFfx, cmap=pylab.cm.gray )
 pylab.colorbar()
 pylab.title( IFfx )

 pylab.show()
 ---

 On my computer FFfx is the same as IFfx. But why?

 I uploaded a screenshot about my result here:
 http://server6.theimagehosting.com/image.php?img=second_fourier.png

 Bela


 --
 View this message in context: 
 http://www.nabble.com/second-2d-fft-gives-the-same-result-as-fft%2Bifft-tp23945026p23945026.html
 Sent from the Numpy-discussion mailing list archive at Nabble.com.

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] performance matrix multiplication vs. matlab

2009-06-08 Thread Matthieu Brucher
2009/6/8 Gael Varoquaux gael.varoqu...@normalesup.org:
 On Mon, Jun 08, 2009 at 12:29:08AM -0400, David Warde-Farley wrote:
 On 7-Jun-09, at 6:12 AM, Gael Varoquaux wrote:

  Well, I do bootstrapping of PCAs, that is SVDs. I can tell you, it
  makes
  a big difference, especially since I have 8 cores.

 Just curious Gael: how many PC's are you retaining? Have you tried
 iterative methods (i.e. the EM algorithm for PCA)?

 I am using the heuristic exposed in
 http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4562996

 We have very noisy and long time series. My experience is that most
 model-based heuristics for choosing the number of PCs retained give us
 way too much on this problem (they simply keep diverging if I add noise
 at the end of the time series). The algorithm we use gives us ~50
 interesting PCs (each composed of 50 000 dimensions). That happens to be
 quite right based on our experience with the signal. However, being
 fairly new to statistics, I am not aware of the EM algorithm that you
 mention. I'd be interested in a reference, to see if I can use that
 algorithm. The PCA bootstrap is time-consuming.

Hi,

Given the number of PCs, I think you may just be measuring noise.
As said in several manifold reduction publications (such as the ones by
Torbjorn Vik, who published on robust PCA for medical imaging), you
cannot expect to have more than 4 or 5 meaningful PCs, due to the
curse of dimensionality. If you want 50 PCs, you have to have at least...
10^50 samples (roughly ten samples per dimension), which is quite a
lot, let's say it this way.
According to the literature, a usual manifold can be described by 4
or 5 variables. If you have more, it may be that you are violating
your hypothesis, here the linearity of your data (and as it is medical
imaging, you know from the beginning that this hypothesis is wrong).
So if you really want to find something meaningful and/or physical,
you should use a real dimensionality reduction, preferably a
non-linear one.

Just my 2 cents ;)

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] performance matrix multiplication vs. matlab

2009-06-08 Thread Matthieu Brucher
2009/6/8 Gael Varoquaux gael.varoqu...@normalesup.org:
 On Mon, Jun 08, 2009 at 08:58:29AM +0200, Matthieu Brucher wrote:
 Given the number of PCs, I think you may just be measuring noise.
 As said in several manifold reduction publications (such as those by
 Torbjorn Vik, who published on robust PCA for medical imaging), you
 cannot expect to have more than 4 or 5 meaningful PCs, due to the
 curse of dimensionality. If you want 50 meaningful PCs, you have to
 have at least... 10^50 samples, which is quite a lot, let's say it
 this way.
 According to the literature, a usual manifold can be described by 4
 or 5 variables. If you find more, it may be that you are violating
 your hypothesis, here the linearity of your data (and as it is medical
 imaging, you know from the beginning that this hypothesis is wrong).
 So if you really want to find something meaningful and/or physical,
 you should use a real dimensionality reduction, preferably a
 non-linear one.

 I am not sure I am following you: I have time-varying signals. I am not
 taking a shot of the same process over and over again. My intuition tells
 me that I have more than 5 meaningful patterns.

How many samples do you have? 1? a million? a billion? The problem
with 50 PCs is that your search space is mostly empty, thanks to the
curse of dimensionality. This means that you *should* not try to attach
meaning to the 10th and following PCs.
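
To see how empty it gets, here is a small sketch (my own toy numbers):
in high dimensions, pairwise distances between random points
concentrate around a single value, so the data cloud has almost no
usable geometry left for the trailing PCs.

import numpy as np

rng = np.random.RandomState(0)
mask = ~np.eye(300, dtype=bool)          # ignore the zero self-distances
for d in (2, 10, 50):
    pts = rng.rand(300, d)               # 300 uniform points in [0, 1]^d
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))[mask]
    # relative spread of the distances shrinks as d grows
    print(d, dist.std() / dist.mean())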

 Anyhow, I do some more analysis on top of that (ICA actually), and I do
 find more than 5 patterns of interest that are not noise.

ICA suffers from the same problems as PCA. And I'm not even talking
about the linearity hypothesis, which is never respected.

 So maybe I should be using some non-linear dimensionality reduction, but
 what I am doing works, and I can write a generative model of it. Most
 importantly, it is actually quite computationally simple.

Thanks to linearity ;)
The problem is that you will get a lot of confounds this way (your 50
PCs can in fact be the effect of just 5 variables acting nonlinearly).
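
A quick illustration of that confound (a made-up example: one latent
variable, pushed through 20 different nonlinear functions):

import numpy as np

t = np.linspace(0.0, 1.0, 500)   # a single underlying variable
# embed it nonlinearly in 20 dimensions
Y = np.column_stack([np.sin((k + 1) * np.pi * t) for k in range(20)])
Y -= Y.mean(axis=0)
s = np.linalg.svd(Y, compute_uv=False)
print(s ** 2 / np.sum(s ** 2))   # linear PCA reports ~20 comparable
                                 # components, although one variable
                                 # generated everything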

 However, if you can point me to methods that you believe are better (and
 tell me why you believe so), I am all ears.

My thesis was on nonlinear dimensionality reduction (which is why I
believe so, especially in the medical imaging field), but it always
needs some adaptation. It depends on what you want to do, the time you
can spend processing data, and so on. Suffice it to say we started with
PCA some years ago and then switched to nonlinear reduction because of
the emptiness of the search space and the nonlinearity of the brain
space (no idea what my former lab is doing now, but it is used for DTI
at least).
You should check some books on the subject, and you should definitely
read something about the curse of dimensionality (at least if you want
to get published, as people know about this issue in the medical
field), even if you do not use nonlinear techniques.

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] performance matrix multiplication vs. matlab

2009-06-08 Thread Matthieu Brucher
2009/6/8 David Warde-Farley d...@cs.toronto.edu:

 On 8-Jun-09, at 1:17 AM, David Cournapeau wrote:

 I would not be surprised if David had this paper in mind :)

 http://www.cs.toronto.edu/~roweis/papers/empca.pdf

 Right you are :)

 There is a slight trick to it, though, in that it won't produce an
 orthogonal basis on its own, just something that spans that principal
 subspace. So you typically have to at least extract the first PC
 independently to uniquely orient your basis. You can then either
 subtract off the projection of the data on the 1st PC and find the
 next one, one at at time, or extract a spanning set all at once and
 orthogonalize with respect to the first PC.

 David

Also, Ch. Bishop has an article on using EM for PCA, "Probabilistic
Principal Components Analysis", where I think he proves the equivalence
as well.
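
For concreteness, here is a minimal sketch of that EM iteration (my own
transcription, assuming a centered (d, n) data matrix; the function name
and arguments are just illustrative), including the orthogonalization
step David describes:

import numpy as np

def em_pca(Y, k, n_iter=50, seed=0):
    # Y: (d, n) centered data matrix; k: number of components to keep.
    rng = np.random.RandomState(seed)
    d, n = Y.shape
    W = rng.randn(d, k)                  # random initial basis
    for _ in range(n_iter):
        # E-step: latent coordinates given the current basis
        X = np.linalg.solve(np.dot(W.T, W), np.dot(W.T, Y))
        # M-step: new basis given the latent coordinates
        W = np.dot(np.dot(Y, X.T), np.linalg.inv(np.dot(X, X.T)))
    # W spans the principal subspace but is not orthonormal: orthogonalize,
    # then rotate so the columns become the ordered, oriented PCs.
    Q, _ = np.linalg.qr(W)
    proj = np.dot(Q.T, Y)
    _, _, Vt = np.linalg.svd(np.dot(proj, proj.T))
    return np.dot(Q, Vt.T)

Each iteration costs O(dnk), which is what makes this attractive when d
is huge and k is small, as in the 50 000-dimension case above.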

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] scipy 0.7.1rc2 released

2009-06-08 Thread Matthieu Brucher
2009/6/8 Matthieu Brucher matthieu.bruc...@gmail.com:
 I'm trying to compile it with ICC 10.1.018, and it fails :|

 icc: scipy/special/cephes/const.c
 scipy/special/cephes/const.c(94): error: floating-point operation
 result is out of range
  double INFINITY = 1.0/0.0;  /* 99e999; */
                       ^

 scipy/special/cephes/const.c(99): error: floating-point operation
 result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;
                  ^

 scipy/special/cephes/const.c(99): error: floating-point operation
 result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;
                            ^

 compilation aborted for scipy/special/cephes/const.c (code 2)
 scipy/special/cephes/const.c(94): error: floating-point operation
 result is out of range
  double INFINITY = 1.0/0.0;  /* 99e999; */
                       ^

 scipy/special/cephes/const.c(99): error: floating-point operation
 result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;
                  ^

 scipy/special/cephes/const.c(99): error: floating-point operation
 result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;

 At least, it seems to pick up the Fortran compiler correctly (which
 0.7.0 didn't seem to do ;))

I manually fixed the files (mconf.h, as well as yn.c, which uses NAN;
NAN may not be defined if NANS is defined, which is the case here for
ICC), but I ran into another error (one of the reasons I tried
numscons before):

/appli/intel/10.1.018/intel64/fce/bin/ifort -shared -shared
-nofor_main 
build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/scipy/fftpack/_fftpackmodule.o
build/temp.linux-x86_64-2.5/scipy/fftpack/src/zfft.o
build/temp.linux-x86_64-2.5/scipy/fftpack/src/drfft.o
build/temp.linux-x86_64-2.5/scipy/fftpack/src/zrfft.o
build/temp.linux-x86_64-2.5/scipy/fftpack/src/zfftnd.o
build/temp.linux-x86_64-2.5/build/src.linux-x86_64-2.5/fortranobject.o
-Lbuild/temp.linux-x86_64-2.5 -ldfftpack -o
build/lib.linux-x86_64-2.5/scipy/fftpack/_fftpack.so
ld: build/temp.linux-x86_64-2.5/libdfftpack.a(dffti1.o): relocation
R_X86_64_32S against `a local symbol' can not be used when making a
shared object; recompile with -fPIC
build/temp.linux-x86_64-2.5/libdfftpack.a: could not read symbols: Bad value
ld: build/temp.linux-x86_64-2.5/libdfftpack.a(dffti1.o): relocation
R_X86_64_32S against `a local symbol' can not be used when making a
shared object; recompile with -fPIC
build/temp.linux-x86_64-2.5/libdfftpack.a: could not read symbols: Bad value

It seems that the library is not compiled with -fPIC (perhaps because
it is a static library?). My compiler options are:

Fortran f77 compiler: ifort -FI -w90 -w95 -xW -axP -O3 -unroll
Fortran f90 compiler: ifort -FR -xW -axP -O3 -unroll
Fortran fix compiler: ifort -FI -xW -axP -O3 -unroll

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] scipy 0.7.1rc2 released

2009-06-08 Thread Matthieu Brucher
I'm trying to compile it with ICC 10.1.018, and it fails :|

icc: scipy/special/cephes/const.c
scipy/special/cephes/const.c(94): error: floating-point operation
result is out of range
  double INFINITY = 1.0/0.0;  /* 99e999; */
   ^

scipy/special/cephes/const.c(99): error: floating-point operation
result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;
  ^

scipy/special/cephes/const.c(99): error: floating-point operation
result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;
^

compilation aborted for scipy/special/cephes/const.c (code 2)
scipy/special/cephes/const.c(94): error: floating-point operation
result is out of range
  double INFINITY = 1.0/0.0;  /* 99e999; */
   ^

scipy/special/cephes/const.c(99): error: floating-point operation
result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;
  ^

scipy/special/cephes/const.c(99): error: floating-point operation
result is out of range
  double NAN = 1.0/0.0 - 1.0/0.0;

At least, it seems to pick up the Fortran compiler correctly (which
0.7.0 didn't seem to do ;))

Matthieu

2009/6/7 Adam Mercer ramer...@gmail.com:
 On Fri, Jun 5, 2009 at 06:09, David Cournapeaucourn...@gmail.com wrote:

 Please test it ! I am particularly interested in results for scipy
 binaries on mac os x (do they work on ppc).

 Test suite passes on Intel Mac OS X (10.5.7) built from source:

 OK (KNOWNFAIL=6, SKIP=21)
 nose.result.TextTestResult run=3486 errors=0 failures=0

 Cheers

 Adam




-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] scipy 0.7.1rc2 released

2009-06-08 Thread Matthieu Brucher
2009/6/8 David Cournapeau da...@ar.media.kyoto-u.ac.jp:
 Matthieu Brucher wrote:
 I'm trying to compile it with ICC 10.1.018, and it fails :|

 icc: scipy/special/cephes/const.c
 scipy/special/cephes/const.c(94): error: floating-point operation
 result is out of range
   double INFINITY = 1.0/0.0;  /* 99e999; */
                        ^

 scipy/special/cephes/const.c(99): error: floating-point operation
 result is out of range
   double NAN = 1.0/0.0 - 1.0/0.0;
                   ^

 scipy/special/cephes/const.c(99): error: floating-point operation
 result is out of range
   double NAN = 1.0/0.0 - 1.0/0.0;
                             ^

 compilation aborted for scipy/special/cephes/const.c (code 2)
 scipy/special/cephes/const.c(94): error: floating-point operation
 result is out of range
   double INFINITY = 1.0/0.0;  /* 99e999; */
                        ^

 scipy/special/cephes/const.c(99): error: floating-point operation
 result is out of range
   double NAN = 1.0/0.0 - 1.0/0.0;
                   ^

 scipy/special/cephes/const.c(99): error: floating-point operation
 result is out of range
   double NAN = 1.0/0.0 - 1.0/0.0;

 At least, it seems to pick up the Fortran compiler correctly (which
 0.7.0 didn't seem to do ;))


 This code makes me cry... I know Visual Studio won't like it either.
 Cephes is a constant source of problems. As I mentioned a couple of
 months ago, I think the only solution is to rewrite most of
 scipy.special, at least the parts using cephes, using for example boost
 algorithms and unit tests. But I have not started anything concrete -
 Pauli did most of the work on scipy.special recently (Kudos to Pauli for
 consistently improving scipy.special, BTW)

 cheers,

It could be simply enhanced by refactoring only mconf.h with proper
compiler flags, and by fixing yn.c to remove the NAN detection (which
should live in mconf.h).
Unfortunately, I have no time for this at the moment (besides the fact
that it is on my workstation, not at home).

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] scipy 0.7.1rc2 released

2009-06-08 Thread Matthieu Brucher
Good luck with fixing this then :|

I've tried to build scipy with the MKL and with ATLAS, and in both
cases I get a segmentation fault. With the MKL, it is the same one as
in a previous mail; with ATLAS, it happens here:
Regression test for #946. ... Segmentation fault

A bad ATLAS compilation?

Matthieu

 It could be simply enhanced by refactoring only mconf.h with proper
 compiler flags, and fix yn.c to remove the NAN detection (as it should
 be in the mconf.h).

 The NAN and co. definitions should be handled with the portable
 definitions we now have in numpy - I just have to find a way to reuse
 the corresponding code outside numpy (distutils currently does not
 handle proper installation of libraries built through build_clib);
 it is on my TODO list for scipy 0.8.

 Unfortunately, this is only the tip of the iceberg. A lot of code in
 cephes uses #ifdef on platform specifics, and let's not forget it
 is pre-ANSI C code (K&R declarations), with a lot of hidden bugs.

 cheers,

 David




-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] scipy 0.7.1rc2 released

2009-06-08 Thread Matthieu Brucher
2009/6/8 David Cournapeau da...@ar.media.kyoto-u.ac.jp:
 Matthieu Brucher wrote:
 Good luck with fixing this then :|

 I've tried to build scipy with the MKL and ATLAS, and I have in both
 cases a segmentation fault. With the MKL, it is the same as in a
 previous mail, and for ATLAS it is there:
 Regression test for #946. ... Segmentation fault


 Could you try the last revision in the 0.7.x branch? There were quite a
 few problems with this exact code (that's the only reason why scipy
 0.7.1 is not released yet, actually), and I added an ugly workaround for
 the time being, but that should work.

 David

Is there a way to get a tarball from the repository on the webpage? I
can't do a checkout (no TortoiseSVN installed on my Windows and no web
access from Linux :()

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

