Re: [Numpy-discussion] new mingw-w64 based numpy and scipy wheel (still experimental)

2015-01-27 Thread Nathaniel Smith
On Tue, Jan 27, 2015 at 8:53 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:

 On Mon, Jan 26, 2015 at 4:30 PM, Carl Kleffner cmkleff...@gmail.com wrote:

 Thanks for all your ideas. The next version will contain an augmented
 libopenblas.dll in both numpy and scipy. In the long term I would prefer an
 external openblas wheel package, if there is an agreement about this among
 numpy-dev.


 Sounds fine in principle, but reliable dependency handling will be hard to
 support in setup.py. You'd want the dependency on OpenBLAS when installing a
 complete set of wheels, but without making it impossible to:

   - building against ATLAS/MKL/... from source with pip or distutils
   - allowing use of a local wheelhouse which uses ATLAS/MKL/... wheels
   - pip install numpy --no-use-wheel
   - etc.

 Static bundling is a lot easier to get right.

In principle I think this should be easy: when installing a .whl, pip
or whatever looks at the dependencies declared in the distribution
metadata file inside the wheel. When installing via setup.py, pip or
whatever uses the dependencies declared by setup.py. We just have to
make sure that the wheels we distribute have the right metadata inside
them and everything should work.

Accomplishing this may be somewhat awkward with existing tools, but as
a worst-case/proof-of-concept approach we could just have a step in
the wheel build that opens up the .whl and edits it to add the
dependency. Ugly, but it'd work.
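
For concreteness, here is a minimal sketch of what that proof-of-concept
patching step could look like (the wheel filename and the 'openblas'
requirement name are made up for illustration, and a real tool would also
have to refresh the RECORD hashes, which this skips):

import shutil
import zipfile

def add_requires_dist(wheel_path, requirement):
    """Rewrite a built wheel so its METADATA declares an extra dependency."""
    tmp_path = wheel_path + '.tmp'
    with zipfile.ZipFile(wheel_path) as src, \
         zipfile.ZipFile(tmp_path, 'w', zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename.endswith('.dist-info/METADATA'):
                lines = data.decode('utf-8').splitlines()
                # Add the dependency at the end of the email-style header
                # block, i.e. just before the first blank line (if any).
                cut = lines.index('') if '' in lines else len(lines)
                lines.insert(cut, 'Requires-Dist: ' + requirement)
                data = ('\n'.join(lines) + '\n').encode('utf-8')
            dst.writestr(item, data)
    shutil.move(tmp_path, wheel_path)

# hypothetical filename and requirement, purely for illustration
add_requires_dist('numpy-1.9.1-cp27-none-win_amd64.whl', 'openblas')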

-n

-- 
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] new mingw-w64 based numpy and scipy wheel (still experimental)

2015-01-27 Thread Ralf Gommers
On Mon, Jan 26, 2015 at 4:30 PM, Carl Kleffner cmkleff...@gmail.com wrote:

 Thanks for all your ideas. The next version will contain an augmented
 libopenblas.dll in both numpy and scipy. In the long term I would prefer
 an external openblas wheel package, if there is an agreement about this
 among numpy-dev.


Sounds fine in principle, but reliable dependency handling will be hard to
support in setup.py. You'd want the dependency on OpenBLAS when installing
a complete set of wheels, but without making it impossible to:

  - building against ATLAS/MKL/... from source with pip or distutils
  - allowing use of a local wheelhouse which uses ATLAS/MKL/... wheels
  - pip install numpy --no-use-wheel
  - etc.

Static bundling is a lot easier to get right.


 Another idea for the future is to conditionally load a debug version of
 libopenblas instead. Together with backtrace.dll (part of mingwstatic, but
 undocumented right now), a meaningful stack trace will be given in case of
 segfaults inside code compiled with mingwstatic.


 2015-01-26 2:16 GMT+01:00 Sturla Molden sturla.mol...@gmail.com:

 On 25/01/15 22:15, Matthew Brett wrote:

  I agree that shipping openblas with both numpy and scipy seems
  perfectly reasonable to me - I don't think anyone will much care about
  the 30M, and I think our job is to make something that works with the
  least complexity and likelihood of error.

 Yes. Make something that works first, optimize for space later.


+1

Ralf


   It would be good to rename the dll according to the package and
  version though, to avoid a scipy binary using a pre-loaded but
  incompatible 'libopenblas.dll'.   Say something like
  openblas-scipy-0.15.1.dll - on the basis that there can only be one
  copy of scipy loaded at a time.

 That is a good idea and we should do this for NumPy too I think.



 Sturla




___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] new mingw-w64 based numpy and scipy wheel (still experimental)

2015-01-27 Thread Ralf Gommers
On Tue, Jan 27, 2015 at 10:13 PM, Nathaniel Smith n...@pobox.com wrote:

 On Tue, Jan 27, 2015 at 8:53 PM, Ralf Gommers ralf.gomm...@gmail.com
 wrote:
 
  On Mon, Jan 26, 2015 at 4:30 PM, Carl Kleffner cmkleff...@gmail.com
 wrote:
 
  Thanks for all your ideas. The next version will contain an augmented
  libopenblas.dll in both numpy and scipy. In the long term I would
 prefer an
  external openblas wheel package, if there is an agreement about this
 among
  numpy-dev.
 
 
  Sounds fine in principle, but reliable dependency handling will be hard
 to
  support in setup.py. You'd want the dependency on Openblas when
 installing a
  complete set of wheels, but not make it impossible to use:
 
- building against ATLAS/MKL/... from source with pip or distutils
- allowing use of a local wheelhouse which uses ATLAS/MKL/... wheels
- pip install numpy --no-use-wheel
- etc.
 
  Static bundling is a lot easier to get right.

 In principle I think this should be easy: when installing a .whl, pip
 or whatever looks at the dependencies declared in the distribution
 metadata file inside the wheel. When installing via setup.py, pip or
 whatever uses the dependencies declared by setup.py. We just have to
 make sure that the wheels we distribute have the right metadata inside
 them and everything should work.

 Accomplishing this may be somewhat awkward with existing tools, but as
 a worst-case/proof-of-concept approach we could just have a step in
 the wheel build that opens up the .whl and edits it to add the
 dependency. Ugly, but it'd work.


Good point, that should work. Not all that much uglier than some of the
other stuff we do in release scripts for Windows binaries.

Ralf
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] new mingw-w64 based numpy and scipy wheel (still experimental)

2015-01-27 Thread Matthew Brett
Hi,

On Tue, Jan 27, 2015 at 1:37 PM, Carl Kleffner cmkleff...@gmail.com wrote:


 2015-01-27 22:13 GMT+01:00 Nathaniel Smith n...@pobox.com:

 On Tue, Jan 27, 2015 at 8:53 PM, Ralf Gommers ralf.gomm...@gmail.com
 wrote:
 
  On Mon, Jan 26, 2015 at 4:30 PM, Carl Kleffner cmkleff...@gmail.com
  wrote:
 
  Thanks for all your ideas. The next version will contain an augmented
  libopenblas.dll in both numpy and scipy. In the long term I would
  prefer an
  external openblas wheel package, if there is an agreement about this
  among
  numpy-dev.
 
 
  Sounds fine in principle, but reliable dependency handling will be hard
  to
  support in setup.py. You'd want the dependency on Openblas when
  installing a
  complete set of wheels, but not make it impossible to use:
 
- building against ATLAS/MKL/... from source with pip or distutils
- allowing use of a local wheelhouse which uses ATLAS/MKL/... wheels
- pip install numpy --no-use-wheel
- etc.
 
  Static bundling is a lot easier to get right.

 In principle I think this should be easy: when installing a .whl, pip
 or whatever looks at the dependencies declared in the distribution
 metadata file inside the wheel. When installing via setup.py, pip or
 whatever uses the dependencies declared by setup.py. We just have to
 make sure that the wheels we distribute have the right metadata inside
 them and everything should work.

 Accomplishing this may be somewhat awkward with existing tools, but as
 a worst-case/proof-of-concept approach we could just have a step in
 the wheel build that opens up the .whl and edits it to add the
 dependency. Ugly, but it'd work.

My 'delocate' utility has a routine for patching wheels:

pip install delocate
delocate-patch --help

Cheers,

Matthew
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] new mingw-w64 based numpy and scipy wheel (still experimental)

2015-01-27 Thread Sturla Molden
On 27/01/15 11:32, Carl Kleffner wrote:

 OpenBLAS in the test wheels is built with DYNAMIC_ARCH, that is, all
 assembler-based kernels are included and the right one is chosen at runtime.

Ok, I wasn't aware of that option. Last time I built OpenBLAS I think I 
had to specify the target CPU.

  Non-optimized
 parts of LAPACK have been built with -march=sse2.

Since LAPACK delegates almost all of its heavy lifting to BLAS, there is 
probably not a lot to gain from SSE3, SSE4 or AVX here.


Sturla

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Float view of complex array

2015-01-27 Thread Jaime Fernández del Río
On Mon, Jan 26, 2015 at 10:28 PM, Jens Jørgen Mortensen je...@fysik.dtu.dk
wrote:

 On 01/26/2015 11:02 AM, Jaime Fernández del Río wrote:
  On Mon, Jan 26, 2015 at 1:41 AM, Sebastian Berg
   sebast...@sipsolutions.net wrote:
 
  On Mo, 2015-01-26 at 09:24 +0100, Jens Jørgen Mortensen wrote:
   Hi!
  
   I have a view of a 2-d complex array that I would like to view
  as a 2-d
   float array.  This works OK:
  
  >>> np.ones((2, 4), complex).view(float)
   array([[ 1.,  0.,  1.,  0.,  1.,  0.,  1.,  0.],
          [ 1.,  0.,  1.,  0.,  1.,  0.,  1.,  0.]])
  
    but this doesn't:
  
  >>> np.ones((2, 4), complex)[:, :2].view(float)
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
   ValueError: new type not compatible with array.
  >>> np.__version__
   '1.9.0'
  
   and I don't understand why.  When looking at the memory layout,
  I think
   it should be possible.
  
 
   Yes, it should be possible, but it is not :). You could hack it by using
   `np.ndarray` (or stride tricks). Or maybe you are interested in making the
   checks of whether it makes sense or not less strict.
 
 
  How would it be possible? He goes from an array with 16 byte strides
  along the last axis:
 
  r0i0, r1i1, r2i2, r3i3
 
  to one with 32 byte strides, which is OK
 
   r0i0, ____, r2i2, ____
 
  but everything breaks down when he wants to have alternating strides
  of 8 and 24 bytes:
 
   r0, i0, ____, r2, i2, ____

 No, that is not what I want.  I want this:

  r0, i0, r1, i1, ____, ____

 with stride 8 on the last axis - which should be fine.  My current
 workaround is to do a copy() before view() - thanks Maniteja.


My bad, you are absolutely right, Jens...

I have put together a quick PR (https://github.com/numpy/numpy/pull/5508)
that fixes your use case, by relaxing the requirements for views of
different dtypes. I'd appreciate it if you could take a look at the logic in
the code (it is profusely commented), and see if you can think of other
cases that can be viewed as another dtype that I may have overlooked.
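
For reference, until that is merged, here is a sketch of two workarounds for
your example that should give the same result (the view-then-slice variant
assumes the parent array is C-contiguous):

import numpy as np

a = np.ones((2, 4), complex)

# view the contiguous parent first, then slice: r0, i0, r1, i1 per row,
# with stride 8 on the last axis
f = a.view(float)[:, :4]

# or, as you said, copy the slice so it is contiguous, then view it
g = a[:, :2].copy().view(float)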

Thanks,

Jaime



Jens Jørgen

 
  which cannot be hacked in any sensible way.
 
  What I think could be made to work, but also fails, is this:
 
  np.ones((2, 4), complex).reshape(2, 4, 1)[:, :2, :].view(float)
 
  Here the original strides are (64, 16, xx) and the resulting view
  should have strides (64, 32, 8), not sure what trips this.
 
  Jaime
 
 
  - Sebastian
 
   Jens Jørgen
  
 
 
 
 
  --
  (\__/)
  ( O.o)
   (  ) This is Bunny. Copy Bunny into your signature and help him with his
   plans for world domination.
 
 




-- 
(\__/)
( O.o)
(  ) This is Bunny. Copy Bunny into your signature and help him with his plans
for world domination.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] new mingw-w64 based numpy and scipy wheel (still experimental)

2015-01-27 Thread Carl Kleffner
2015-01-27 0:16 GMT+01:00 Sturla Molden sturla.mol...@gmail.com:

 On 26/01/15 16:30, Carl Kleffner wrote:

  Thanks for all your ideas. The next version will contain an augmented
  libopenblas.dll in both numpy and scipy. In the long term I would
  prefer an external openblas wheel package, if there is an agreement
  about this among numpy-dev.


 Thanks for all your great work on this.

 An OpenBLAS wheel might be a good idea. Probably we should have some
 sort of instructions on the website for how to install the binary wheel. And
 then we could include the OpenBLAS wheel in the instruction. Or we could
 have the OpenBLAS wheel as a part of the scipy stack.

 But make the bloated SciPy and NumPy wheels work first, then we can
 worry about a dedicated OpenBLAS wheel later :-)


  Another idea for the future is to conditionally load a debug version of
  libopenblas instead. Together with backtrace.dll (part of mingwstatic,
  but undocumented right now), a meaningful stack trace will be given in
  case of segfaults inside code compiled with mingwstatic.

 An OpenBLAS wheel could also include multiple architectures. We can
 compile OpenBLAS for any kind of CPU and install the one that fits
 the computer best.


OpenBLAS in the test wheels is built with DYNAMIC_ARCH, that is, all
assembler-based kernels are included and the right one is chosen at runtime.
Non-optimized parts of LAPACK have been built with -march=sse2.
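
If someone wants to check which kernel such a DYNAMIC_ARCH build has picked
at runtime, something along these lines should work (just a sketch: the DLL
name and path, and whether the openblas_get_corename / openblas_get_config
symbols are exported, depend on the particular build):

import ctypes

lib = ctypes.CDLL('libopenblas.dll')   # adjust to the bundled DLL's location
for name in ('openblas_get_config', 'openblas_get_corename'):
    func = getattr(lib, name, None)    # None if the symbol is not exported
    if func is not None:
        func.restype = ctypes.c_char_p
        print('%s: %s' % (name, func()))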


 Also note that an OpenBLAS wheel could be useful on Linux. It is clearly
 superior to the ATLAS libraries that most distros ship. If we make a
 binary wheel that works for Windows, we are almost there for Linux too :-)


Bear in mind that binary wheels are not supported for Linux. Maybe this
could be done as a conda package for Anaconda/Miniconda, as an OSS alternative
to MKL.


 For Apple we don't need OpenBLAS anymore. On OSX 10.9 and 10.10
 Accelerate Framework is actually faster than MKL under many
 circumstances. DGEMM is about the same, but e.g. DAXPY and DDOT are
 faster in Accelerate.


 Sturla

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion