Re: [Numpy-discussion] ANN: NumExpr3 Alpha

2017-02-17 Thread Robert McLeod
Hi David,

Thanks for your comments, reply below the fold.

On Fri, Feb 17, 2017 at 4:34 PM, Daπid  wrote:

> This is very nice indeed!
>
> On 17 February 2017 at 12:15, Robert McLeod  wrote:
> > * bytes and unicode support
> > * reductions (mean, sum, prod, std)
>
> I use both a lot, maybe I can help you get them working.
>
> Also, regarding "Vectorization hasn't been done yet with cmath
> functions for real numbers (such as sqrt(), exp(), etc.), only for
> complex functions". What is the bottleneck? Is it in GCC or just
> someone has to sit down and adapt it?


I just haven't done it yet.  Basically I'm moving from Switzerland to
Canada in a week, so this was the window to push out something usable, if
not perfect. At present I just call the cmath functions, which are inlined,
but I suspect what's needed is to break them down into their component
operations. For example, the complex arccos function looks like this:

static void
nc_acos( npy_intp n, npy_complex64 *x, npy_complex64 *r )
{
    /* acos(z) = -i * log( z + i*sqrt(1 - z*z) ), built from inline primitives */
    npy_complex64 a;
    for( npy_intp I = 0; I < n; I++ ) {
        a = x[I];
        _inline_mul( x[I], x[I], r[I] );   /* r = z*z          */
        _inline_sub( Z_1, r[I], r[I] );    /* r = 1 - r        */
        _inline_sqrt( r[I], r[I] );        /* r = sqrt(r)      */
        _inline_muli( r[I], r[I] );        /* r = i*r          */
        _inline_add( a, r[I], r[I] );      /* r = z + r        */
        _inline_log( r[I], r[I] );         /* r = log(r)       */
        _inline_muli( r[I], r[I] );        /* r = i*r          */
        _inline_neg( r[I], r[I] );         /* r = -r           */
    }
}
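
The step sequence above follows the standard identity acos(z) = -i *
log(z + i*sqrt(1 - z*z)), which can be checked against Python's stdlib
cmath (an illustrative check only, unrelated to the NumExpr C code):

```python
import cmath

def acos_decomposed(z):
    # Mirror the inline steps of nc_acos above, one primitive at a time.
    r = z * z            # r = z*z
    r = 1 - r            # r = 1 - r
    r = cmath.sqrt(r)    # r = sqrt(r)
    r = 1j * r           # r = i*r
    r = z + r            # r = z + r
    r = cmath.log(r)     # r = log(r)
    r = 1j * r           # r = i*r
    return -r            # r = -r

z = 0.3 + 0.4j
assert abs(acos_decomposed(z) - cmath.acos(z)) < 1e-12
```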

I haven't sat down and inspected whether the cmath versions get vectorized,
but there's not a huge speed difference between NE2 and NE3 for such a
function on float (though there is for complex), so my suspicion is they
aren't.  Another option would be to add a library such as Yeppp! as
LIB_YEPPP, or some other library that's faster than glibc.  For example,
the glibc function "fma(a,b,c)" is slower than doing "a*b+c" in NE3, and
that's not how it should be.  Yeppp! is also built with Python generating C
code, so integrating it could either be very easy or very hard.

On bytes and unicode, I haven't seen examples of how people use them, so
I'm not sure where to start. Since there's practically no limit on the
number of operations now (the library is 1.3 MB, compared to 1.2 MB for
NE2 with gcc 5.4), the string functions could grow significantly beyond
what we have in NE2.

With regard to reductions, NumExpr never multi-threaded them, and could
only do outer reductions, so in the end there was no speed advantage
compared to having NumPy perform them on the result.  I suspect their
primary value was in PyTables and Pandas, where the expression had to do
everything.  One of the things I've moved away from in NE3 is output
buffering (instead the output array is pre-allocated), so for reductions
NumExpr's understanding of broadcasting would have to be deeper.
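
To illustrate why reductions complicate pre-allocation: for an elementwise
expression the output shape follows directly from broadcasting the inputs,
but a reduction then removes an axis, so the shape inference has to know
which operation is being performed. A minimal sketch of the broadcasting
half (plain Python for illustration, not NumExpr's actual code):

```python
def broadcast_shape(*shapes):
    """NumPy-style broadcast of input shapes; raises on a mismatch."""
    ndim = max(len(s) for s in shapes)
    # Left-pad every shape with 1s, then combine axis by axis.
    padded = [(1,) * (ndim - len(s)) + tuple(s) for s in shapes]
    out = []
    for dims in zip(*padded):
        sizes = {d for d in dims if d != 1}
        if len(sizes) > 1:
            raise ValueError("shapes %r are not broadcastable" % (shapes,))
        out.append(sizes.pop() if sizes else 1)
    return tuple(out)

# Elementwise: the output shape is known before evaluation starts...
assert broadcast_shape((5, 1, 3), (4, 3)) == (5, 4, 3)
# ...but a reduction such as sum(a*b, axis=0) would then drop an axis,
# which is the extra knowledge a pre-allocation scheme needs.
```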

In any event contributions would certainly be welcome.

Robert

-- 
Robert McLeod, Ph.D.
Center for Cellular Imaging and Nano Analytics (C-CINA)
Biozentrum der Universität Basel
Mattenstrasse 26, 4058 Basel
Work: +41.061.387.3225
robert.mcl...@unibas.ch
robert.mcl...@bsse.ethz.ch 
robbmcl...@gmail.com
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Building external c modules with mingw64 / numpy

2017-02-17 Thread Schnizer, Pierre
Dear Ralf,

I made some further improvements, as one problem was related to my setup
file. I will use the numpy git repository to cross-check it and then report
again.

Sincerely yours
Pierre


From: NumPy-Discussion [mailto:numpy-discussion-boun...@scipy.org] On Behalf
Of Ralf Gommers
Sent: Tuesday, 14 February 2017 11:00
To: Discussion of Numerical Python 
Subject: Re: [Numpy-discussion] Building external c modules with mingw64 / numpy



On Sat, Jan 21, 2017 at 9:23 PM, Schnizer, Pierre 
mailto:pierre.schni...@helmholtz-berlin.de>>
 wrote:
Dear all,

I built an external C module (pygsl) using the mingw64 gcc compiler from
msys2.

This build required some changes to numpy.distutils to get

"python setup.py config"
and
"python setup.py build"

working. In this process I replaced 2 files in numpy.distutils from the
numpy git repository:

-  numpy/distutils/misc_util.py, version ec0e046 on 14 Dec 2016

-  numpy/distutils/mingw32ccompiler.py, version ec0e046 on 14 Dec 2016

mingw32ccompiler.py had to be modified to get it working:

•  the preprocessor had to be defined, as I am using setup.py config

•  the runtime library search path had to be specified to the linker

•  the include path of the vcruntime had to be added

I attached a patch reflecting the changes I had to make to
mingw32ccompiler.py. If this information is useful, I am happy to answer
questions.

Thanks for the patch Pierre. For future reference: a pull request on GitHub
or a link to a Gist is preferred for us, and usually gets you a quicker
response.
Regarding your question in the patch about including Python's install
directory: that shouldn't be necessary, and I'd be wary of applying your
patch without understanding why the current numpy.distutils code doesn't
work for you. But if your patch works for you, then I think it can't hurt.

Cheers,
Ralf


Sincerely yours
   Pierre

PS  Version infos:
Python:
Python 3.6.0 (v3.6.0:41df79263a11, Dec 23 2016, 08:06:12) [MSC v.1900 64 bit 
(AMD64)] on win32

Numpy:
>>> help(numpy.version)
Help on module numpy.version in numpy:
DATA
full_version = '1.12.0'
git_revision = '561f1accf861ad8606ea2dd723d2be2b09a2dffa'
release = True
short_version = '1.12.0'
version = '1.12.0'

gcc.exe (Rev2, Built by MSYS2 project) 6.2.0





Helmholtz-Zentrum Berlin für Materialien und Energie GmbH

Mitglied der Hermann von Helmholtz-Gemeinschaft Deutscher Forschungszentren e.V.

Aufsichtsrat: Vorsitzender Dr. Karl Eugen Huthmacher, stv. Vorsitzende Dr. 
Jutta Koch-Unterseher
Geschäftsführung: Prof. Dr. Anke Rita Kaysser-Pyzalla, Thomas Frederking

Sitz Berlin, AG Charlottenburg, 89 HRB 5583

Postadresse:
Hahn-Meitner-Platz 1
D-14109 Berlin

http://www.helmholtz-berlin.de







Re: [Numpy-discussion] ANN: NumExpr3 Alpha

2017-02-17 Thread Daπid
This is very nice indeed!

On 17 February 2017 at 12:15, Robert McLeod  wrote:
> * bytes and unicode support
> * reductions (mean, sum, prod, std)

I use both a lot, maybe I can help you get them working.

Also, regarding "Vectorization hasn't been done yet with cmath
functions for real numbers (such as sqrt(), exp(), etc.), only for
complex functions". What is the bottleneck? Is it in GCC or just
someone has to sit down and adapt it?


Re: [Numpy-discussion] ANN: NumExpr3 Alpha

2017-02-17 Thread Francesc Alted
Yay!  This looks really exciting.  Thanks for all the hard work!

Francesc

2017-02-17 12:15 GMT+01:00 Robert McLeod :

> Hi everyone,
>
> I'm pleased to announce that a new branch of NumExpr has been developed
> that will hopefully lead to a new major version release in the future.
>
> [announcement quoted in full; see the original message below]

[Numpy-discussion] ANN: NumExpr3 Alpha

2017-02-17 Thread Robert McLeod
Hi everyone,

I'm pleased to announce that a new branch of NumExpr has been developed
that will hopefully lead to a new major version release in the future.  You
can find the branch on the PyData github repository, and installation is as
follows:

git clone https://github.com/pydata/numexpr.git
cd numexpr
git checkout numexpr-3.0
python setup.py install

What's new?
==

Faster
-

The operations were re-written in such a way that gcc can auto-vectorize
the loops to use SIMD instructions. Each operation now has a strided and
aligned branch, which improves performance on aligned arrays by ~ 40 %. The
setup time for threads has been reduced, by removing an unnecessary
abstraction layer, and various other minor re-factorizations, resulting in
improved thread scaling.

The combination of speed-ups means that NumExpr3 often runs 200-500 %
faster than NumExpr2.6 on a machine with AVX2 support. The break-even point
with NumPy is now roughly arrays with 64k-elements, compared to
256-512k-elements for NE2.

A plot of comparative performance for NumPy versus NE2 versus NE3 over a
range of array sizes is available at:

http://entropyproduction.blogspot.ch/2017/02/introduction-to-numexpr-3-alpha.html

More NumPy Datatypes


The program was refactored from an ASCII-encoded byte code to a struct
array, so that the operation space is now 65535 instead of 128.  As such,
support for uint8, int8, uint16, int16, uint32, uint64, and complex64 data
types was added.

NumExpr3 now uses NumPy 'safe' casting rules. If an operation doesn't
return the same result as NumPy, it's a bug.  In the future other casting
styles will be added if there is a demand for them.
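
As a rough illustration of what 'safe' promotion means, here is a
deliberately simplified toy chain, not NumExpr3's actual tables (real NumPy
rules are richer; e.g. int32 combined with float32 safely promotes to
float64, which a linear chain cannot express):

```python
# Toy linear promotion chain, for illustration only: each type casts
# safely to everything to its right.
CHAIN = ['bool', 'uint8', 'int16', 'int32', 'int64', 'float64', 'complex128']

def promote(a, b):
    """Return the leftmost type in CHAIN that both inputs cast to safely."""
    return CHAIN[max(CHAIN.index(a), CHAIN.index(b))]

assert promote('uint8', 'int32') == 'int32'
assert promote('int64', 'float64') == 'float64'
```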


More complete function set


With the enhanced operation space, almost the entire C++11 cmath function
set is supported (if the compiler library has them; only C99 is expected).
Also bitwise operations were added for all integer datatypes. There are now
436 operations/functions in NE3, with more to come, compared to 190 in NE2.

Also a library-enum has been added to the op keys which allows multiple
backend libraries to be linked to the interpreter, and changed on a
per-expression basis, rather than picking between GNU std and Intel VML at
compile time, for example.


More complete Python language support
--

The Python compiler was re-written from scratch to use the CPython `ast`
module and a functional programming approach. As such, NE3 now compiles a
wider subset of the Python language. It supports multi-line evaluation, and
assignment with named temporaries.  The new compiler spends considerably
less time in Python to compile expressions, about 200 us for 'a*b' compared
to 550 us for NE2.

Compare for example:

out_ne2 = ne2.evaluate( 'exp( -sin(2*a**2) - cos(2*b**2) - 2*a**2*b**2 )' )

to:

neObj = NumExpr( '''a2 = a*a; b2 = b*b
out_magic = exp( -sin(2*a2) - cos(2*b2) - 2*a2*b2 )''' )

This is a contrived example but the multi-line approach will allow for
cleaner code and more sophisticated algorithms to be encapsulated in a
single NumExpr call. The convention is that intermediate assignment targets
are named temporaries if they do not exist in the calling frame, and full
assignment targets if they do, which provides a method for multiple
returns. Single-level de-referencing (e.g. `self.data`) is also supported
for increased convenience and cleaner code. Slicing still needs to be
performed above the ne3.evaluate() or ne3.NumExpr() call.
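
The temporary-versus-output convention can be sketched with the stdlib
`ast` module: walk the assignment targets and compare them against the
names visible in the calling frame. This is a hypothetical helper for
illustration, not NE3's actual compiler:

```python
import ast

def classify_targets(expr, calling_frame_names):
    """Split assignment targets into named temporaries and output arrays."""
    temporaries, outputs = [], []
    for node in ast.walk(ast.parse(expr)):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    # Targets that already exist in the caller's frame are
                    # treated as full outputs; new names are temporaries.
                    bucket = outputs if target.id in calling_frame_names else temporaries
                    bucket.append(target.id)
    return temporaries, outputs

src = "a2 = a*a; b2 = b*b\nout_magic = a2 + b2"
temps, outs = classify_targets(src, {"a", "b", "out_magic"})
assert temps == ["a2", "b2"] and outs == ["out_magic"]
```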


More maintainable
-

The code base was generally refactored to increase the prevalence of
single-point declarations, such that modifications don't require extensive
knowledge of the code. In NE2 a lot of code was generated by the
pre-processor using nested #defines.  That has been replaced by an
object-oriented Python code generator called by setup.py, which generates
about 15k lines of C code with 1k lines of Python. The use of generated
code with defined line numbers makes debugging threaded code simpler.
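
The code-generation approach can be illustrated in a few lines of Python
emitting one C function per dtype from a single template. This is a toy
sketch of the idea, not the actual NE3 generator:

```python
# Single-point declaration: add a dtype here and every generated function
# picks it up, without touching the template.
DTYPES = ['npy_float32', 'npy_float64', 'npy_int32']

TEMPLATE = """static void
add_{suffix}(npy_intp n, {ctype} *x, {ctype} *y, {ctype} *r)
{{
    for (npy_intp i = 0; i < n; i++)
        r[i] = x[i] + y[i];
}}
"""

def generate():
    # Emit one strided add-loop per dtype in the table.
    return "\n".join(
        TEMPLATE.format(suffix=ctype.replace('npy_', ''), ctype=ctype)
        for ctype in DTYPES
    )

code = generate()
assert "add_float32" in code and code.count("static void") == len(DTYPES)
```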

The generator also builds the autotest portion of the test submodule, for
checking equivalence between NumPy and NumExpr3 operations and functions.


What's TODO compared to NE2?
--

* strided complex functions
* Intel VML support (less necessary now with gcc auto-vectorization)
* bytes and unicode support
* reductions (mean, sum, prod, std)


What I'm looking for feedback on


* String arrays: How do you use them?  How would unicode differ from bytes
  strings?
* Interface: We now have a more object-oriented interface underneath the
  familiar evaluate() interface. How would you like to use this interface?
  Francesc suggested generator support, as currently it's more difficult to
  use NumExpr within a loop than it should be.


Ideas for the future
-

* vectoriz