> On 14.02.2016 at 23:19, Robert McLeod <robbmcl...@gmail.com> wrote:
> 
> Hello everyone,
> 
> I've done some work on making a new version of Numexpr that would fix some of 
> the limitations of the original virtual machine with regard to data types and 
> the operation/function count. Basically, I rewrote the Python and C sides to 
> use 4-byte words, instead of null-terminated strings, for operations and for 
> passing types. This means the number of operations and types is no longer 
> significantly limited.
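[For readers who have not looked at the branch, here is a minimal sketch of the idea as I understand it. The constants and the exact byte layout are made up for illustration; they are not taken from the numexpr-3.0 code.]

    import struct

    # Hypothetical layout, for illustration only: one byte for the operation
    # id and one byte each for the return and argument type codes, packed
    # into a fixed 4-byte word instead of a null-terminated string.
    OP_ADD = 0x01        # made-up operation id
    T_FLOAT64 = 0x0C     # made-up type code

    def pack_word(op, ret_type, arg1_type, arg2_type):
        """Pack an operation and its type signature into one 4-byte word."""
        return struct.pack('BBBB', op, ret_type, arg1_type, arg2_type)

    word = pack_word(OP_ADD, T_FLOAT64, T_FLOAT64, T_FLOAT64)
    assert len(word) == 4   # fixed width, so the opcode space is no longer tiny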
> 
> Francesc Alted suggested I should come here and get some advice from the 
> community. I wrote a short proposal on the Wiki here:
> 
> https://github.com/pydata/numexpr/wiki/Numexpr-3.0-Branch-Overview
> 
> One can see my branch here:
> 
> https://github.com/robbmcleod/numexpr/tree/numexpr-3.0
> 
> If anyone has any comments they'd be welcome. Questions from my side for the 
> group:
> 
> 1.) Numpy casting: I downloaded the Numpy source, and after browsing it the 
> best approach seems to be to just use 
> numpy.core.numerictypes.find_common_type? (A short usage sketch follows 
> after question 3.)
> 
> 2.) Can anyone foresee any issues with casting built-in Python types (i.e. 
> float and int) to their OS-dependent numpy equivalents? Numpy already seems 
> to do this. (See the second sketch after question 3.)
> 
> 3.) Is anyone enabling the Intel VML library? There are a number of comments 
> in the code that suggest it is not accelerating anything. It also seems to 
> cause problems with bundling numexpr with cx_freeze. (A rough timing check 
> is sketched below.)
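[On question 1, a quick illustration of find_common_type, which is also exposed as numpy.find_common_type. It distinguishes between array and scalar operands, which is probably what an expression evaluator wants:]

    import numpy as np

    # Array dtypes follow the usual promotion rules...
    print(np.find_common_type([np.int64, np.float32], []))            # float64
    # ...while scalar dtypes only force an upcast when their *kind* differs,
    # so float64 and int64 scalars do not widen a float32 array:
    print(np.find_common_type([np.float32], [np.int64, np.float64]))  # float32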
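[On question 2, the platform dependence mostly concerns integers: numpy maps a Python int to np.int_ (the platform's C long), so the width differs between 64-bit Linux/macOS and 64-bit Windows, while a Python float always becomes float64. A quick check:]

    import numpy as np

    print(np.asarray(1).dtype)      # int64 on 64-bit Linux/macOS, int32 on Windows
    print(np.asarray(1.0).dtype)    # float64 on every platform
    print(np.asarray(1 + 2j).dtype) # complex128 on every platform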
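[On question 3, released numexpr already reports whether it was linked against VML, so comparing the two code paths mostly needs a transcendental-heavy expression (VML does little for plain add/mul/div). A rough timing sketch:]

    import timeit
    import numpy as np
    import numexpr as ne

    print(ne.use_vml)                  # True only if built against MKL/VML
    if ne.use_vml:
        print(ne.get_vml_version())

    a = np.random.rand(10**7)
    t = timeit.timeit(lambda: ne.evaluate('exp(a)', local_dict={'a': a}),
                      number=10)
    print('exp(a) x10:', t, 's')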
> 
Dear Robert,

thanks for your effort on improving numexpr. Indeed, vectorized math libraries 
(VML) can give a large performance boost (~5x), except for a couple of basic 
operations (add, mul, div), which current compilers are able to vectorize 
automatically. With recent gcc even more functions are vectorized, see 
https://sourceware.org/glibc/wiki/libmvec. But you need special compiler flags 
depending on the platform (SSE, AVX present?); runtime detection of processor 
capabilities would be nice for distributing binaries. Some time ago, after I 
lost access to Intel's MKL, I patched numexpr to use Accelerate/vecLib on OS X, 
which is preinstalled on every Mac; see the veclib_support branch at 
https://github.com/geggo/numexpr.git.
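[For the runtime-detection part, a very rough sketch of what such a check could look like from Python. This is Linux-only and reads /proc/cpuinfo; a distributed binary would rather use cpuid directly or a helper such as the py-cpuinfo package.]

    import os

    def cpu_flags():
        """Return the CPU feature flags reported by the kernel (Linux only)."""
        if not os.path.exists('/proc/cpuinfo'):
            return set()
        with open('/proc/cpuinfo') as f:
            for line in f:
                if line.startswith('flags'):
                    return set(line.split(':', 1)[1].split())
        return set()

    flags = cpu_flags()
    print('sse2:', 'sse2' in flags, 'avx:', 'avx' in flags, 'avx2:', 'avx2' in flags)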

Since you increased the opcode size, I could imagine providing a bit to switch 
(at runtime) between the internal functions and the vectorized ones; that would 
be handy for tests and benchmarks.
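[A sketch of how such a switch could look with 4-byte opcode words. The layout and constants are purely illustrative, not Robert's actual encoding: reserve one bit of the word for "use the vectorized kernel" and let the dispatcher honour or ignore it at run time.]

    # Illustrative layout: low 24 bits = operation id, bit 31 = "prefer the
    # vectorized (VML/libmvec/vecLib) kernel if one is available".
    VECTORIZED_BIT = 1 << 31

    def dispatch(word, allow_vectorized=True):
        op = word & 0x00FFFFFF
        use_vectorized = bool(word & VECTORIZED_BIT) and allow_vectorized
        # a real virtual machine would jump to the chosen kernel here
        return op, use_vectorized

    OP_EXP = 0x000012                              # made-up opcode id
    word = OP_EXP | VECTORIZED_BIT                 # request the vectorized path
    print(dispatch(word))                          # (18, True)
    print(dispatch(word, allow_vectorized=False))  # (18, False): benchmark path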

Gregor

> 4.) I took a stab at converting from distutils to setuptools, but this seems 
> challenging with numpy as a dependency. I wonder if anyone has tried 
> monkey-patching so that setup.py build_ext uses distutils and then passing the 
> interpreter.pyd/so as a data file, or some other such chicanery? (A sketch of 
> the usual setuptools workaround follows below.)
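[On question 4, rather than monkey-patching, the usual workaround is to stay with setuptools and defer the numpy import until build_ext actually runs, so the numpy build dependency can be resolved first. A minimal sketch; the file and package names are illustrative, not the actual numexpr setup.py.]

    from setuptools import setup, Extension
    from setuptools.command.build_ext import build_ext

    class NumpyBuildExt(build_ext):
        """build_ext that adds numpy's headers only at build time."""
        def finalize_options(self):
            build_ext.finalize_options(self)
            import numpy                 # safe here: numpy is installed by now
            self.include_dirs.append(numpy.get_include())

    setup(
        name='numexpr',
        ext_modules=[Extension('numexpr.interpreter',
                               sources=['numexpr/interpreter.cpp'])],
        cmdclass={'build_ext': NumpyBuildExt},
        setup_requires=['numpy'],
        install_requires=['numpy'],
    )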
> 
> (I was going to ask about attaching a debugger, but I just noticed: 
> https://wiki.python.org/moin/DebuggingWithGdb )
> 
> Ciao,
> 
> Robert
> 
> -- 
> Robert McLeod, Ph.D.
> Center for Cellular Imaging and Nano Analytics (C-CINA)
> Biozentrum der Universität Basel
> Mattenstrasse 26, 4058 Basel
> Work: +41.061.387.3225
> robert.mcl...@unibas.ch
> robert.mcl...@bsse.ethz.ch
> robbmcl...@gmail.com

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion
