On Thu, Mar 8, 2018 at 9:20 AM, Charles R Harris <charlesr.har...@gmail.com> wrote:
> On Thu, Mar 8, 2018 at 2:52 AM, Gregor Thalhammer <gregor.thalham...@gmail.com> wrote:
>>
>> Hi,
>>
>> A long time ago I wrote a wrapper to use optimised and parallelized math
>> functions from Intel's vector math library:
>> geggo/uvml: Provide vectorized math functions (MKL) for numpy
>> <https://github.com/geggo/uvml>
>>
>> I found it useful to inject (some of) the fast functions into numpy via
>> np.set_numeric_ops(), to gain more performance without changing my programs.
>
> I think that was much of the original motivation for `set_numeric_ops` back in
> the Numeric days, when there was little commonality among platforms and
> getting hold of optimized libraries was very much an individual thing. The
> former cblas module, now merged with multiarray, was present for the same
> reasons.
>
>> While this original project is outdated, I can imagine that a centralised
>> way to swap the implementation of math functions is useful. Therefore I
>> suggest keeping np.set_numeric_ops(), though admittedly I do not understand
>> all the technical implications of the proposed change.
>
> I suppose we could set it up to detect and use an external library during
> compilation. The CBLAS implementations currently do that and should pick up
> the MKL version when available. Where are the MKL functions you used
> exposed? That is an admittedly lower-level interface, however.
>
> Note that Intel is also working to support NumPy and intends to contribute
> the Intel optimizations as part of that.
>
> Chuck
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@python.org
https://mail.python.org/mailman/listinfo/numpy-discussion