> This is a known problem with np.linalg.norm, and so is the memory
> consumption. You should use sklearn.utils.extmath.norm for the
> Frobenius norm.
Hmm. Indeed I missed that, but it is still a bit odd:
sklearn.utils.extmath.norm is slower than the ravel-and-dot version on my
Anaconda setup with MKL acceleration:
In [2]: from sklearn.utils.extmath import norm
In [3]: %timeit norm(X)
10 loops, best of 3: 21.4 ms per loop
In [4]: %timeit np.sqrt(np.dot(X.ravel(), X.ravel()))
1000 loops, best of 3: 548 µs per loop
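For what it's worth, the reason the raveled version stays cheap is that
X.ravel() returns a view (not a copy) for a C-contiguous array, so the whole
computation reduces to a single BLAS dot call. A quick check along those
lines (the array here is just an example, not the one I timed):

import numpy as np

X = np.random.rand(1000, 1000)       # example array, not the one timed above
flat = X.ravel()                     # a view for C-contiguous input, no copy
print(flat.base is X)                # -> True: same underlying memory
print(np.sqrt(np.dot(flat, flat)))   # Frobenius norm via one BLAS dot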
And it looks like the sklearn BLAS call makes a memory copy too:
Line #    Mem usage    Increment   Line Contents
================================================
     7     47.0 MiB      0.0 MiB   def sumsq(X):
     8     47.0 MiB      0.0 MiB       return np.sqrt(np.sum(X ** 2))

Filename: fro.py

Line #    Mem usage    Increment   Line Contents
================================================
    10     47.0 MiB      0.0 MiB   def raveled(X):
    11     47.6 MiB      0.6 MiB       return np.sqrt(np.dot(X.ravel(), X.ravel()))

Filename: fro.py

Line #    Mem usage    Increment   Line Contents
================================================
     4     39.0 MiB      0.0 MiB   def blas(X):
     5     47.0 MiB      8.0 MiB       return norm(X)
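For reference, output like the above comes from a small fro.py profiled with
memory_profiler's @profile decorator. A sketch of such a script is below; the
array size, file layout and call order are illustrative, not the exact ones
from my run (a 1000x1000 float64 array, ~8 MB, would be consistent with the
8 MiB increment seen in blas):

# fro.py -- sketch of a script producing line-by-line memory tables as above.
# Run with `python fro.py`.
import numpy as np
from sklearn.utils.extmath import norm
from memory_profiler import profile

@profile
def blas(X):
    return norm(X)

@profile
def sumsq(X):
    return np.sqrt(np.sum(X ** 2))

@profile
def raveled(X):
    return np.sqrt(np.dot(X.ravel(), X.ravel()))

if __name__ == "__main__":
    X = np.random.rand(1000, 1000)   # illustrative size, ~8 MB of float64
    sumsq(X)
    raveled(X)
    blas(X)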