On Mon, Oct 25, 2010 at 7:48 AM, Citi, Luca <lc...@essex.ac.uk> wrote:
> Hello,
> I have noticed a significant speed difference between the array and the
> matrix implementation of the dot product, especially for not-so-big
> matrices.
> For example:
>
> In [1]: import numpy as np
> In [2]: b = np.random.rand(104,1)
> In [3]: bm = np.mat(b)
> In [4]: a = np.random.rand(8, 104)
> In [5]: am = np.mat(a)
> In [6]: %timeit np.dot(a, b)
> 1000000 loops, best of 3: 1.74 us per loop
> In [7]: %timeit am * bm
> 100000 loops, best of 3: 6.38 us per loop
>
> The results for two different PCs (PC1 with windows/EPD6.2-2 and PC2 with
> ubuntu/numpy-1.3.0) and two different sizes are below:
>
>                  array     matrix
>
> 8x104 * 104x1
> PC1              1.74us    6.38us
> PC2              1.23us    5.85us
>
> 8x10 * 10x5
> PC1              2.38us    7.55us
> PC2              1.56us    6.01us
>
> For bigger matrices the timings seem to asymptotically approach.
>
> Is this something worth trying to fix, or should I just accept it as a fact
> and, when working with small matrices, stick to arrays?

Looks like call overhead; it becomes significant when the dot product itself
doesn't take much time. If you want to spend time profiling, it might be
possible to find some spots in the matrix class that can be sped up. My guess
is that it is the time taken to create the matrices. But if you don't want to
get sidetracked, sticking to arrays is the easy way to go.

Chuck
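A minimal sketch of how one might check where the extra time goes, assuming
(as guessed above) that most of the gap is the cost of wrapping the result
back into a matrix object; np.asmatrix is used here only as a stand-in for
the wrapping that matrix.__mul__ performs, and the exact numbers will depend
on the NumPy build:

    import numpy as np
    import timeit

    a = np.random.rand(8, 104)
    b = np.random.rand(104, 1)
    am = np.mat(a)
    bm = np.mat(b)

    n = 100000
    # Raw BLAS call on plain arrays.
    print("np.dot(a, b):  ", timeit.timeit(lambda: np.dot(a, b), number=n) / n)
    # Same product through matrix.__mul__.
    print("am * bm:       ", timeit.timeit(lambda: am * bm, number=n) / n)
    # Approximate cost of just re-wrapping an existing result as a matrix.
    c = np.dot(a, b)
    print("np.asmatrix(c):", timeit.timeit(lambda: np.asmatrix(c), number=n) / n)

If the last timing accounts for most of the difference between the first two,
the overhead really is object creation rather than the multiply itself.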