Yes, it is true that the execution time is much faster with the numpy version.

The code for the numpy version:

import numpy as np

def createMatrix(n):
    # Fill an n x n float64 array with values in [0.1, 1.099]
    Matrix = np.empty(shape=(n, n), dtype='float64')
    for x in range(n):
        for y in range(n):
            Matrix[x, y] = 0.1 + ((x*y) % 1000)/1000.0
    return Matrix



if __name__ == '__main__':
    n = getDimension()
    if n > 0:
        A = createMatrix(n)
        B = createMatrix(n)
        C = np.empty(shape=(n, n), dtype='float64')
        # np.dot allocates and returns a new array, so C is rebound here
        C = np.dot(A, B)

        #print(C)

In the pure Python version I just implement the multiplication with three nested for-loops, as sketched below.
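
For reference, a minimal sketch of what that three-loop version could look like (the name pure_python_matmul and the list-of-lists representation are illustrative, not the exact code):

def pure_python_matmul(A, B):
    # Naive O(n^3) multiplication of two n x n matrices
    # given as plain Python lists of lists.
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C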

Measured data with libmemusage:
dimension of matrix: 100x100
heap peak, pure Python 3: 1060565
heap peak, numpy version: 4917180
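
A sketch of how the timing comparison could look (not the exact measurement code; it assumes createMatrix from above and the pure_python_matmul sketch, and uses tolist() so the pure Python loops only touch built-in types):

import time

n = 100
A = createMatrix(n)
B = createMatrix(n)

start = time.perf_counter()
C_np = np.dot(A, B)
print("numpy dot:   %.4f s" % (time.perf_counter() - start))

# Convert to plain lists of lists so the pure Python version
# operates on built-in types only.
A_list = A.tolist()
B_list = B.tolist()
start = time.perf_counter()
C_py = pure_python_matmul(A_list, B_list)
print("pure Python: %.4f s" % (time.perf_counter() - start))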


2017-02-28 23:17 GMT+01:00 Matthew Brett <matthew.br...@gmail.com>:

> Hi,
>
> On Tue, Feb 28, 2017 at 2:12 PM, Sebastian K
> <sebastiankas...@googlemail.com> wrote:
> > Thank you for your answer.
> > For example a very simple algorithm is a matrix multiplication. I can see
> > that the heap peak is much higher for the numpy version in comparison to a
> > pure python 3 implementation.
> > The heap is measured with the libmemusage from libc:
> >
> >           heap peak
> >                   Maximum of all size arguments of malloc(3), all products
> >                   of nmemb*size of calloc(3), all size arguments of
> >                   realloc(3), length arguments of mmap(2), and new_size
> >                   arguments of mremap(2).
>
> Could you post the exact code you're comparing?
>
> I think you'll find that a naive Python 3 matrix multiplication method
> is much, much slower than the same thing with Numpy, with arrays of
> any reasonable size.
>
> Cheers,
>
> Matthew
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion
