of these 100 times, I've found the Python version to run
between 10 and 20 times slower. My question is whether there is a faster way to
do this. Perhaps I'm not using the correct functions/structures? Or is
this as good as it gets?
Thanks in advance,
Sebastian Beca
Department of Computer Science
Please ignore if you receive this.
Thanks! Avoiding the inner loop is MUCH faster (roughly 20-300 times
faster than the original). Nevertheless, I don't think I can use hypot, as it only
works for two dimensions. The general problem I have is:
A = random( [C, K] )
B = random( [N, K] )
C ~ 1-10
N ~ large (thousands to millions, i.e. my dataset)
K
Please replace:
C = 4
N = 1000
d = zeros([C, N], dtype=float)
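For concreteness, here is a minimal sketch of the approach under discussion: loop only over the small dimension C and let numpy handle all N rows of B at once (this is a sketch assuming Euclidean distance; the names follow the setup above, and dist_rowwise is just an illustrative name):

import numpy as np

def dist_rowwise(A, B):
    # d[i, j] = Euclidean distance between row i of A (C x K) and row j of B (N x K).
    # Only the small dimension C is looped over; the inner loop over N is
    # replaced by whole-array operations.
    C, N = A.shape[0], B.shape[0]
    d = np.zeros((C, N), dtype=float)
    for i in range(C):
        diff = A[i] - B                        # A[i] broadcasts over the N rows of B
        d[i] = np.sqrt((diff ** 2).sum(axis=1))
    return d

# Example with the sizes mentioned above:
# A = np.random.random((4, 10)); B = np.random.random((1000, 10))
# d = dist_rowwise(A, B)   # shape (4, 1000)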
BK.
Nevertheless, the improvement over calculating
each value as in d1 is significant (10-300 times faster) and enough for my
needs. Thanks to all.
Sebastian Beca
PS: I also tried the d5 version Alex sent, but the results were not the
same, so I couldn't compare.
My final version was:
K = 10
C = 3
N = 2500
I'm not sure what numpy does in the backend, so I can't argue as to
why one should scale better than the other.
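To illustrate the kind of trade-off involved, here is a sketch of a fully broadcast variant (not necessarily any of the d1-d5 versions from the thread): it avoids the loop over C entirely, but builds a C x N x K temporary, which is one plausible reason two versions can scale differently for large N:

import numpy as np

def dist_broadcast(A, B):
    # Fully vectorized: A expands to C x 1 x K and B to 1 x N x K, so the
    # difference is a C x N x K temporary; memory use grows with C * N * K.
    diff = A[:, np.newaxis, :] - B[np.newaxis, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))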
Regards,
Sebastian.
On 6/19/06, Alan G Isaac [EMAIL PROTECTED] wrote:
On Sun, 18 Jun 2006, Tim Hochberg apparently wrote:
Alan G Isaac wrote:
On Sun, 18 Jun 2006, Sebastian Beca apparently wrote:
def dist():
d