On Sun, May 4, 2014 at 9:34 PM, srean srean.l...@gmail.com wrote:
Hi all,
is there an efficient way to do the following without allocating A, where

A = np.repeat(x, [4, 2, 1, 3], axis=0)
c = A.dot(b)  # b.shape
If x is a 2D array you can call repeat **after** dot, not before, which avoids allocating the large intermediate A entirely.
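That identity can be checked numerically. The shapes below are assumptions consistent with the later message in this thread, where A.shape == (10, 10), so x is taken to be (4, 10) and b a length-10 vector:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 10))
b = rng.standard_normal(10)
counts = [4, 2, 1, 3]

# Naive version: materialize the repeated matrix A, then multiply.
A = np.repeat(x, counts, axis=0)   # A.shape == (10, 10)
c_naive = A.dot(b)

# Cheaper version: dot first (4 rows instead of 10), then repeat the
# small result vector. Same answer, no 10x10 intermediate.
c_fast = np.repeat(x.dot(b), counts, axis=0)

assert np.allclose(c_naive, c_fast)
```

This works because each row of A is an exact copy of some row of x, so the corresponding entries of A.dot(b) are copies of the entries of x.dot(b).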
Great ! thanks. I should have seen that.
Is there any way array multiplication (as opposed to matrix multiplication)
can be sped up without forming A and (A * b) explicitly?

A = np.repeat(x, [4, 2, 1, 3], axis=0)  # A.shape == (10, 10)
c = np.sum(b * A, axis=1)  # b.shape
If b is indeed big, I don't see a problem with the Python loop, elegance
aside; Cython will not beat it by much on that front.
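A sketch of that loop, under the same assumed shapes as above (x is (4, 10), repeated with counts [4, 2, 1, 3]): within each group, every row of A equals the same row of x, so each c[i] reduces to a dot product of b[i] with that one row of x. A short Python loop over the four groups then avoids materializing both A and b * A:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 10))
b = rng.standard_normal((10, 10))
counts = [4, 2, 1, 3]

# Naive version for reference: materialize A and the product b * A.
A = np.repeat(x, counts, axis=0)
c_naive = np.sum(b * A, axis=1)

# Loop version: group g occupies rows [start, start + n) of A, and all
# of those rows equal x[g], so c[start:start + n] = b[start:start + n] @ x[g].
c = np.empty(len(b))
start = 0
for g, n in enumerate(counts):
    c[start:start + n] = b[start:start + n].dot(x[g])
    start += n

assert np.allclose(c, c_naive)
```

The per-group work is a BLAS matrix-vector product over many elements, so the overhead of the four loop iterations is negligible.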
On 5/3/14, 11:56 PM, Siegfried Gonzi wrote:
Hi all
I noticed IDL uses at least 400% (4 processors or cores) out of the box
for simple things like reading and processing files, calculating the
mean, etc.
I have never seen this happen with numpy except for the linear algebra
stuff (e.g.
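That pattern is expected: NumPy's linear algebra routines delegate to whatever BLAS/LAPACK library it was built against, and backends such as OpenBLAS or MKL may use several threads, while NumPy's own ufunc loops (mean, elementwise arithmetic) and file reading run single-threaded. A minimal way to inspect which backend is linked, sketched below:

```python
import numpy as np

# Print the BLAS/LAPACK build configuration; a multithreaded backend
# (OpenBLAS, MKL, Accelerate) is what lets matrix products use many cores.
np.show_config()

# The matrix product goes through BLAS; mean is NumPy's own
# single-threaded reduction loop.
a = np.ones((500, 500))
product = a.dot(a)   # BLAS GEMM call, possibly multithreaded
m = a.mean()         # single-threaded

assert product[0, 0] == 500.0
assert m == 1.0
```

So seeing 400% CPU from IDL on mean or file I/O reflects explicit multithreading in those routines, which NumPy itself does not do outside BLAS-backed calls.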