The easiest next step is to use float32 instead of float64, which cuts memory
consumption in half:

XArray = np.arange(0, NrHorPixels, 1./np.sqrt(NrCellsPerPixel), dtype=np.float32)
YArray = np.arange(0, NrVerPixels, 1./np.sqrt(NrCellsPerPixel), dtype=np.float32)
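As a quick sanity check (not from the thread; the 2048x2048 shape is just an illustrative size), the factor-of-two saving is easy to confirm by comparing `nbytes` on a same-shaped array in both dtypes:

```python
import numpy as np

# Same-shaped grid in the default float64 and in float32.
a64 = np.zeros((2048, 2048))                    # float64 by default
a32 = np.zeros((2048, 2048), dtype=np.float32)

print(a64.nbytes, a32.nbytes)  # float32 uses exactly half the bytes
```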

If that is not enough, you can exploit the separability of the Gaussian (it
should also run faster):

Zx = Amplitude*np.exp(-(XArray-GaussianCenterX)**2/(2*SigmaX**2))
Zy = np.exp(-(YArray-GaussianCenterY)**2/(2*SigmaY**2))
Z = Zx * Zy[:,None]
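A minimal sketch (using a small grid and illustrative center/sigma values, not the 512-pixel setup from the thread) verifying that the separable outer product reproduces the full 2D evaluation:

```python
import numpy as np

# Small grid; cx, cy, sx, sy, amp stand in for GaussianCenterX/Y,
# SigmaX/Y and Amplitude from the thread.
x = np.arange(0, 8, 0.25, dtype=np.float32)
y = np.arange(0, 8, 0.25, dtype=np.float32)
cx, cy, sx, sy, amp = 4.0, 4.0, 1.0, 1.0, 150.0

# Full 2D evaluation via broadcasting (what the original code computes)
full = amp * np.exp(-((x - cx)**2/(2*sx**2) + (y[:, None] - cy)**2/(2*sy**2)))

# Separable evaluation: two 1D exponentials combined by an outer product,
# which never materializes an intermediate 2D exponent array
zx = amp * np.exp(-(x - cx)**2/(2*sx**2))
zy = np.exp(-(y - cy)**2/(2*sy**2))
sep = zx * zy[:, None]

print(np.allclose(full, sep, rtol=1e-4))  # agree to float32 rounding
```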

  Nadav

-----Original Message-----
From: [email protected] on behalf of sicre
Sent: Thu 07-Oct-10 12:21
To: [email protected]
Subject: Re: [Numpy-discussion] Meshgrid with Huge Arrays
 

I used your suggestion, but I still get the errors mentioned, now on the
definition of Z (the last line of the following code):

from pylab import *
import numpy as np

#VARIABLES
NrHorPixels=512
NrVerPixels=512
NrCellsPerPixel=16
GaussianCenterX=256
GaussianCenterY=256
SigmaX=1
SigmaY=1
Amplitude = 150

#3D ARRAY
XArray = np.arange(0, NrHorPixels, 1./sqrt(NrCellsPerPixel))
YArray = np.arange(0, NrVerPixels, 1./sqrt(NrCellsPerPixel))
Z = Amplitude*exp(-((XArray-GaussianCenterX)**2/(2*SigmaX**2)
                    + (YArray[:,None]-GaussianCenterY)**2/(2*SigmaY**2)))

#PLOT
#pcolormesh(Z)
#colorbar()
-- 
View this message in context: 
http://old.nabble.com/Meshgrid-with-Huge-Arrays-tp29902859p29904941.html
Sent from the Numpy-discussion mailing list archive at Nabble.com.

_______________________________________________
NumPy-Discussion mailing list
[email protected]
http://mail.scipy.org/mailman/listinfo/numpy-discussion

