A somewhat related question that also pertains to the recent memory
use question:  I'm importing EM fields over unstructured grids as
DX vector fields, but I usually use Compute to pick off individual
components (i.e., x, y, or z) and plot the resulting scalar field.  My
data sets are getting so big now that I'm running into memory trouble.
Would I be better off (memory-wise) importing the scalar components
as scalar DX fields and using Compute to create the vector field for
those infrequent times when I want to visualize the vector glyphs?

Thanks!


I think the math is roughly this:
x = y = z = 1 mu (memory unit) each
[x,y,z] = 3 mu

Import as vector,   3 mu
Compute (a.x)     ++1 mu
Compute (a.y)     ++1 mu
Compute (a.z)     ++1 mu
Total:              6 mu

Import as scalars,  3 mu (assuming you bring all 3 in at the start)
Compute ([a,b,c]) ++3 mu
Total:              6 mu
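
In script-language terms, the second tally is just three scalar Imports plus one Compute; the file and variable names below are placeholders, so adjust them to your own data:

    // Import the three scalar components (3 mu total)...
    ex = Import("ex.dx");
    ey = Import("ey.dx");
    ez = Import("ez.dx");
    // ...and rebuild the vector only when you actually want glyphs (+3 mu).
    evec = Compute("[$0, $1, $2]", ex, ey, ez);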

So, in a preprocessing net: Import the vector field, Compute (a.x), and Export it using the "dx ieee 2" format (which writes a nice ASCII header file and a nasty binary file for the data); then repeat for a.y and a.z.
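
In script form that preprocessing pass might look roughly like this (file names are placeholders; in the VPE it is the same thing as three Compute/Export pairs hanging off one Import):

    // One-time pass: read the vector field, peel off each component,
    // and write each component out as its own scalar .dx file.
    efield = Import("efield.dx");
    ex = Compute("$0.x", efield);
    ey = Compute("$0.y", efield);
    ez = Compute("$0.z", efield);
    Export(ex, "ex", "dx ieee 2");
    Export(ey, "ey", "dx ieee 2");
    Export(ez, "ez", "dx ieee 2");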

Now you can viz the vectors in one session, then Disconnect/Start Server to regain the memory and Import each component separately as needed. Worst case is 3 mu per session. The same net should work for both cases with a bit of Flow Control, probably a few Inquires to detect the data rank and shape after Import.
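
The rank check could be as simple as the sketch below; I'm guessing at the exact inquiry string, so verify it against the Inquire reference page, and in the net itself you would feed the selector into Switch or Route so that only the branch appropriate to the data actually runs:

    data = Import("efield.dx");
    // "is vector" is an assumed inquiry string; check the Inquire docs.
    isvec = Inquire(data, "is vector");
    // Turn the 0/1 answer into a 1-based selector for Switch/Route.
    sel = Compute("$0 + 1", isvec);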

--
Chris Pelkie
Managing Partner
Practical Video LLC
30 West Meadow Drive,  Ithaca,  NY  14850
