On Oct 23, 2014, at 1:36 PM, Seufzer, William J. (LARC-D307) 
<[email protected]> wrote:

> Running on 8 cores I'm able to save a file via fp.dump.write(cTemp, 
> 'filename.dump'). Before dumping I printed cTemp.shape; it was (14385,). The
> variable is on a mesh derived from a 3D .geo file.
> 
> From the command line I run python (same version as run on the cluster), load 
> the .dump file and the size is 
> 
>>>> cTemp.shape
> (89833, 14385)

cTemp.shape of (14385,) is almost certainly a local, per-core value. I say
that, in part, because it's O(1/8) of 89833. My guess is that, called from the
8-core job on the cluster, mesh.globalNumberOfCells would return 89833, and that
when loading the dump file on a single core FiPy is trying to place the 
14385-element vector at every cell. I don't know why this is happening; I 
thought we'd tested this case, but evidently not. It's challenging to automate 
because our tests run either in parallel or in serial, and this case requires 
both.
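To illustrate what I suspect is happening, here's a sketch with toy sizes (the
real arrays would run to ~10 GB): ordinary NumPy broadcasting of a per-core
vector against a global-length mesh produces exactly this kind of
(global, local) shape. The numbers below are stand-ins, not FiPy calls.

```python
import numpy as np

# Toy stand-ins: 8 global cells for 89833, a 5-element vector for (14385,)
global_cells = 8
local_vector = np.arange(5.0)

# Placing the whole local vector "at every cell" is just broadcasting:
placed = np.ones((global_cells, 1)) * local_vector
print(placed.shape)  # (8, 5) -- the analogue of (89833, 14385)
```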

It's no surprise that VTKCellViewer chokes on this, as (89833, 14385) is over a 
billion cells. 
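For the record, the arithmetic:

```python
# The number of values VTKCellViewer would be asked to handle:
print(89833 * 14385)  # 1292247705 -- about 1.3 billion
```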

> So, it's kind of there, but not quite. I'd be glad to send my .geo file and 
> relevant code under private email.

Go ahead and send it to me, although I probably won't get a chance to look at
it for a couple of weeks.

Hmmm... The fix Daniel instituted to address 
http://thread.gmane.org/gmane.comp.python.fipy/3550 only applies to grids. I 
don't know if we've done anything to think about the parallel pickling of Gmsh 
meshes.


I apologize for the state of things. For a long time I've not been very happy 
with our ability to export (or import) data sets, particularly for irregular 
geometries, but in around a decade you're the first to push very hard to have 
it, so I couldn't justify spending a lot of time on it. Now I can.
_______________________________________________
fipy mailing list
[email protected]
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]