On Fri, Oct 18, 2013 at 9:33 AM, Daniel Wheeler
<[email protected]>wrote:

> On Thu, Oct 17, 2013 at 5:19 PM, James Snyder <[email protected]>
> wrote:
>
> >
> > The result of this is that it interpolates values inside the pill-shaped
> > region that should be empty, so what I've been doing is getting the
> > faceCenters for the faces of that pill, computing a convex hull, and then
> > using that to (1) mask out values that got interpolated in the region
> > where there were no cells and (2) plot a nice shaded region covering that.
>
> Okay, so you just need the global face centers for the boundary
> without any topology. Like you said before, you can just use the MPI
> primitives to do this. Ordering and overlapping points are not a
> problem for a convex hull calculation. So the following will give you
> a non-unique subset of the global face centers: the right faces in this
> case, but I assume you can select faces with whatever geometric
> criterion you need.
>
>   import fipy as fp
>   import numpy as np
>
>   m = fp.Grid2D(nx=10, ny=10)
>
>   fcr = m.faceCenters[:,np.array(m.facesRight)]
>
>   fcrGlobal = np.concatenate(m.communicator.allgather(np.array(fcr)), axis=-1)
>
>   if m.communicator.procID == 0:
>       print(fcrGlobal)
>

Success! It also works without modification on non-MPI runs, nice.
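
For reference, here's a rough sketch of the hull-and-mask step I described earlier. The face-center array and the sample grid below are illustrative stand-ins (in practice the points come from the gathered fcrGlobal array), and it assumes scipy is available for the hull and point-in-hull tests:

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

# Stand-in for the gathered face centers (2 x N array of x, y coordinates);
# in the real script this would be the fcrGlobal array from allgather.
faceCenters = np.array([[0., 1., 1., 0., 0.5],
                        [0., 0., 1., 1., 0.5]])

points = faceCenters.T                     # scipy expects N x 2
hull = ConvexHull(points)
# Triangulate the hull vertices so we can do point-in-hull tests.
tri = Delaunay(points[hull.vertices])

# Sample grid standing in for the interpolated values; mask the ones that
# fall inside the hull, i.e. inside the empty pill-shaped region.
xx, yy = np.meshgrid(np.linspace(-0.5, 1.5, 5), np.linspace(-0.5, 1.5, 5))
samples = np.column_stack([xx.ravel(), yy.ravel()])
inside = tri.find_simplex(samples) >= 0    # True where the point is in the hull
```

The `inside` mask can then be applied to the interpolated field before plotting.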


> >> How are you finding using FiPy in parallel? What sort of speedups
> >> are you getting?
> >
> >
> > Qualitatively, the solver "feels" like it scales well with added cores,
> but
> > I haven't run numbers yet.  I'll try to do a little instrumentation on it
> > for upcoming solver runs.
>
> Thanks for the feedback.
>

I've started by sticking some time.time() calls before/after the call out
to Gmsh and the equation.solve() calls. Is that adequate for timing the
meshing and solving stages?
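
Concretely, the instrumentation looks something like the following. The timed bodies here are stand-ins; in the real script they would wrap the Gmsh call and equation.solve():

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, results):
    # Wall-clock timing via time.time(), as described above; a finer-grained
    # clock could be substituted, but this matches what I'm doing now.
    start = time.time()
    yield
    results[label] = time.time() - start

results = {}
with timed("meshing", results):
    sum(range(100000))    # stand-in for the call out to Gmsh
with timed("solving", results):
    sum(range(100000))    # stand-in for equation.solve()
print(results)
```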

Now that I've actually collected some numbers on varied mesh sizes,
comparing non-MPI (pysparse) against mpirun -np 8 --trilinos (on an 8-core
machine), I'm seeing less than 2x speedup for the solver section on a 3-D
mesh with element counts from 100k-750k. I'm re-running things now to check
that the comparison was consistent, since at one point I was experimenting
with the solver and completion conditions. I presume the fairest comparison
would use a fixed solver type (LinearPCG, for example) and the same
termination conditions (tolerance/iterations)?

Do you have any expected scaling characteristics? I'd be happy to
instrument this a bit and explore the parameter space if it would be of use
or interest. I see that there was some work done on profiling FiPy, but it
doesn't look like those tools made it into the main branch.


> > One thing I have noticed is that the Python processes still seem to be
> > pegging the CPUs while I presume they should be waiting on Gmsh to
> compute a
> > mesh.
>
> Strange, I have no idea what's happening there.
>

I'll look into that a little more, since it's unexpected. It's been a while
since I've tried to debug Python with MPI, but I could at least try to
provide a test case that reproduces it for me, along with version
information for the dependencies being used.


>
> --
> Daniel Wheeler
> _______________________________________________
> fipy mailing list
> [email protected]
> http://www.ctcms.nist.gov/fipy
>   [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
>



-- 
James Snyder
Biomedical Engineering
Northwestern University
ph: (847) 448-0386
