On Tue, Dec 27, 2011 at 2:59 PM, Jonathan Guyer <[email protected]> wrote:
>
> On Dec 21, 2011, at 7:27 PM, James Snyder wrote:
>
>> On Wed, Dec 21, 2011 at 5:22 PM, James Snyder <[email protected]>
>> wrote:
>
>>> I expect that the current measurements will be quite sensitive to the
>>> cell volumes, but the discontinuity mentioned seems to persist at 0.39
>>> vs 0.4 even with varying mesh resolution. Am I doing something
>>> foolish or is there something simple I've missed here? Is the issue
>>> related to something happening in Gmsh or FiPy? I see that the cell
>>> count sometimes drops a bit at the transition, but if I include 0.38
>>> and 0.42 the discrepancy doesn't seem proportional to the drop.
>>
>> Scratch this first question. I figured out what was happening. My
>> boundary selection parameters were including the edge of the outer
>> volume once I dropped below 0.40. I'd still like suggestions or
>> responses for the remaining questions though, if possible.
>
> We're glad you were able to sort this out.
>
>>> If I want current density to be zero at certain boundaries normal to
>>> that surface (insulators), what is the correct way to do that? I see
>>> the new constrain approach, but I'm just setting the faceGrad value
>>> vector to all zeros: "potential.faceGrad.constrain([[0], [0],
>>> [0]],outer_edges)" but this isn't quite right.
>
> Zero flux is the default boundary condition in the finite volume method, so
> you shouldn't need to do anything to achieve that.
>
> If you want to constrain the normal current density to some non-zero value,
> then see the FAQ "How do I apply a fixed flux boundary condition?"
>
> Basically, you add a source that consists of the divergence of the flux you
> want:
>
> >>> eqn = (TransientTerm() + ConvectionTerm(convCoeff)
> ...        == DiffusionTerm(coeff=diffCoeff)
> ...        + (mesh.exteriorFaces * fluxMagnitude * mesh._faceNormals).divergence)
>
>
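(As a sanity check, I convinced myself the source-term trick works with a throwaway pure-NumPy version of a 1-D steady diffusion problem: the potential is pinned at the left face and the desired exit flux q is added as a q/dx source in the last cell. The names below are mine, not FiPy's.)

```python
import numpy as np

# 1-D steady diffusion on N cells: Dirichlet u = 0 at the left face,
# and an outgoing flux q imposed at the right face by adding the
# divergence of that flux (q / dx) as a source in the last cell --
# the same trick as the FiPy example above, spelled out by hand.
D, q, L, N = 1.0, 2.0, 1.0, 50      # diffusivity, flux, length, cells
dx = L / N
x = (np.arange(N) + 0.5) * dx       # cell-center coordinates

A = np.zeros((N, N))
b = np.zeros(N)
for i in range(N):
    if i > 0:                       # diffusive flux through left face
        A[i, i] += D / dx**2
        A[i, i - 1] -= D / dx**2
    else:                           # Dirichlet u = 0 via half-cell spacing
        A[i, i] += 2 * D / dx**2
    if i < N - 1:                   # diffusive flux through right face
        A[i, i] += D / dx**2
        A[i, i + 1] -= D / dx**2
    else:                           # zero-flux operator + prescribed-flux source
        b[i] -= q / dx

u = np.linalg.solve(A, b)
# Should reproduce the analytic linear profile u = -(q / D) * x
```

The last cell's equation is just the zero-flux (default) discretization plus the source, yet the solution carries the prescribed flux q through every face, which is exactly what the FAQ recipe promises.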
>>> I tried using physical surface labels in 3D to label faces, and while
>>> the labels came through, none of the faces are actually tagged in
>>> mesh.physicalFaces. I'm not that interested in tagging values, but
>>> since one way to have "internal boundaries" is to carve out a section
>>> of a mesh, then select specific faces of that internal hole, it would
>>> be nice to use the physical labels.
>
> This is just speculation, but the documentation for "Physical Surface" at
>
> http://geuz.org/gmsh/doc/texinfo/gmsh.html#Surfaces
>
> says "the expression-list on the right hand side should contain the
> identification numbers of all the elementary surfaces that need to be grouped
> inside the physical surface." It appears to me that a Surface Loop is not an
> "elementary surface". If I replace your definitions with:
>
> Physical Surface("Pod") = {Ecf1[], Ecf2[], Ecf3[], Ecf4[],Bod1, Bod2,
> Bod3, Bod4, Ecb4[], Ecb3[], Ecb2[], Ecb1[] };
>
> and
>
> Physical Surface("Tank") = {1002, exbox[1], 1033, exbox[6], exbox[11],
> exbox[16] };
>
> then mesh.physicalFaces seem to be defined with non-empty masks.
Ah, thanks! This is much more reliable and accurate in selecting the
correct cells.
>
> mesh.physicalCells["Water"] was already defined in your script, but
> mesh.physicalCells["Pod"] was empty. I noted the difference was that "Water"
> was defined by
>
> Volume(10014) = {10006, 10007};
> Physical Volume("Water") = 10014;
>
> but "Pod" was defined by
>
> Surface Loop(10006) = {Ecf1[], Ecf2[], Ecf3[], Ecf4[],Bod1, Bod2, Bod3,
> Bod4, Ecb4[], Ecb3[], Ecb2[], Ecb1[] };
> Physical Volume("Pod") = 10006;
>
> Changing this to
>
> Surface Loop(10006) = {Ecf1[], Ecf2[], Ecf3[], Ecf4[],Bod1, Bod2, Bod3,
> Bod4, Ecb4[], Ecb3[], Ecb2[], Ecb1[] };
> Volume(10012) = {10006};
> Physical Volume("Pod") = 10012;
>
> seems to resolve that issue, too.
In this case, I actually wanted that portion of the mesh to remain empty,
and I think adding the Volume forces Gmsh to fill it in. I
semi-intentionally left the Physical Volume there as a check on whether I
had accidentally broken something, but perhaps it will end up empty by
default.
>
> For the record, I don't have a clear sense of when you can reuse Gmsh
> identification numbers and when you can't. I think they only need to be
> unique in each geometry category (surfaces vs. volumes).
>
>
>
>>> Is there an easy way to get values for a cross-sectional plane of cell
>>> values on an unstructured mesh like the rectangular prism used in this
>>> example? I see that I can do quite a bit of manipulation in Mayavi,
>>> but it would be nice to have access to something like this without an
>>> extensive custom routine or jumping out to a viewer/postprocessor.
>
> There are two approaches I would take:
>
> One is to use the __call__() method of the field you are interested in to
> obtain interpolated values. The following constructs a 2D mesh that makes an
> XZ slice down the middle, then uses the coordinates of that 2D mesh to
> extract the values of the 3D field at Y=0.
>
> sliceMesh = (Grid2D(Lx=tankwidth, nx=100, Ly=tankheight, ny=100)
>              + [[-tankwidth/2.], [-tankheight/2.]])
>
> sliceCenters = sliceMesh.cellCenters.value
> sliceCenters = numerix.insert(sliceCenters, 1, 0., axis=0)
> slice = CellVariable(mesh=sliceMesh, value=potential(sliceCenters))
>
>
> The other is to extract the values of a slab within some distance of the
> position you are interested in:
>
> x, y, z = mesh.cellCenters
> slab = potential[(z < HEIGHT + THICKNESS / 2.) &
>                  (z > HEIGHT - THICKNESS / 2.)]
>
>
> The first is probably more useful for display purposes, the second is much
> faster.
Thanks, those seem to work quite well. I'd also like to mask out the
empty volume from the Grid2D in the middle. I suppose I can either use
the same measurements to build an unstructured mesh with the center
hollowed out (à la the 2D example code posted), or construct a planar
polygon from the hollow section and use something like matplotlib's
points_inside_poly:
http://matplotlib.sourceforge.net/api/nxutils_api.html#matplotlib.nxutils.points_inside_poly
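In case it's useful, here is a minimal pure-NumPy stand-in for points_inside_poly (even-odd ray casting; matplotlib's version would be faster, but this avoids the dependency, and I believe newer matplotlib exposes the same test as matplotlib.path.Path.contains_points):

```python
import numpy as np

def points_inside_poly(points, verts):
    """Even-odd ray-casting point-in-polygon test.

    points: (M, 2) array of x, y coordinates to test
    verts:  (K, 2) array of polygon vertices, in order
    Returns a boolean mask of length M.
    """
    points = np.asarray(points, dtype=float)
    verts = np.asarray(verts, dtype=float)
    x, y = points[:, 0], points[:, 1]
    inside = np.zeros(len(points), dtype=bool)
    x1, y1 = verts[-1]
    for x2, y2 in verts:
        # Does edge (x1,y1)-(x2,y2) straddle the horizontal ray at height y?
        crosses = (y1 > y) != (y2 > y)
        # x-coordinate where the edge crosses that ray (NaN/inf on
        # horizontal edges is harmless: 'crosses' is False there)
        with np.errstate(divide="ignore", invalid="ignore"):
            xcross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
        inside ^= crosses & (x < xcross)
        x1, y1 = x2, y2
    return inside

# Example: mask slice-mesh cell centers that fall inside a square "hole"
hole = np.array([[0.25, 0.25], [0.75, 0.25], [0.75, 0.75], [0.25, 0.75]])
pts = np.array([[0.5, 0.5], [0.1, 0.1]])
mask = points_inside_poly(pts, hole)   # first point inside, second not
```

The resulting mask could then be used to blank out the hollow region of the Grid2D slice before plotting.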
>
>
>>> Also, Is there any likelihood or possibility that one might be able to
>>> do parallel computation on unstructured meshes using Trillinos in the
>>> future? I see that currently the parallel computation is only
>>> supported for gridded meshes.
>
> Parallel computation on Gmsh-generated meshes is available in the next
> (imminent) release.
This is nice to hear. I blindly tried it, and it looks like it's in the
2.2-dev series?
On this front, I have a few other questions that relate partly to serial
and partly to parallel-context computations:

If I want the current measurements I was computing near the end of the
posted code, and I want to vary parameters that change how the mesh is
generated (the position of the pod), is there anything in particular I
need to do to ensure proper synchronization in parallel if I just put
all of that in a for loop as in the example? Or should I be using
synchronization primitives from mpi4py?

Likewise, for the computation of voltages from the CellVariable, or of
currents derived from gradients of that cell variable, should I make
sure I only do that on one node (maybe node 0?), and will it use cells
from the global set? Say I'm using these kinds of terms for the current
computation:

topflux = (flux*mesh.physicalFaces['EcBack']).dot(projections).sum()

and then calling topflux.getNumericValue(). I see there's a
getGlobalValue, but that doesn't apply to something that's a constructed
operation.
I've also encountered some failures in single-CPU serial computations
where I sweep a range of parameters that generate a new mesh each time.
They don't seem to be related to the parameter selections (i.e.,
retrying the parameter where the failure occurred succeeds), but running
a number of these computations serially seems to eventually fail at
something related to meshing. I haven't completely tracked this down,
but I'll file a ticket if it looks like a bug on FiPy's side rather than
something I'm doing or an issue with Gmsh.
One last question: is it possible to have units apply to the dimensions
of the mesh? It would be nice to have some unit verification applied to
the calculations being done, but it seems like that would depend on
knowing what the dimensions of the mesh are in, say, meters?
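To make that concrete, what I'm imagining is something like this toy sketch (the Quantity class is invented purely for illustration, not anything in FiPy), but with the mesh itself carrying the unit:

```python
# Toy unit bookkeeping: tag values with a unit string and refuse to
# combine mismatched units. (Purely illustrative -- not how FiPy
# stores mesh dimensions.)
class Quantity:
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        # Adding lengths only makes sense if the units agree
        if self.unit != other.unit:
            raise ValueError("unit mismatch: %s vs %s" % (self.unit, other.unit))
        return Quantity(self.value + other.value, self.unit)

    def __mul__(self, other):
        # Multiplying compounds the units, e.g. m * m -> m*m
        return Quantity(self.value * other.value,
                        "%s*%s" % (self.unit, other.unit))

dx = Quantity(0.1, "m")
Lx = Quantity(2.0, "m")
area = dx * Lx                      # carries the unit "m*m"
try:
    bad = dx + Quantity(5.0, "s")   # mixing meters and seconds fails
except ValueError as err:
    problem = str(err)
```

If the mesh coordinates carried something like this, derived quantities (currents, voltages) could be checked for dimensional consistency automatically.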
_______________________________________________
fipy mailing list
[email protected]
http://www.ctcms.nist.gov/fipy
[NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy]