Hello FiPy-ers,
I am interested in using a term of the form
A * DIV(GRAD(B))
where A and B are both functions of space and time, DIV is the
divergence operator, and GRAD is the gradient operator.  It looks
similar to a diffusion term; however, if I use
ImplicitDiffusionTerm(coeff=A)
in my equation for B, it evaluates to
DIV(A * GRAD(B))

This is not quite the term I need since, if we carry the operation out, we get
GRAD(A) dot GRAD(B) + A*DIV(GRAD(B)).
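
The continuum identity above is easy to confirm numerically before worrying about the discretization. The sketch below (plain Python, no FiPy; the choice A(x) = x**2, B(x) = sin(x) and the point x = 0.7 are arbitrary) checks in 1D that d/dx(A dB/dx) = dA/dx dB/dx + A d2B/dx2 with central differences:

```python
import math

# Check DIV(A*GRAD(B)) == GRAD(A).GRAD(B) + A*DIV(GRAD(B)) in 1D,
# using central differences for A(x) = x**2, B(x) = sin(x).

def A(x):
    return x * x

def B(x):
    return math.sin(x)

h = 1e-4   # step for the finite differences
x = 0.7    # arbitrary sample point

def d(f, x):    # central first derivative
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x):   # central second derivative
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

lhs = d(lambda y: A(y) * d(B, y), x)          # DIV(A*GRAD(B))
rhs = d(A, x) * d(B, x) + A(x) * d2(B, x)     # GRAD(A).GRAD(B) + A*DIV(GRAD(B))

print(abs(lhs - rhs))   # small: the identity holds in the continuum
```

So the identity is exact in the continuum; the trouble described below is purely a property of the discrete operators.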

What I have tried is to use the term
ImplicitDiffusionTerm(coeff=A) - dot(A.getGrad(), B.getGrad())
in my equation for B.  However, this doesn't evaluate the way I
expect it to.  The confusion arises from the difference between
gradients evaluated at the faces and gradients evaluated at the cell
centers.
The very short script at www.rpi.edu/~gathrw/example_code.txt
illustrates what I'm talking about.  In that script I show that
(A.getArithmeticFaceValue()*B.getFaceGrad()).getDivergence()
  - (dot(B.getGrad(), A.getGrad()) + A*(B.getFaceGrad().getDivergence()))
does not equal zero, even though in the world outside of finite
differencing it does.
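
The same mismatch can be reproduced without FiPy. The sketch below (a minimal 1D finite-volume discretization on a uniform mesh; A(x) = x**2, B(x) = sin(x), and the mesh size are arbitrary choices) compares the divergence of the face flux against the cell-centered product-rule expansion on the interior cells:

```python
import math

# 1D finite-volume demonstration that the product rule does NOT hold
# exactly discretely:
#   div(A_face * gradB_face)  !=  gradA.gradB + A * div(gradB_face)
# because the left side uses face-centered quantities while gradA.gradB
# uses cell-centered (central-difference) gradients.

N, L = 50, 1.0
dx = L / N
x = [(i + 0.5) * dx for i in range(N)]      # cell centers
A = [xi * xi for xi in x]
B = [math.sin(xi) for xi in x]

# Face-centered gradient and arithmetic face value (interior faces only).
gradB_face = [(B[i + 1] - B[i]) / dx for i in range(N - 1)]
A_face = [(A[i] + A[i + 1]) / 2.0 for i in range(N - 1)]

mismatch = 0.0
for i in range(1, N - 1):                   # interior cells
    # LHS: divergence of the face flux A_face * gradB_face
    lhs = (A_face[i] * gradB_face[i] - A_face[i - 1] * gradB_face[i - 1]) / dx
    # RHS: cell-centered GRAD(A).GRAD(B) plus A * DIV(GRAD(B))
    gradA_cell = (A[i + 1] - A[i - 1]) / (2 * dx)
    gradB_cell = (B[i + 1] - B[i - 1]) / (2 * dx)
    lapB = (gradB_face[i] - gradB_face[i - 1]) / dx
    rhs = gradA_cell * gradB_cell + A[i] * lapB
    mismatch = max(mismatch, abs(lhs - rhs))

print(mismatch)   # nonzero, even though the continuum identity is exact
```

Expanding the two sides in Taylor series, the pointwise difference works out to (A_{i+1} - 2A_i + A_{i-1})(B_{i+1} - 2B_i + B_{i-1})/(4 dx^2), roughly A''B'' dx^2 / 4, so it is nonzero on any finite mesh and only vanishes as the mesh is refined.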

What is the best way to implement the term I want?  Is there a
preferable syntax, or should I be more clever with how I form my
correction term [dot(A.getGrad(), B.getGrad())]?

Best,
 - Will Gathright
Rensselaer Polytechnic Institute
Department of Material Science and Engineering
