On 3/17/20 11:04 PM, Vishal Subramanian wrote:
To effect this, I modified the step-27 tutorial; the exact changes are described further below. To test whether the code is working correctly, I took the exact solution to be -(x+y+z). Hence, the right-hand side of the equation is zero everywhere in the domain.
Expectation :
Since the exact solution is linear with respect to position, I expected the
L2 norm of my error to be less than the tolerance to which the linear system
is solved. As I keep solving the problem on successively refined meshes, I
expected the error to remain of the same order.
Observation :
I observe that the error is as expected in my first cycle (where I am solving
on a uniform mesh with basis functions of order 1). I believe this implies
that the boundary conditions have been applied and all of the assembly has
been done correctly.
In the second cycle, the error is still below tolerance. The mesh contains
elements of order 1 and 2.
From the third cycle onward, the error is large. I am not able to explain
this sudden increase in the error.
Questions :
Could you please let me know where I am going wrong?
Vishal,
I could try to dig into your program, but I think it's more important to teach
a person how to fish than to give them fish :-)
So here are a few questions that may help you figure out what might be wrong:
1/ The L2 norm of the error and the linear solver tolerance are not
particularly well related. That's because the L2 norm of the error is computed as
sqrt{ \int (u-u_h)^2 }
and so has the units of the solution vector times length^(d/2). On the other
hand, the linear solver tolerance is a residual and is computed as
sqrt{ \sum_i (F_i - \sum_j A_{ij} U_j)^2 }
In other words, it has the units of the right hand side vector, which is
generally the units of the right hand side function times length^d (from the
integration). It's easy to see that depending on the size of your domain, the
size of coefficients in the problem, the physical units you use to measure
(meters, lightyears, Angstrom) you will be able to scale these two quantities
in completely different ways. So there is a conceptual problem here already.
2/ Now, you say that the error is "large". It's hard to tell how large that
actually is -- 1e-5, 1e+32? If it's 1e32, we would have seen that in the
picture. If it's 1e-5, is that large? I don't actually know -- I'd say "it
depends". One thing worth trying out, though, is to see where the error
actually is large. You suggest that it has something to do with higher order
elements. Leaving aside the fact that we've been developing and testing this
code for 15 years now and that there are several hundreds of tests for this
part of the library (meaning: I think it's unlikely, though not impossible
that there is a bug you run into), you can output the vector with cell-wise
errors via DataOut and see where the error is located. Is it at the boundary?
On cells with polynomial degree 3 and higher? Often *seeing* where the problem
is helps narrow down where in the code one has to look.
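The cell-wise error output described above can be sketched roughly as follows. This is a fragment, not a complete program: it assumes a step-27-style class with members triangulation, dof_handler, fe, and solution, plus an ExactSolution function object, none of which are defined here; for an hp discretization you would pass an hp::QCollection<dim> instead of the single QGauss<dim>.

```cpp
// Compute the L2 error on each cell individually...
Vector<float> error_per_cell(triangulation.n_active_cells());
VectorTools::integrate_difference(dof_handler,
                                  solution,
                                  ExactSolution<dim>(),
                                  error_per_cell,
                                  QGauss<dim>(fe.degree + 1),
                                  VectorTools::L2_norm);

// ...and write it out as a cell-wise field next to the solution, so one
// can see in VisIt/ParaView where the error actually concentrates.
DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
data_out.add_data_vector(solution, "solution");
data_out.add_data_vector(error_per_cell, "cellwise_L2_error");
data_out.build_patches();

std::ofstream output("error.vtu");
data_out.write_vtu(output);
```

Opening the resulting file then answers the questions above directly: whether the error sits at the boundary, or only on cells of a particular polynomial degree.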
Best
W.
--
------------------------------------------------------------------------
Wolfgang Bangerth email: [email protected]
www: http://www.math.colostate.edu/~bangerth/