On 1/20/20 7:59 AM, 'Maxi Miller' via deal.II User Group wrote:
> 
> I wrote a short test program which should solve the diffusion equation 
> using the time-stepping class, and implemented both methods. When 
> calculating the matrix and applying it to my solution vector, I get a 
> different result compared to reading the gradients from the solution 
> with get_function_gradients() and multiplying it with the gradients 
> returned by shape_grad(). The results obtained from the matrix 
> multiplication are correct compared to the expected solution, the 
> results obtained from the direct approach are not. Why is that?

Maxi -- if I understand you correctly, you're asking what the difference 
is between computing

   F_i = \int \grad\varphi_i  .  grad u_h

and

   F_i = (AU)_i

where A=Laplace matrix, U=coefficient vector corresponding to u_h.

There shouldn't be a difference in principle, but you have to pay 
attention to what hanging nodes and Dirichlet boundary conditions do. In 
particular, you might have to call F.condense() in the first case.
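
To illustrate why the two assemblies agree in the absence of constraints, here is a small self-contained sketch (not deal.II code -- a hypothetical 1D analogue with linear elements on a uniform mesh, using NumPy). The element-wise gradient of u_h plays the role of get_function_gradients(), and the hat-function gradients play the role of shape_grad():

```python
import numpy as np

n = 8                             # number of 1D linear elements on [0, 1]
h = 1.0 / n
rng = np.random.default_rng(0)
U = rng.standard_normal(n + 1)    # arbitrary coefficient vector for u_h

# Stiffness (Laplace) matrix A_ij = \int phi_i' phi_j' dx,
# assembled element by element.
A = np.zeros((n + 1, n + 1))
grads = np.array([-1.0 / h, 1.0 / h])   # gradients of the two hat functions
for k in range(n):                      # loop over elements [x_k, x_{k+1}]
    for i in range(2):
        for j in range(2):
            # gradients are constant per element, so one-point quadrature is exact
            A[k + i, k + j] += grads[i] * grads[j] * h

# Direct assembly: F_i = \int phi_i' * u_h' dx, reading u_h' from U
# ("get_function_gradients" analogue) and using the shape-function
# gradients ("shape_grad" analogue).
F = np.zeros(n + 1)
for k in range(n):
    u_h_grad = (U[k + 1] - U[k]) / h    # gradient of u_h on this element
    for i in range(2):
        F[k + i] += grads[i] * u_h_grad * h

# Without hanging nodes or boundary constraints, F and A @ U coincide
# to machine precision.
print(np.max(np.abs(F - A @ U)))
```

If you now eliminate Dirichlet rows/columns from A (or condense constraints into it) but leave the directly assembled F untouched, the two vectors will differ exactly in the constrained entries -- which is the comparison suggested above.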

You only say that the results are different, but not *how* they are 
different. Have you looked at that? Are these two vectors different only 
in hanging nodes? Only for shape functions at the boundary?

Best
  W.

-- 
------------------------------------------------------------------------
Wolfgang Bangerth          email:                 [email protected]
                            www: http://www.math.colostate.edu/~bangerth/
