I just use Tecplot to visualize the results directly. The vorticity contour from
my simple code is continuous, while the result from deal.II is discontinuous
(without L2 projection). Is it possible that the direct solver in Intel MKL did
a similar projection step internally?
___
Maxi,
As usual, it is much easier to help if you provide a complete minimal
example and say how the result differs from what you expect.
Does it only scale certain vector entries? Are the results correct when
running with one MPI process?
How does your approach differ from
https://github.com/deali
I tried implementing it as

data.cell_loop(&LaplaceOperator::local_apply_cell,
               this,
               dst,
               src,
               //true,
               [&](const unsigned int start_range,
                   const unsigned int end_range) {
On 1/18/20 9:25 AM, David Eaton wrote:
> Thank you for your explanations. Basically, I form a weak form of the PDE
> for one element and numerically integrate it at the Gauss points based on
> the interpolation from the local nodes. Subsequently, I assemble the weak
> forms from all elements
Thank you for your explanations. Basically, I form a weak form of the PDE for
one element and numerically integrate it at the Gauss points based on the
interpolation from the local nodes. Subsequently, I assemble the weak forms
from all elements into a global system matrix based on a local-to-global mapping.
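The local-to-global assembly step described above can be sketched in plain C++. This is a minimal illustration only (a hypothetical 1D mesh of linear elements with a dense global matrix; a real code like deal.II uses a DoFHandler and sparse storage), not the poster's actual implementation:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Assemble a global stiffness matrix for n_cells equally sized 1D linear
// elements of length h. Each cell's 2x2 local matrix is scattered into the
// global matrix through the local-to-global index map.
std::vector<std::vector<double>> assemble_global_matrix(unsigned n_cells,
                                                        double h)
{
  const unsigned n_dofs = n_cells + 1;
  std::vector<std::vector<double>> K(n_dofs,
                                     std::vector<double>(n_dofs, 0.0));

  // Local stiffness matrix of a 1D linear element: (1/h) * [[1,-1],[-1,1]].
  const double k_local[2][2] = {{1.0 / h, -1.0 / h}, {-1.0 / h, 1.0 / h}};

  for (unsigned cell = 0; cell < n_cells; ++cell)
    {
      // Local-to-global map: local node i of this cell -> global dof cell+i.
      const unsigned dof[2] = {cell, cell + 1};
      for (unsigned i = 0; i < 2; ++i)
        for (unsigned j = 0; j < 2; ++j)
          K[dof[i]][dof[j]] += k_local[i][j];
    }
  return K;
}
```

Note that interior diagonal entries receive contributions from two neighboring cells, which is exactly the accumulation the local-to-global mapping produces.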
Yes, like here:
https://github.com/dealii/dealii/blob/b84270a1d4099292be5b3d43c2ea65f3ee005919/tests/matrix_free/pre_and_post_loops_01.cc#L100-L121
On Saturday, 18 January 2020 12:57:24 UTC+1, Maxi Miller wrote:
>
> In step-48 the inverse mass matrix is applied by moving the inverse data
> into
On 1/17/20 9:11 PM, David Eaton wrote:
>
> Thanks for the help from you and the others. The issue of the discontinuous
> vorticity field is resolved. Theoretically, I understand that the gradient
> should be discontinuous for C0 elements. However, I still want to convince
> myself with an explanation. While
I.e., I should add an elementwise multiplication with the inverse mass matrix
vector as a postprocessing function?
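That elementwise postprocessing step can be sketched in plain C++ as follows. This assumes the mass matrix has already been reduced to a diagonal stored as a vector (as in the step-48 approach of moving the inverse data into a vector); the function name is illustrative, not deal.II API:

```cpp
#include <cassert>
#include <vector>

// Multiply each entry of dst by the corresponding entry of the inverse
// (diagonal) mass matrix -- the elementwise scale() step.
void apply_inverse_mass(std::vector<double> &dst,
                        const std::vector<double> &inv_mass_diagonal)
{
  for (std::size_t i = 0; i < dst.size(); ++i)
    dst[i] *= inv_mass_diagonal[i];
}
```

In deal.II this is what `VectorType::scale()` does entrywise; the point of the thread is where, not how, this multiplication is applied.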
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see
https://groups.google.com/d/forum/dealii?hl=en
Hi Maxi,
I guess I am not the right person to explain the reason for that assert to
you. But what you are doing while calling scale() is modifying the ghost
values (which prevents the compress step). You should do it only on the
locally owned entries. What you might want to check out are the new `
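The "only locally" advice can be sketched in plain C++ like this. It models a partitioned vector in which the range [owned_begin, owned_end) is locally owned and everything else is a ghost copy owned by another process; the names and the flat-array layout are illustrative, not the deal.II `LinearAlgebra::distributed::Vector` interface:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Scale only the locally owned range [owned_begin, owned_end) of a
// partitioned vector. Ghost entries are left untouched so that a later
// compress()/ghost-update step is not invalidated by locally modified
// ghost copies.
void scale_locally_owned(std::vector<double> &vec,
                         const std::vector<double> &factors,
                         std::size_t owned_begin,
                         std::size_t owned_end)
{
  for (std::size_t i = owned_begin; i < owned_end; ++i)
    vec[i] *= factors[i];
}
```

Scaling the ghost range as well is what the assert mentioned above guards against: each process must only write entries it owns.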
In step-48 the inverse mass matrix is applied by moving the inverse data
into a vector and applying the function scale(), i.e. as in the following
code:
data.cell_loop(&LaplaceOperator::local_apply_cell,
               this,
               dst,
               src,
Dear Maxi,
I am not an expert in deal.II. I have a project that is very similar to
yours, and I was wondering if it would be possible to contact you.
Best,
On Sat, 18 Jan 2020, 14:41 'Maxi Miller' via deal.II User Group, <
dealii@googlegroups.com> wrote:
> I tried to implement a solver for the non
I tried to implement a solver for the non-linear diffusion equation
(\partial_t u = \nabla \cdot (u \nabla u) - f) using the TimeStepping class,
the EmbeddedExplicitRungeKutta method and (for assembly) the matrix-free
approach. For initial tests I used the linear heat equation with the
solution u = exp(-
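A common sanity check for such a setup is the linear heat equation, whose exact solution decays exponentially in time. The following is a minimal plain C++ sketch (finite differences and forward Euler rather than the deal.II TimeStepping/EmbeddedExplicitRungeKutta machinery) of \partial_t u = \partial_{xx} u on [0,1] with homogeneous Dirichlet boundaries, for which u(x,t) = exp(-\pi^2 t) sin(\pi x):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Advance u_t = u_xx on [0,1] with homogeneous Dirichlet boundaries using
// second-order central differences in space and forward Euler in time
// (stable for dt <= h^2 / 2). Boundary entries are never updated.
std::vector<double> step_heat(std::vector<double> u, double h, double dt,
                              unsigned n_steps)
{
  const std::size_t n = u.size();
  std::vector<double> u_new(u);
  for (unsigned s = 0; s < n_steps; ++s)
    {
      for (std::size_t i = 1; i + 1 < n; ++i)
        u_new[i] = u[i] + dt * (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (h * h);
      u = u_new;
    }
  return u;
}
```

Starting from u_i = sin(pi x_i), the numerical solution after time t should match exp(-pi^2 t) sin(pi x) to within the discretization error, which makes a convenient regression test before moving to the non-linear problem.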