Please always use reply-all so that your messages go to the list.
This is standard mailing list etiquette. It is important to preserve
threading for people who find this discussion later, and so that we do
not waste our time re-answering the same questions that have already
been answered in [...]
On 29 Jul 2014, at 13:37, Jed Brown j...@jedbrown.org wrote:
> Please always use reply-all so that your messages go to the list.

Sorry, fat-fingered the buttons.
Lawrence Mitchell lawrence.mitch...@imperial.ac.uk writes:
> On 28 Jul 2014, at 23:27, Jed Brown j...@jedbrown.org wrote:
>> Lawrence Mitchell lawrence.mitch...@imperial.ac.uk writes:
>> [...]
>
> So my coarse space is spanned by the fine one, so I copy coarse dofs to the
> corresponding fine ones and then linearly interpolate to get the coefficient
> value at the missing fine dofs.

Good, and is restriction the transpose?

Some [...]
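
For concreteness, this kind of embedding prolongation, sketched for P1 on a
uniformly refined 1D mesh (plain C; the function name and data layout are
illustrative only, not Lawrence's actual code):

    /* Prolong a coarse P1 vector onto a uniformly refined 1D grid:
       coarse dofs are copied to the coincident fine dofs (injection),
       and each new midpoint dof is the average of its two neighbours. */
    void prolong_p1_1d(const double *uc, int nc, /* nc coarse dofs        */
                       double *uf)               /* 2*nc - 1 fine dofs    */
    {
      for (int i = 0; i < nc; i++)     uf[2*i]     = uc[i];
      for (int i = 0; i < nc - 1; i++) uf[2*i + 1] = 0.5*(uc[i] + uc[i+1]);
    }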
On 29/07/14 14:35, Jed Brown wrote:
> Lawrence Mitchell lawrence.mitch...@imperial.ac.uk writes:
>> So my coarse space is spanned by the fine one, so I copy coarse dofs to the
>> corresponding fine ones and then linearly interpolate to get the coefficient
>> value at the missing fine dofs.
>
> Good, [...]
Lawrence Mitchell lawrence.mitch...@imperial.ac.uk writes:
> No, I'm L2-projecting (with mass-lumping) for the restriction. So if I
> weren't lumping, I think this is the dual of the prolongation.

A true L2 projection is a dense operation (involves the inverse mass
matrix). But here, we're [...]
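
For reference, the algebra under discussion, in my own notation (a sketch:
P is the prolongation, M_f and M_c the fine and coarse mass matrices). The
true L2-projection restriction solves

    M_c u_c = P^T M_f u_f,   i.e.   R = M_c^{-1} P^T M_f,

which is a dense operation because M_c^{-1} is dense. Lumping replaces M_c
by the diagonal matrix diag(M_c 1) of its row sums, so the lumped
restriction diag(M_c 1)^{-1} P^T M_f stays sparse.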
On 29 Jul 2014, at 16:58, Jed Brown j...@jedbrown.org wrote:
> Lawrence Mitchell lawrence.mitch...@imperial.ac.uk writes:
>> No, I'm L2-projecting (with mass-lumping) for the restriction. So if I
>> weren't lumping, I think this is the dual of the prolongation.
>
> A true L2 projection is a dense [...]
Lawrence Mitchell lawrence.mitch...@imperial.ac.uk writes:
> So my approach was to transfer using projection and then use Riesz
> representation to get the residual from the dual space back into the
> primal space so I can apply the operator at the next level. Is there
> an obvious reason why this [...]
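
Again in my notation (a sketch of what I understand is being described):
the restricted residual r_c = R r lives in the dual space, and the Riesz
map back to the primal space finds the representer w_c by solving

    M_c w_c = r_c

with the coarse mass matrix (or its lumped diagonal), so that the coarse
operator can then be applied to w_c on the next level.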
Lawrence Mitchell lawrence.mitch...@imperial.ac.uk writes:
> Bog-standard P1 on a pretty much regularly meshed square domain (i.e. no
> reentrant corners or bad elements).

What interpolation is being used? The finite-element embedding should
work well.

Is there something special about the [...]
On 25 Jul 2014, at 21:28, Jed Brown j...@jedbrown.org wrote:
> Sorry about not following up. I also find these results peculiar.
>
> Lawrence Mitchell lawrence.mitch...@imperial.ac.uk writes:
>> So I'm sort of none the wiser. I'm a little bit at a loss as to why
>> this occurs, but either switching [...]
Sorry about not following up. I also find these results peculiar.

Lawrence Mitchell lawrence.mitch...@imperial.ac.uk writes:
> So I'm sort of none the wiser. I'm a little bit at a loss as to why
> this occurs, but either switching to Richardson+SOR or Cheby/SOR with
> more than one SOR sweep [...]
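
Spelled out as options (assuming the standard PCMG prefixes;
-mg_levels_pc_sor_lits is PCSOR's local-sweep count), those two working
configurations would be something like:

    -pc_type mg -mg_levels_ksp_type richardson -mg_levels_pc_type sor

    -pc_type mg -mg_levels_ksp_type chebyshev -mg_levels_pc_type sor
        -mg_levels_pc_sor_lits 2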
On 21 Jul 2014, at 18:29, Jed Brown j...@jedbrown.org wrote:
> Lawrence Mitchell lawrence.mitch...@imperial.ac.uk writes:
>> Below I show output from a run on 1 process and then two (along with
>> ksp_view) for the following options:
>>
>> -pc_type mg -ksp_rtol 1e-8 -ksp_max_it 6 -pc_mg_levels 2 [...]
Hello all,

I'm implementing a multigrid solver using PCMG, starting with a simple Poisson
equation (with strong boundary conditions) to ensure I'm doing things right.
Everything works fine in serial, but when running on two processes, with the
default chebyshev smoother, convergence goes to [...]
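
A minimal sketch of a two-level PCMG setup of the kind being described
(PETSc C API circa 3.5; A is the fine-grid operator and P the
coarse-to-fine interpolation, both assumed to be built already -- this is
illustrative, not Lawrence's code):

    #include <petscksp.h>

    PetscErrorCode solve_two_level_mg(Mat A, Mat P, Vec b, Vec x)
    {
      KSP            ksp;
      PC             pc;
      PetscErrorCode ierr;

      ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
      ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
      ierr = PCSetType(pc,PCMG);CHKERRQ(ierr);
      ierr = PCMGSetLevels(pc,2,NULL);CHKERRQ(ierr);
      /* level 1 is the fine level; build the coarse operator as P^T A P */
      ierr = PCMGSetInterpolation(pc,1,P);CHKERRQ(ierr);
      ierr = PCMGSetGalerkin(pc,PETSC_TRUE);CHKERRQ(ierr);
      /* smoothers and coarse solve come from the options database */
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
      ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
      ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
      return 0;
    }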
Hi Lawrence,

Hmm, this sounds odd. The convergence obtained with chebyshev should be
essentially identical in serial and parallel when using a jacobi
preconditioner.

1) How did you configure the coarse grid solver in the serial and parallel
test? Are they consistent?

2) Does using one level with [...]
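
One way to make the coarse solve consistent between the serial and parallel
runs (a sketch using stock options; PCREDUNDANT gathers the coarse problem
and performs the same direct solve on every process):

    -mg_coarse_ksp_type preonly -mg_coarse_pc_type redundant

Running both cases with -ksp_view then confirms the two solver
configurations actually match.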
> On 21 Jul 2014, at 11:50, Dave May dave.mayhe...@gmail.com wrote:
>> Hi Lawrence,
>>
>> Hmm, this sounds odd. The convergence obtained with chebyshev should be
>> essentially identical in serial and parallel when using a jacobi
>> preconditioner.
>
> So I was maybe a bit unclear in my previous mail:
>
> If I run with
>
>     -pc_type mg -mg_levels_ksp_type richardson -mg_levels_pc_type jacobi
>     -mg_levels_ksp_max_it 2
>
> then I get identical convergence in serial and parallel.

Good. That's the correct result.

> If, however, I run with
>
>     -pc_type mg -mg_levels_ksp_type chebyshev -mg_levels_pc_type sor [...]
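
A note on why this particular combination is sensitive (my gloss, not from
the thread): PETSc's SOR is processor-local, so in parallel PCSOR acts like
block Jacobi with SOR inside each subdomain, and the spectrum of the
preconditioned operator that Chebyshev's estimated eigenvalue bounds target
shifts with the partition. One quick experiment is to pin the Chebyshev
bounds by hand (the numbers here are placeholders):

    -mg_levels_ksp_chebyshev_eigenvalues 0.1,1.1

If the serial and parallel histories then track each other more closely,
the difference lies in the eigenvalue estimates rather than in the smoother
itself.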
On 21 Jul 2014, at 12:52, Dave May dave.mayhe...@gmail.com wrote:
>> -pc_type mg -mg_levels_ksp_type richardson -mg_levels_pc_type jacobi
>> -mg_levels_ksp_max_it 2
>>
>> then I get identical convergence in serial and parallel.
>
> Good. That's the correct result.
>
>> If, however, I run with [...]
To follow up,

On 21 Jul 2014, at 13:11, Lawrence Mitchell lawrence.mitch...@imperial.ac.uk wrote:
> On 21 Jul 2014, at 12:52, Dave May dave.mayhe...@gmail.com wrote:
>>> -pc_type mg -mg_levels_ksp_type richardson -mg_levels_pc_type jacobi
>>> -mg_levels_ksp_max_it 2
>>>
>>> then I get identical [...]
Lawrence Mitchell lawrence.mitch...@imperial.ac.uk writes:
> Below I show output from a run on 1 process and then two (along with
> ksp_view) for the following options:
>
>     -pc_type mg -ksp_rtol 1e-8 -ksp_max_it 6 -pc_mg_levels 2
>     -mg_levels_pc_type sor -ksp_monitor
>
> On 1 process:
>
>     0 KSP [...]
Hi Lawrence,

I agree that you shouldn't expect magical things to work when using SOR in
parallel, but I'm a bit surprised you see such variation for Poisson.

Take

    src/ksp/ksp/examples/tutorials/ex28.c

for example. Running with 1 and 16 cores I get very similar convergence
histories:

    mpiexec -n [...]