And it looks like you have a well-behaved Laplacian here (an M-matrix), so I
would guess 'richardson' would be faster than 'chebyshev' as the smoother.
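For anyone following along, the smoother on each multigrid level can be
selected with standard PETSc runtime options. A minimal sketch, under the
assumption (taken from later in this thread) that GAMG sits inside the
second split, hence the fieldsplit_1_ prefix:

    -fieldsplit_1_mg_levels_ksp_type richardson
    -fieldsplit_1_mg_levels_pc_type sor

For a plain, non-fieldsplit GAMG solve the same options would just be
-mg_levels_ksp_type richardson and -mg_levels_pc_type sor.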
On Fri, Mar 4, 2016 at 5:04 PM, Mark Adams wrote:
> You seem to have 3 of one type of solve that is given 'square_graph 1':
You seem to have 3 of one type of solve that is given 'square_graph 1':
[0] PCGAMGCoarsen_AGG(): Square Graph on level 1 of 1 to square
This has 9 nnz/row and 44% are zero:
[0] PCGAMGFilterGraph(): 55.7114% nnz after filtering, with threshold
0., 8.79533 nnz ave.
So you want to use a
You're right. This is what I have:
[0] PCSetUp_GAMG(): level 0) N=48000, n data rows=1, n data cols=1,
nnz/row (ave)=9, np=1
[0] PCGAMGFilterGraph(): 55.7114% nnz after filtering, with threshold
0., 8.79533 nnz ave. (N=48000)
[0] PCGAMGCoarsen_AGG(): Square Graph on level 1 of 1 to
Time to solution went from 100 seconds to 30 seconds once I used 10 graphs.
Using 20 graphs started to increase the time slightly.
On Fri, Mar 4, 2016 at 8:35 AM, Justin Chang wrote:
> You're right. This is what I have:
>
> [0] PCSetUp_GAMG(): level 0) N=48000, n data
> On 4 Mar 2016, at 15:24, Justin Chang wrote:
>
> So with -pc_gamg_square_graph 10 I get the following:
Because you're using gamg inside the fieldsplit, I think you need:
-fieldsplit_1_pc_gamg_square_graph 10
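More generally, any of the GAMG options discussed in this thread need that
same prefix when GAMG is attached to a split. A sketch of the combined
options, under the stated assumption that GAMG preconditions split 1 (the
threshold value here is purely illustrative, not a recommendation):

    -fieldsplit_1_pc_type gamg
    -fieldsplit_1_pc_gamg_square_graph 10
    -fieldsplit_1_pc_gamg_threshold 0.02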
> [0] PCSetUp_GAMG(): level 0) N=48000, n data rows=1, n
Mark,
Using "-pc_gamg_square_graph 10" didn't change anything. I used values of
1, 10, 100, and 1000 and the performance seemed unaffected.
Changing -pc_gamg_threshold to 0.8 did decrease the wall-clock time, but it
required more iterations.
I am not really sure how I go about
You have a very sparse 3D problem, with 9 non-zeros per row. It is
coarsening very slowly and creating huge coarse grids, which are expensive
to construct. The superlinear speedup is most likely from cache effects.
First try with:
-pc_gamg_square_graph 10
ML must have some AI in there to do
On Wed, Mar 2, 2016 at 5:28 PM, Justin Chang wrote:
> Dear all,
>
> Using the Firedrake project, I am solving this simple mixed Poisson
> problem:
>
> mesh = UnitCubeMesh(40,40,40)
> V = FunctionSpace(mesh,"RT",1)
> Q = FunctionSpace(mesh,"DG",0)
> W = V*Q
>
> v, p =
On 02/03/16 22:28, Justin Chang wrote:
...
> Down solver (pre-smoother) on level 3
>
> KSP Object: (solver_fieldsplit_1_mg_levels_3_)
> linear system matrix = precond matrix:
...
> Mat Object: 1 MPI processes
>
>
On Wed, Mar 2, 2016 at 7:15 PM, Justin Chang wrote:
> Barry,
>
> Attached are the log_summary output for each preconditioner.
>
MatPtAP takes all the time. It looks like there is no coarsening at all at
the first level. Mark, can you see what is going on here?
Matt
>
Justin,
Do you have the -log_summary output for these runs?
Barry
> On Mar 2, 2016, at 4:28 PM, Justin Chang wrote:
>
> Dear all,
>
> Using the Firedrake project, I am solving this simple mixed Poisson problem:
>
> mesh = UnitCubeMesh(40,40,40)
> V =
Dear all,
Using the Firedrake project, I am solving this simple mixed Poisson problem:
mesh = UnitCubeMesh(40,40,40)
V = FunctionSpace(mesh,"RT",1)
Q = FunctionSpace(mesh,"DG",0)
W = V*Q
v, p = TrialFunctions(W)
w, q = TestFunctions(W)
f = Function(Q)
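The snippet is cut off here by the archive. For readers reconstructing the
setup, a typical mixed Poisson formulation in Firedrake continues roughly
as follows. This is a sketch, not the original email's code: the bilinear
form is the standard RT/DG mixed discretization (sign conventions vary),
and the solver_parameters dictionary merely illustrates the
fieldsplit/GAMG options discussed above.

    # f would be assigned or interpolated here (the source term)
    a = (dot(v, w) + div(w)*p + div(v)*q) * dx
    L = -f*q*dx
    u = Function(W)
    solve(a == L, u, solver_parameters={
        'ksp_type': 'gmres',
        'pc_type': 'fieldsplit',
        'pc_fieldsplit_type': 'schur',
        'fieldsplit_1_pc_type': 'gamg',
        'fieldsplit_1_pc_gamg_square_graph': 10})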