Dear Pai,
I'm very interested in solving a problem with characteristics very similar
to yours. Consequently, I ran your modified step-17.cc code for 30*30*30
cells, and for me it takes 6.43s with CG with -np 2, instead of your 0.39s.
Do you have any idea where this huge speed-up might come from?
Hi Uwe,
Thank you very much for your information!
Best,
Pai
Hi Wolfgang,
> In your TimerOutput object, do you output wall time or CPU time? Do you
> initialize it with the MPI communicator object?
I output *wall time*.
And *I initialized it* with "computing_timer(mpi_communicator, pcout,
TimerOutput::summary, TimerOutput::wall_times)".
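Spelled out, that corresponds to a constructor initializer list roughly like
the following (a sketch; the member declarations are assumed to follow the
usual step-40 layout):

  // members, declared elsewhere in the class:
  //   MPI_Comm           mpi_communicator;
  //   ConditionalOStream pcout;
  //   TimerOutput        computing_timer;

  // in the constructor's initializer list:
  computing_timer (mpi_communicator,
                   pcout,
                   TimerOutput::summary,     // print a summary table at destruction
                   TimerOutput::wall_times)  // report wall times, not CPU times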
Best,
Pai
--
*I just added timing and modified the meshing code (to generate 30*30*30
cells), and nothing else.*
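(The meshing call itself is not shown above; a minimal sketch, assuming the
30*30*30 mesh is built with GridGenerator::subdivided_hyper_cube:)

  // 30 subdivisions per coordinate direction -> 30*30*30 cells in 3d
  GridGenerator::subdivided_hyper_cube (triangulation, 30);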
I added timing to the member function solve() like the following, and
changed nothing else.
template <int dim>
unsigned int ElasticProblem<dim>::solve ()
{
  TimerOutput::Scope t(computing_timer, "solve"); // section name assumed
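(TimerOutput::Scope starts the "solve" section when it is constructed and
stops it when the object goes out of scope at the end of the function, so
the whole solver run is timed. Assuming the rest of the function is
unchanged from step-17, the complete function would look roughly like:)

  template <int dim>
  unsigned int ElasticProblem<dim>::solve ()
  {
    TimerOutput::Scope t(computing_timer, "solve");

    SolverControl solver_control (solution.size(),
                                  1e-8 * system_rhs.l2_norm());
    PETScWrappers::SolverCG cg (solver_control, mpi_communicator);

    // block Jacobi: each process works on its own diagonal block
    PETScWrappers::PreconditionBlockJacobi preconditioner (system_matrix);

    cg.solve (system_matrix, solution, system_rhs, preconditioner);

    // ... distribute hanging node constraints etc., exactly as in step-17 ...

    return solver_control.last_step ();
  }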
Hi Wolfgang,
Thank you so much for your detailed explanations. Now I have a general
idea of what all these things are and what I should do for my problem (a
multiple load case problem). I really appreciate your kind help.
BlockJacobi builds an LU decomposition of that part of the matrix that each
processor stores locally (its diagonal block), ignoring the couplings
between processors.
On 08/31/2018 06:21 PM, Pai Liu wrote:
In TrilinosWrappers:: I found two AMG preconditioners: PreconditionAMGMueLu
and PreconditionAMG. What is the general difference between these two
preconditioners? For a linear elasticity problem, which one should I use?
Pai -- PreconditionAMG uses the ML package of Trilinos, whereas
PreconditionAMGMueLu uses the newer MueLu package; both provide algebraic
multigrid preconditioners.
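(For a vector-valued elasticity problem, either class works better if it is
told about the constant modes, i.e. the translational near-null space, of
the operator. A minimal sketch for PreconditionAMG; the dof_handler and
system_matrix names are assumed from the usual program layout:)

  TrilinosWrappers::PreconditionAMG::AdditionalData amg_data;
  amg_data.elliptic = true;

  // one constant mode per displacement component:
  std::vector<std::vector<bool>> constant_modes;
  DoFTools::extract_constant_modes (dof_handler,
                                    ComponentMask (), // all components
                                    constant_modes);
  amg_data.constant_modes = constant_modes;

  TrilinosWrappers::PreconditionAMG preconditioner;
  preconditioner.initialize (system_matrix, amg_data);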
Hi Bruno,
Your explanation makes things clear to me. Thanks.
In TrilinosWrappers:: I found two AMG
preconditioners: PreconditionAMGMueLu and PreconditionAMG. What is the
general difference between these two preconditioners? For a linear
elasticity problem, which one should I use?
Best,
Pai
Hi Wolfgang,
> I have to admit that I find the CG times too small to be credible. The
> last case should have about 200,000 unknowns. It seems implausible to me
> that you can solve that in 0.4 seconds on 2 processors. What
> preconditioner do you use, and do you include the
> * Try how the run time of both the direct and iterative solvers
>   changes as you increase the number of unknowns. (E.g., start with a
>   10x10x10 mesh, then try a 20x20x20, ... mesh.)
As suggested, I have added more tests, all executed with MPI in parallel.
Pai,
On Thursday, August 30, 2018 at 9:41:50 PM UTC-4, Pai Liu wrote:
>
> *I have another question:*
> *For a problem with millions of unknowns, the same Dirichlet boundary
> conditions, and different right hand sides (e.g. rhs1, rhs2, ..., rhs8):
> how can I speed up the solution process with
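(The usual approach with one matrix and many right hand sides is to
factorize once and then do one cheap back substitution per load case. A
minimal serial sketch with deal.II's SparseDirectUMFPACK; the solution and
rhs vector names are made up:)

  SparseDirectUMFPACK direct_solver;
  direct_solver.initialize (system_matrix); // expensive LU factorization, done once

  // one cheap back substitution per load case:
  direct_solver.vmult (solution_1, rhs_1);
  direct_solver.vmult (solution_2, rhs_2);
  // ... and so on through rhs_8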
Hi Wolfgang,
> Interesting experiments would be:
> * Try what happens if you run this in parallel

The timing in my question was already obtained with the command "mpirun
-np 6 ./step-17" (as mentioned in my question).

> * Try how the run time of both the direct and iterative solvers
On 08/29/2018 09:55 PM, Pai Liu wrote:
I learned from Prof. Wolfgang's lectures that a parallel direct solver can
be really competitive when compared with parallel iterative solvers.
So I tried to modify the solver in step-17 into a parallel direct solver,
and I found it *extremely slow*.
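(For reference, one typical way to swap step-17's CG solver for a parallel
direct solver is PETSc's MUMPS interface; the following is only a sketch of
that approach, not necessarily the exact modification used:)

  template <int dim>
  unsigned int ElasticProblem<dim>::solve ()
  {
    // a direct solver needs no iteration tolerance
    SolverControl solver_control;

    PETScWrappers::SparseDirectMUMPS solver (solver_control, mpi_communicator);
    solver.solve (system_matrix, solution, system_rhs);

    // ... distribute hanging node constraints etc., as in step-17 ...

    // an iteration count is not meaningful for a direct solver
    return solver_control.last_step ();
  }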