esses have the same peak memory usage. If it were only process 0
then it wouldn't matter, because with enough processes the overhead
would be negligible.
Best regards,
Michael
On 07.10.21 18:32, Matthew Knepley wrote:
> On Thu, Oct 7, 2021 at 11:59 AM Michael Werner wrote:
(with 4 processes) each process shows a peak memory
usage of 10.8GB
Best regards,
Michael
On 07.10.21 17:55, Barry Smith wrote:
>
>
>> On Oct 7, 2021, at 11:35 AM, Michael Werner wrote:
>>
>> Currently I'm using psutil to query e
internal MPI buffers might explain some blip.
>
>
> Is it possible that we free the memory, but the OS has just not given
> back that memory for use yet? How are you measuring memory usage?
>
> Thanks,
>
> Matt
>
>
> Barry
>
>
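As an aside on the measurement question: besides psutil, the peak resident set size can be read from the Python standard library, which sidesteps differences in how external tools report memory. A minimal sketch (assuming Linux, where `ru_maxrss` is in kilobytes; on macOS it is in bytes):

```python
import resource

# Peak resident set size of this process so far
# (on Linux ru_maxrss is reported in kilobytes)
peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS: {peak_kb / 1024 ** 2:.2f} GB")
```

Note this reports the high-water mark, so it will not go down even if the OS reclaims freed pages, which is one way to distinguish "still allocated" from "was allocated at some point".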
> > On Oct 7, 2021,
the matrix and to explicitly preallocate the
necessary NNZ (with A.setSizes(dim) and A.setPreallocationNNZ(nnz),
respectively) before loading, but that didn't help.
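For illustration, the per-row NNZ counts that a preallocation call expects can be derived from an existing CSR sparsity pattern. A sketch with a hypothetical 3x3 pattern (only the count computation is run here; the commented petsc4py calls mirror the ones mentioned above):

```python
import numpy as np

# Hypothetical CSR row-pointer array of a 3x3 sparsity pattern
indptr = np.array([0, 2, 3, 6])

# Number of nonzeros in each row: consecutive differences of indptr
nnz_per_row = np.diff(indptr)

# With petsc4py this array would then be passed as, e.g.:
#   A.setSizes((3, 3)); A.setPreallocationNNZ(nnz_per_row)
print(nnz_per_row.tolist())  # -> [2, 1, 3]
```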
As mentioned above, I'm using petsc4py together with PETSc-3.16 on a
Linux workstation.
Best regards,
Michael
> manual):
>
> (A-sigma*B)^{-1}*(A+nu*B)x = \theta x
>
> So nu=-sigma is a forbidden value, since then both factors cancel out (I will
> fix the interface so that this is caught).
>
> In your case you should do -eps_target -1 -st_cayley_antishift -1
>
> Jose
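The forbidden value can be checked numerically: with nu = -sigma the numerator A+nu*B equals the denominator A-sigma*B, so the Cayley operator collapses to the identity regardless of A and B. A sketch with random dense matrices standing in for the real problem:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

sigma = -1.0
nu = -sigma  # the forbidden choice: A + nu*B == A - sigma*B

# (A - sigma*B)^{-1} (A + nu*B) collapses to the identity
C = np.linalg.solve(A - sigma * B, A + nu * B)
print(np.allclose(C, np.eye(4)))  # -> True
```

Every vector is then an "eigenvector" with theta = 1, so the transform carries no spectral information, which is why this parameter combination has to be rejected.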
>
>
>
t work with target -1?
> Can you send me the matrices so that I can reproduce the issue?
>
> Jose
>
>
>> On 27 Sep 2019, at 13:11, Michael Werner
>> wrote:
>>
>> Thank you for the link to the paper, it's quite interesting and pretty
>> close to wh
o, it doesn't matter if I'm using exact or inexact solves. Changing
the values of shift and antishift also doesn't change the behaviour. Do
I need to make additional adjustments to get cayley to work?
Best regards,
Michael
On 25.09.19 at 17:21, Jose E. Roman wrote:
>
>> On 25 Sep 2019
-eps_target_real
With sinvert, it is easy to understand how to choose the target, but for
Cayley I'm not sure how to set shift and antishift. What is the
mathematical meaning of the antishift?
Best regards,
Michael Werner
Jed Brown writes:
Michael Werner writes:
>> > It uses unpreconditioned GMRES to estimate spectral
>> > bounds for
>> > the operator before using a Chebyshev smoother.
It is GMRES preconditioned by the diagonal.
Moreover, in the incompressible limit, the co
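This is not PETSc's actual estimator (which is Krylov-based), but the idea of bounding the spectrum of the diagonally preconditioned operator, as a Chebyshev smoother requires, can be sketched with plain power iteration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)     # SPD test matrix
d_inv = 1.0 / np.diag(A)        # Jacobi (diagonal) preconditioner

# Power iteration on D^{-1} A to estimate its largest eigenvalue,
# which a Chebyshev smoother needs as an upper spectral bound
x = rng.standard_normal(n)
for _ in range(50):
    x = d_inv * (A @ x)
    x /= np.linalg.norm(x)
lam_max = x @ (d_inv * (A @ x))
print(f"estimated largest eigenvalue: {lam_max:.3f}")
```

A Krylov estimate (GMRES/Lanczos) gets both ends of the spectrum in far fewer matvecs, which is why the smoother setup uses it instead.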
Matthew Knepley writes:
On Fri, Sep 28, 2018 at 8:13 AM Michael Werner
wrote:
Hello,
I'm having trouble with getting the AMG preconditioners working. I
tried all of them (gamg, ml, hypre-boomeramg), with varying
degrees of "success":
- GAMG:
CMD options: -ksp_rtol 1e-8 -ksp_monitor_true_residual -ksp_max_it
20 -ksp_type fgmres -pc_type gamg -pc_gamg_sym_graph TRUE
convergence for the gd solver? As
far as I know it doesn't use a KSP, so the only way I can think
of to improve convergence would be using a higher quality
preconditioner, right?
Kind regards,
Michael
Jose E. Roman writes:
On 6 Aug 2018, at 14:44, Michael Werner
wrote:
Michael Werner writes:
Hello, I want to use a Davidson-type solver (probably jd) to
find
the eigenvalues with the smallest real part, but so far I'm
struggling to get them to converge. So I was hoping to get some
advice on the various options available for those solvers.
For my test
xplanation for
--
Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)
Institut für Aerodynamik und Strömungstechnik | Bunsenstr. 10 |
37073 Göttingen
Michael Werner
Telefon 0551 709-2627 | Telefax 0551 709-2811 |
michael.wer...@dlr.de
DLR.de
are
problem dependent. Also, you can try GD instead of JD, which is
simpler and often gives better performance. See a detailed
explanation here: https://doi.org/10.1145/2543696
Jose
On 1 Aug 2018, at 10:43, Michael Werner
wrote:
Thanks for the quick reply, your suggestion worked
perfect
onditioner matrix should be exactly the same as
A-sigma*B, otherwise you may get unexpected
results. Davidson-type methods allow using a different
preconditioner.
Jose
On 1 Aug 2018, at 10:11, Michael Werner
wrote:
Hello,
I'm trying to find the smallest eigenvalues of a linear system
created by CFD simulations. To reduce memory requirements, I want
to use a shell matrix (A_Shell) to provide the matrix-vector
product, and a lower-order explicit matrix (P) as
preconditioner. As I'm solving a generalized
necessary.
So now it's possible to simply gather the correct values by their global
ID, pass them to the external code and then scatter the result back to
the parallel vector. Now my code is working as intended.
Thanks for your help!
Kind regards,
Michael Werner
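The gather/scatter pattern described above can be illustrated with a serial toy (hypothetical layout and values; in the real code the transfers would go through PETSc index sets and scatters rather than NumPy fancy indexing):

```python
import numpy as np

full_vector = np.arange(10.0) * 10.0      # stand-in for the global vector
global_ids = np.array([4, 0, 2])          # IDs needed by the external code

gathered = full_vector[global_ids]        # gather values by global ID
result = gathered * 2.0                   # external code acts on them here
full_vector[global_ids] = result          # scatter the results back
print(full_vector[global_ids].tolist())   # -> [80.0, 0.0, 40.0]
```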
On 18.10.2017 at 12:01,
. But this would create a lot of communication
between the different processes and seems quite clunky.
Is there a more elegant way? Is it maybe possible to manually assign the
size of the PETSc subdomains?
Kind regards,
Michael Werner
On 17.10.2017 at 12:31, Matthew Knepley wrote:
On Tue, Oct 17
several contesting instances of the computation on the whole domain.
But maybe that's only because I haven't completely understood how MPI
really works in such cases...
Kind regards,
Michael
On 17.10.2017 at 11:50, Matthew Knepley wrote:
On Tue, Oct 17, 2017 at 5:46 AM, Michael Werner <michael.
I'm not sure what you mean by this question.
The external CFD code, if that was what you referred to, can be run in
parallel.
On 17.10.2017 at 11:11, Matthew Knepley wrote:
On Tue, Oct 17, 2017 at 4:21 AM, Michael Werner <michael.wer...@dlr.de> wrote:
Zampini <stefano.zamp...@gmail.com> wrote:
2017-10-16 10:26 GMT+03:00 Michael Werner <michael.wer...@dlr.de>:
Hello,
I'm having trouble with parallelizing a matrix-free code with PETSc. In
this code, I use an external CFD code to provide the matrix-vector
product for an iterative solver in PETSc. To increase convergence rate,
I'm using an explicitly stored Jacobian matrix to precondition the
solver.
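The architecture described (matrix-free operator from an external matvec, explicit lower-order matrix as preconditioner) can be sketched in serial with SciPy standing in for PETSc. All names here are hypothetical; the real setup would use a PETSc shell Mat and a KSP, and the lambda would call the CFD code:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 100
rng = np.random.default_rng(2)
A_dense = np.diag(np.arange(1.0, n + 1)) + 0.01 * rng.standard_normal((n, n))

# Shell matrix: only a matvec is exposed; in the real setup this
# callback would invoke the external CFD code
A_shell = LinearOperator((n, n), matvec=lambda v: A_dense @ v)

# Lower-order explicit preconditioner, here just the diagonal of A
M = LinearOperator((n, n), matvec=lambda v: v / np.diag(A_dense))

b = rng.standard_normal(n)
x, info = gmres(A_shell, b, M=M)
print(f"converged: {info == 0}")
```

The key point the sketch shows is that the solver only ever needs the two callbacks, never an assembled copy of the shell operator, which is what keeps the memory footprint low.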