I think there is likely a problem somewhere along the line, in the libraries
or in how you are using them. You can run with a small problem, explicitly
form the inner operators, and use direct solvers for the inner inverses to
see what happens to the convergence of the
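The suggestion above — form the inner operators explicitly on a small problem and use direct solvers for the inner inverses — can be sketched as follows with NumPy. This is an illustrative toy saddle-point system, not code from the thread; the names A, B and the sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3

# A small SPD "inner" operator A and a constraint block B (both made up here).
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)          # make A symmetric positive definite
B = rng.standard_normal((m, n))

# Explicitly form the Schur complement S = B A^{-1} B^T using a direct solve,
# instead of approximating A^{-1} inside a Krylov iteration.
S = B @ np.linalg.solve(A, B.T)

# Solve S y = g directly and check the residual: with exact inner solves,
# any remaining convergence trouble must come from S itself.
g = rng.standard_normal(m)
y = np.linalg.solve(S, g)
residual = np.linalg.norm(S @ y - g)
print(residual)
```

With everything formed explicitly and solved directly, the residual is at machine precision, so any stagnation seen in the full iterative setup points at the inexact inner solves or the operators' assembly rather than the outer method.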
> On 11 Nov 2021, at 13:41, Matthew Knepley wrote:
>
> On Thu, Nov 11, 2021 at 3:00 AM Majid Rasouli wrote:
> Dear all,
>
> I'm trying to find a brief explanation about the parallel (MPI)
> implementation of matrix-vector multiplication and matrix-matrix
> multiplication in PETSc. I
The "Sparse Matrix-Matrix Products Executed Through Coloring" paper describes a
sequential algorithm for C = A*B^T.
We do not have a paper discussing the parallel mat-mat product algorithms used
in PETSc.
For source code, you may look at MatMatMultSymbolic_MPIAIJ_MPIAIJ() and
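For context, PETSc's MPIAIJ format stores each process's rows as a "diagonal" block A_d (locally owned columns) and an "off-diagonal" block A_o (ghost columns), so the parallel matvec is y_local = A_d x_local + A_o x_ghost, with the ghost gather (VecScatter) overlapped with the local product. A minimal NumPy sketch that simulates two ranks sequentially — the diagonal/off-diagonal split is the documented layout, but the rank loop itself is only an illustration:

```python
import numpy as np

# Global 4x4 matrix; rows are split across two simulated "ranks" (2 rows each).
A = np.arange(16, dtype=float).reshape(4, 4)
x = np.array([1.0, 2.0, 3.0, 4.0])

ownership = [(0, 2), (2, 4)]   # row/column ranges owned by each simulated rank
y = np.empty(4)

for lo, hi in ownership:
    A_d = A[lo:hi, lo:hi]                          # diagonal block: local columns
    cols = [j for j in range(4) if j < lo or j >= hi]
    A_o = A[lo:hi, cols]                           # off-diagonal block: ghost columns
    x_local = x[lo:hi]
    x_ghost = x[cols]                              # in MPI this arrives via VecScatter
    y[lo:hi] = A_d @ x_local + A_o @ x_ghost       # local part + ghost part

print(np.allclose(y, A @ x))   # the split product matches the global matvec
```

In the real implementation the A_o product is deferred until the ghost values arrive, which is what lets communication overlap with the A_d computation.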
Dear Matt,
I just realized that I used PETSC_COMM_SELF instead of PETSC_COMM_WORLD
while performing the above test. I am sorry for that. After fixing it, I
have the following 3 cases:
1. With the below command line options, where I simply create the spherical
surface mesh, everything is fine.
On Thu, Nov 11, 2021 at 1:17 PM Bhargav Subramanya <
bhargav.subrama...@kaust.edu.sa> wrote:
> Dear Matt,
>
> I just realized that I used PETSC_COMM_SELF instead of PETSC_COMM_WORLD
> while performing the above test. I am sorry for that. After fixing it, I
> have the following 3 cases:
>
> 1.
On Thu, Nov 11, 2021 at 3:25 PM Fande Kong wrote:
> Thanks, Satish
>
> "--with-make-np=1" did help us on MUMPS, but we had new trouble with
> hypre now.
>
> It is hard to understand why "gmake install" even failed.
>
Because HYPRE thinks it is better to use 'ln' than the 'install' script that
Thanks Matt,
I understand completely, the actual error should be
"
ln -s libHYPRE_parcsr_ls-2.20.0.so libHYPRE_parcsr_ls.so
gmake[1]: Leaving directory
`/beegfs1/home/anovak/cardinal/contrib/moose/petsc/arch-moose/externalpackages/git.hypre/src/parcsr_ls'
Error running make; make
On Thu, Nov 11, 2021 at 1:59 PM Matthew Knepley wrote:
> On Thu, Nov 11, 2021 at 3:44 PM Fande Kong wrote:
>
>> Thanks Matt,
>>
>> I understand completely, the actual error should be
>>
>> "
>> ln -s libHYPRE_parcsr_ls-2.20.0.so libHYPRE_parcsr_ls.so
>> gmake[1]: Leaving directory
>>
On Thu, Nov 11, 2021 at 3:44 PM Fande Kong wrote:
> Thanks Matt,
>
> I understand completely, the actual error should be
>
> "
> ln -s libHYPRE_parcsr_ls-2.20.0.so libHYPRE_parcsr_ls.so
> gmake[1]: Leaving directory
>
Dear Matt,
Thanks for the reply.
1. I realized that, for some reason, most command line options don't seem to
work for me. For the simple code that you mentioned (also shown below), the
only command line options with which I have had success are
/home/subrambm/petsc/arch-linux-c-debug/bin/mpiexec -n 2
Dear Barry,
Let S = B A^{-1} B^T be the Schur complement, and let \hat{S} = B diag(A)^{-1} B^T
denote the preconditioner.
I also tried KSPComputeExtremeSingularValues for rtol(A) = rtol(\hat{S}) =
1e-14 as you suggested. However, the results did not change much compared to
rtol(A) = rtol(S) =
Dear all,
I'm trying to find a brief explanation about the parallel (MPI)
implementation of matrix-vector multiplication and matrix-matrix
multiplication in PETSc. I have used them for some experiments for a
research paper and I need to provide a brief explanation about the
implementation of
On Thu, Nov 11, 2021 at 3:00 AM Majid Rasouli wrote:
> Dear all,
>
> I'm trying to find a brief explanation about the parallel (MPI)
> implementation of matrix-vector multiplication and matrix-matrix
> multiplication in PETSc. I have used them for some experiments for a
> research paper and I
On Wed, Nov 10, 2021 at 11:45 PM Eric Chamberland <
eric.chamberl...@giref.ulaval.ca> wrote:
> Hi Matthew,
>
> I have attached the ex44.c example so you can see what we see.
>
> The problem is about the "f" field number, for which I can't find any clue
> in the documentation about what to put there... We