[petsc-users] Help building Zoltan with gcc11

2022-09-21 Thread Lucas Banting
Hello all,

I am having a problem building Zoltan with the "--download-Zoltan" configure option 
using gcc11 (the default version on Ubuntu 22.04).

I have opened an issue on the PETSc GitLab (which also has my configure.log):
https://gitlab.com/petsc/petsc/-/issues/1248

The error is:
Error: Type mismatch between actual argument at (1) and actual argument at (2) 
(CHARACTER(128)/INTEGER(4)).

I believe this is related to gcc10 changing gfortran's behaviour so that argument 
type mismatches are treated as errors rather than warnings.
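
In case it is useful to others, one workaround I may try (untested, and assuming 
configure passes the flag through to the Zoltan build) is gfortran's 
-fallow-argument-mismatch flag, which downgrades this error back to a warning:

./configure --download-Zoltan --FFLAGS="-fallow-argument-mismatch" [...rest of my usual options...]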

Lucas






[petsc-users] PCApplySymmetricRight for PCBJACOBI

2022-09-21 Thread Abylay Zhumekenov
Hello,

I have been playing around with a block Jacobi preconditioner (PCBJACOBI)
with an incomplete Cholesky (PCICC) sub-preconditioner. Although you cannot
set KSPSetPCSide to PC_SYMMETRIC for PCBJACOBI, you can still do it for
PCICC. I was surprised to find that PCApplySymmetricLeft is properly
applied to a vector. However, PCApplySymmetricRight throws an error:
...
[0]PETSC ERROR: Object is in wrong state
[0]PETSC ERROR: VecPlaceArray() was already called on this vector,
without a call to VecResetArray()
...
[0]PETSC ERROR: #3 PCApplySymmetricLeft_BJacobi_Singleblock() at
/home/abylay/KAUST/Libraries/C/petsc/src/ksp/pc/impls/bjacobi/bjacobi.c:660
[0]PETSC ERROR: #4 PCApplySymmetricLeft() at
/home/abylay/KAUST/Libraries/C/petsc/src/ksp/pc/interface/precon.c:532
...
The problem goes away if I add:
...
PetscCall(VecResetArray(bjac->x));
PetscCall(VecResetArray(bjac->y));
...
at line 698 in the source file bjacobi.c. I don't know whether this is a bug or how
I should report it; I just wanted to let someone know in case it is.
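
For reference, a rough sketch of the kind of setup that hits this, run with
-sub_pc_type icc (the assembled parallel Mat A and Vec x come from elsewhere;
error checking trimmed):

PC  pc;
Vec y;

PetscCall(PCCreate(PETSC_COMM_WORLD, &pc));
PetscCall(PCSetOperators(pc, A, A));
PetscCall(PCSetType(pc, PCBJACOBI));
/* pick up -sub_pc_type icc for the local block */
PetscCall(PCSetFromOptions(pc));
PetscCall(PCSetUp(pc));

PetscCall(VecDuplicate(x, &y));
PetscCall(PCApplySymmetricLeft(pc, x, y));   /* works */
PetscCall(PCApplySymmetricRight(pc, x, y));  /* fails with the error above */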
Thanks.

Abylay Zhumekenov


Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange

2022-09-21 Thread Matthew Knepley
On Wed, Sep 21, 2022 at 10:35 AM feng wang  wrote:

> Hi Jose,
>
> For your 2nd suggestion on halo exchange, I get the idea and roughly know
> how to do it, but there are some implementation details which I am not
> quite sure.
>
> If I understand it correctly, in MatMult(Mat m, Vec x, Vec y), Vec x is
> a normal parallel vector and it does not contain halo values. Suppose I
> create an auxiliary ghost vector x_g, then I assign the values of x
> to x_g. The values of the halo for each partition will not be assigned
> at this stage.
>
> But if I call VecGhostUpdateBegin/End(x_g, INSERT_VALUES,
> SCATTER_FORWARD), this will fill the values of the halo cells of x_g for
> each partition. Then x_g has local and halo cells assigned correctly
> and I can use x_g to do my computation. Is this what you mean?
>

Yes
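
For concreteness, a rough sketch of what that MatMult() could look like (MyCtx,
the stored x_g, and the local computation are just placeholders; error handling
trimmed):

typedef struct {
  Vec x_g;   /* ghost vector created once, e.g. with VecCreateGhostBlock() */
  /* ... whatever else the application needs ... */
} MyCtx;

PetscErrorCode MyMatMult(Mat m, Vec x, Vec y)
{
  MyCtx *ctx;
  Vec    xloc;

  PetscFunctionBeginUser;
  PetscCall(MatShellGetContext(m, &ctx));
  /* copy the owned entries of the regular parallel vector into the ghosted one */
  PetscCall(VecCopy(x, ctx->x_g));
  /* halo exchange: owners push their values into the ghost slots */
  PetscCall(VecGhostUpdateBegin(ctx->x_g, INSERT_VALUES, SCATTER_FORWARD));
  PetscCall(VecGhostUpdateEnd(ctx->x_g, INSERT_VALUES, SCATTER_FORWARD));
  /* owned + halo entries are contiguous in the local form */
  PetscCall(VecGhostGetLocalForm(ctx->x_g, &xloc));
  /* ... matrix-free computation reading xloc, writing the owned part of y ... */
  PetscCall(VecGhostRestoreLocalForm(ctx->x_g, &xloc));
  PetscFunctionReturn(0);
}

The function would be registered on the shell matrix with
MatShellSetOperation(m, MATOP_MULT, (void (*)(void)) MyMatMult).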

  Matt


> Thanks,
> Feng
>
> --
> From: Jose E. Roman 
> Sent: 21 September 2022 13:07
> To: feng wang 
> Cc: Matthew Knepley ; petsc-users@mcs.anl.gov <
> petsc-users@mcs.anl.gov>
> Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange
>
>
>
> > El 21 sept 2022, a las 14:47, feng wang 
> escribió:
> >
> > Thanks Jose, I will try this and will come back to this thread if I have
> any issue.
> >
> > Besides, for EPSGetEigenpair, I guess each rank gets its portion of the
> eigenvector, and I need to put them together afterwards?
>
> Eigenvectors are stored in parallel vectors, which are used in subsequent
> parallel computation in most applications. If for some reason you need to
> gather them in a single MPI process you can use e.g.
> VecScatterCreateToZero()
>
> >
> > Thanks,
> > Feng
> >
> > From: Jose E. Roman 
> > Sent: 21 September 2022 12:34
> > To: feng wang 
> > Cc: Matthew Knepley ; petsc-users@mcs.anl.gov <
> petsc-users@mcs.anl.gov>
> > Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange
> >
> > If you define the MATOP_CREATE_VECS operation in your shell matrix so
> that it creates a ghost vector, then all vectors within EPS will be ghost
> vectors, including those that are received as arguments of MatMult(). Not
> sure if this will work.
> >
> > A simpler solution is that you store a ghost vector in the context of
> your shell matrix, and then in MatMult() you receive a regular parallel
> vector x, then update the ghost points using the auxiliary ghost vector, do
> the computation and store the result in the regular parallel vector y.
> >
> > Jose
> >
> >
> > > El 21 sept 2022, a las 14:09, feng wang 
> escribió:
> > >
> > > Thanks for your reply.
> > >
> > > For GMRES, I create a ghost vector and give it to KSPSolve. For Slepc,
> it only takes the shell matrix for EPSSetOperators. Suppose the shell
> matrix of the eigensolver defines MatMult(Mat m ,Vec x, Vec y), how does it
> know Vec x is  a ghost vector and how many ghost cells there are?
> > >
> > > Thanks,
> > > Feng
> > > From: Matthew Knepley 
> > > Sent: 21 September 2022 11:58
> > > To: feng wang 
> > > Cc: petsc-users@mcs.anl.gov 
> > > Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange
> > >
> > > On Wed, Sep 21, 2022 at 7:41 AM feng wang 
> wrote:
> > > Hello,
> > >
> > > I am using Slepc with a shell matrix. The sequential version seems
> working and now I am trying to make it run in parallel.
> > >
> > > The partition of the domain is done, I am not sure how to do the halo
> exchange in the shell matrix in Slepc. I have a parallel version of
> matrix-free GMRES in my code with Petsc. I was using VecCreateGhostBlock to
> create vector with ghost cells, and then used  VecGhostUpdateBegin/End for
> the halo exchange in the shell matrix, would this be the same for Slepc?
> > >
> > > That will be enough for the MatMult(). You would also have to use a
> SLEPc EPS that only needed MatMult().
> > >
> > >   Thanks,
> > >
> > >  Matt
> > >
> > > Thanks,
> > > Feng
> > >
> > >
> > >
> > >
> > > --
> > > What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> > > -- Norbert Wiener
> > >
> > > https://www.cse.buffalo.edu/~knepley/
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange

2022-09-21 Thread feng wang
Hi Jose,

For your 2nd suggestion on halo exchange, I get the idea and roughly know how 
to do it, but there are some implementation details which I am not quite sure.

If I understand it correctly, in MatMult(Mat m, Vec x, Vec y), Vec x is a 
normal parallel vector and it does not contain halo values. Suppose I create an 
auxiliary ghost vector x_g, then I assign the values of x to x_g. The values of 
the halo for each partition will not be assigned at this stage.

But if I call VecGhostUpdateBegin/End(x_g, INSERT_VALUES, SCATTER_FORWARD), 
this will fill the values of the halo cells of x_g for each partition. Then x_g 
has local and halo cells assigned correctly and I can use x_g to do my 
computation. Is this what you mean?

Thanks,
Feng


From: Jose E. Roman 
Sent: 21 September 2022 13:07
To: feng wang 
Cc: Matthew Knepley ; petsc-users@mcs.anl.gov 

Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange



> El 21 sept 2022, a las 14:47, feng wang  escribió:
>
> Thanks Jose, I will try this and will come back to this thread if I have any 
> issue.
>
> Besides, for EPSGetEigenpair, I guess each rank gets its portion of the 
> eigenvector, and I need to put them together afterwards?

Eigenvectors are stored in parallel vectors, which are used in subsequent 
parallel computation in most applications. If for some reason you need to 
gather them in a single MPI process you can use e.g. VecScatterCreateToZero()

>
> Thanks,
> Feng
>
> From: Jose E. Roman 
> Sent: 21 September 2022 12:34
> To: feng wang 
> Cc: Matthew Knepley ; petsc-users@mcs.anl.gov 
> 
> Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange
>
> If you define the MATOP_CREATE_VECS operation in your shell matrix so that it 
> creates a ghost vector, then all vectors within EPS will be ghost vectors, 
> including those that are received as arguments of MatMult(). Not sure if this 
> will work.
>
> A simpler solution is that you store a ghost vector in the context of your 
> shell matrix, and then in MatMult() you receive a regular parallel vector x, 
> then update the ghost points using the auxiliary ghost vector, do the 
> computation and store the result in the regular parallel vector y.
>
> Jose
>
>
> > El 21 sept 2022, a las 14:09, feng wang  escribió:
> >
> > Thanks for your reply.
> >
> > For GMRES, I create a ghost vector and give it to KSPSolve. For Slepc, it 
> > only takes the shell matrix for EPSSetOperators. Suppose the shell matrix 
> > of the eigensolver defines MatMult(Mat m ,Vec x, Vec y), how does it know 
> > Vec x is  a ghost vector and how many ghost cells there are?
> >
> > Thanks,
> > Feng
> > From: Matthew Knepley 
> > Sent: 21 September 2022 11:58
> > To: feng wang 
> > Cc: petsc-users@mcs.anl.gov 
> > Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange
> >
> > On Wed, Sep 21, 2022 at 7:41 AM feng wang  wrote:
> > Hello,
> >
> > I am using Slepc with a shell matrix. The sequential version seems working 
> > and now I am trying to make it run in parallel.
> >
> > The partition of the domain is done, I am not sure how to do the halo 
> > exchange in the shell matrix in Slepc. I have a parallel version of 
> > matrix-free GMRES in my code with Petsc. I was using VecCreateGhostBlock to 
> > create vector with ghost cells, and then used  VecGhostUpdateBegin/End for 
> > the halo exchange in the shell matrix, would this be the same for Slepc?
> >
> > That will be enough for the MatMult(). You would also have to use a SLEPc 
> > EPS that only needed MatMult().
> >
> >   Thanks,
> >
> >  Matt
> >
> > Thanks,
> > Feng
> >
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their 
> > experiments is infinitely more interesting than any results to which their 
> > experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/



Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange

2022-09-21 Thread Jose E. Roman



> El 21 sept 2022, a las 14:47, feng wang  escribió:
> 
> Thanks Jose, I will try this and will come back to this thread if I have any 
> issue.
> 
> Besides, for EPSGetEigenpair, I guess each rank gets its portion of the 
> eigenvector, and I need to put them together afterwards?

Eigenvectors are stored in parallel vectors, which are used in subsequent 
parallel computation in most applications. If for some reason you need to 
gather them in a single MPI process you can use e.g. VecScatterCreateToZero()
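
Something along these lines, assuming eps has been solved and A is the (shell)
matrix passed to EPSSetOperators (error checking omitted):

Vec         xr, xzero;
VecScatter  scat;
PetscScalar kr, ki;

PetscCall(MatCreateVecs(A, &xr, NULL));
PetscCall(EPSGetEigenpair(eps, 0, &kr, &ki, xr, NULL));
/* gather the distributed eigenvector onto rank 0 */
PetscCall(VecScatterCreateToZero(xr, &scat, &xzero));
PetscCall(VecScatterBegin(scat, xr, xzero, INSERT_VALUES, SCATTER_FORWARD));
PetscCall(VecScatterEnd(scat, xr, xzero, INSERT_VALUES, SCATTER_FORWARD));
/* rank 0 now holds the whole eigenvector in xzero */
PetscCall(VecScatterDestroy(&scat));
PetscCall(VecDestroy(&xzero));
PetscCall(VecDestroy(&xr));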

> 
> Thanks,
> Feng
> 
> From: Jose E. Roman 
> Sent: 21 September 2022 12:34
> To: feng wang 
> Cc: Matthew Knepley ; petsc-users@mcs.anl.gov 
> 
> Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange
>  
> If you define the MATOP_CREATE_VECS operation in your shell matrix so that it 
> creates a ghost vector, then all vectors within EPS will be ghost vectors, 
> including those that are received as arguments of MatMult(). Not sure if this 
> will work.
> 
> A simpler solution is that you store a ghost vector in the context of your 
> shell matrix, and then in MatMult() you receive a regular parallel vector x, 
> then update the ghost points using the auxiliary ghost vector, do the 
> computation and store the result in the regular parallel vector y.
> 
> Jose
> 
> 
> > El 21 sept 2022, a las 14:09, feng wang  escribió:
> > 
> > Thanks for your reply. 
> > 
> > For GMRES, I create a ghost vector and give it to KSPSolve. For Slepc, it 
> > only takes the shell matrix for EPSSetOperators. Suppose the shell matrix 
> > of the eigensolver defines MatMult(Mat m ,Vec x, Vec y), how does it know 
> > Vec x is  a ghost vector and how many ghost cells there are?
> > 
> > Thanks,
> > Feng
> > From: Matthew Knepley 
> > Sent: 21 September 2022 11:58
> > To: feng wang 
> > Cc: petsc-users@mcs.anl.gov 
> > Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange
> >  
> > On Wed, Sep 21, 2022 at 7:41 AM feng wang  wrote:
> > Hello,
> > 
> > I am using Slepc with a shell matrix. The sequential version seems working 
> > and now I am trying to make it run in parallel.
> > 
> > The partition of the domain is done, I am not sure how to do the halo 
> > exchange in the shell matrix in Slepc. I have a parallel version of 
> > matrix-free GMRES in my code with Petsc. I was using VecCreateGhostBlock to 
> > create vector with ghost cells, and then used  VecGhostUpdateBegin/End for 
> > the halo exchange in the shell matrix, would this be the same for Slepc?
> > 
> > That will be enough for the MatMult(). You would also have to use a SLEPc 
> > EPS that only needed MatMult().
> > 
> >   Thanks,
> > 
> >  Matt
> >  
> > Thanks,
> > Feng 
> > 
> > 
> > 
> > 
> > -- 
> > What most experimenters take for granted before they begin their 
> > experiments is infinitely more interesting than any results to which their 
> > experiments lead.
> > -- Norbert Wiener
> > 
> > https://www.cse.buffalo.edu/~knepley/



Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange

2022-09-21 Thread feng wang
Thanks Jose, I will try this and will come back to this thread if I have any 
issue.

Besides, for EPSGetEigenpair, I guess each rank gets its portion of the 
eigenvector, and I need to put them together afterwards?

Thanks,
Feng


From: Jose E. Roman 
Sent: 21 September 2022 12:34
To: feng wang 
Cc: Matthew Knepley ; petsc-users@mcs.anl.gov 

Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange

If you define the MATOP_CREATE_VECS operation in your shell matrix so that it 
creates a ghost vector, then all vectors within EPS will be ghost vectors, 
including those that are received as arguments of MatMult(). Not sure if this 
will work.

A simpler solution is that you store a ghost vector in the context of your 
shell matrix, and then in MatMult() you receive a regular parallel vector x, 
then update the ghost points using the auxiliary ghost vector, do the 
computation and store the result in the regular parallel vector y.

Jose


> El 21 sept 2022, a las 14:09, feng wang  escribió:
>
> Thanks for your reply.
>
> For GMRES, I create a ghost vector and give it to KSPSolve. For Slepc, it 
> only takes the shell matrix for EPSSetOperators. Suppose the shell matrix of 
> the eigensolver defines MatMult(Mat m ,Vec x, Vec y), how does it know Vec x 
> is  a ghost vector and how many ghost cells there are?
>
> Thanks,
> Feng
> From: Matthew Knepley 
> Sent: 21 September 2022 11:58
> To: feng wang 
> Cc: petsc-users@mcs.anl.gov 
> Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange
>
> On Wed, Sep 21, 2022 at 7:41 AM feng wang  wrote:
> Hello,
>
> I am using Slepc with a shell matrix. The sequential version seems working 
> and now I am trying to make it run in parallel.
>
> The partition of the domain is done, I am not sure how to do the halo 
> exchange in the shell matrix in Slepc. I have a parallel version of 
> matrix-free GMRES in my code with Petsc. I was using VecCreateGhostBlock to 
> create vector with ghost cells, and then used  VecGhostUpdateBegin/End for 
> the halo exchange in the shell matrix, would this be the same for Slepc?
>
> That will be enough for the MatMult(). You would also have to use a SLEPc EPS 
> that only needed MatMult().
>
>   Thanks,
>
>  Matt
>
> Thanks,
> Feng
>
>
>
>
> --
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/



Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange

2022-09-21 Thread Jose E. Roman
If you define the MATOP_CREATE_VECS operation in your shell matrix so that it 
creates a ghost vector, then all vectors within EPS will be ghost vectors, 
including those that are received as arguments of MatMult(). Not sure if this 
will work.
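
A rough sketch of that, where MyShellCtx and its fields are just illustrative
names for whatever your shell context already stores:

typedef struct {
  PetscInt  bs, nloc;   /* block size and local (owned) vector length      */
  PetscInt  nghost;     /* number of ghost blocks on this process          */
  PetscInt *ghosts;     /* global block indices of the ghost cells         */
} MyShellCtx;

static PetscErrorCode MyCreateVecs(Mat m, Vec *right, Vec *left)
{
  MyShellCtx *ctx;

  PetscFunctionBeginUser;
  PetscCall(MatShellGetContext(m, &ctx));
  if (right) PetscCall(VecCreateGhostBlock(PetscObjectComm((PetscObject)m), ctx->bs, ctx->nloc, PETSC_DECIDE, ctx->nghost, ctx->ghosts, right));
  if (left)  PetscCall(VecCreateGhostBlock(PetscObjectComm((PetscObject)m), ctx->bs, ctx->nloc, PETSC_DECIDE, ctx->nghost, ctx->ghosts, left));
  PetscFunctionReturn(0);
}

/* after MatCreateShell(): */
PetscCall(MatShellSetOperation(m, MATOP_CREATE_VECS, (void (*)(void)) MyCreateVecs));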

A simpler solution is that you store a ghost vector in the context of your 
shell matrix, and then in MatMult() you receive a regular parallel vector x, 
then update the ghost points using the auxiliary ghost vector, do the 
computation and store the result in the regular parallel vector y.

Jose


> El 21 sept 2022, a las 14:09, feng wang  escribió:
> 
> Thanks for your reply. 
> 
> For GMRES, I create a ghost vector and give it to KSPSolve. For Slepc, it 
> only takes the shell matrix for EPSSetOperators. Suppose the shell matrix of 
> the eigensolver defines MatMult(Mat m ,Vec x, Vec y), how does it know Vec x 
> is  a ghost vector and how many ghost cells there are?
> 
> Thanks,
> Feng
> From: Matthew Knepley 
> Sent: 21 September 2022 11:58
> To: feng wang 
> Cc: petsc-users@mcs.anl.gov 
> Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange
>  
> On Wed, Sep 21, 2022 at 7:41 AM feng wang  wrote:
> Hello,
> 
> I am using Slepc with a shell matrix. The sequential version seems working 
> and now I am trying to make it run in parallel.
> 
> The partition of the domain is done, I am not sure how to do the halo 
> exchange in the shell matrix in Slepc. I have a parallel version of 
> matrix-free GMRES in my code with Petsc. I was using VecCreateGhostBlock to 
> create vector with ghost cells, and then used  VecGhostUpdateBegin/End for 
> the halo exchange in the shell matrix, would this be the same for Slepc?
> 
> That will be enough for the MatMult(). You would also have to use a SLEPc EPS 
> that only needed MatMult().
> 
>   Thanks,
> 
>  Matt
>  
> Thanks,
> Feng 
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/



Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange

2022-09-21 Thread feng wang
Thanks for your reply.

For GMRES, I create a ghost vector and give it to KSPSolve. For Slepc, it only 
takes the shell matrix for EPSSetOperators. Suppose the shell matrix of the 
eigensolver defines MatMult(Mat m, Vec x, Vec y), how does it know Vec x is a 
ghost vector and how many ghost cells there are?

Thanks,
Feng

From: Matthew Knepley 
Sent: 21 September 2022 11:58
To: feng wang 
Cc: petsc-users@mcs.anl.gov 
Subject: Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange

On Wed, Sep 21, 2022 at 7:41 AM feng wang  wrote:
Hello,

I am using Slepc with a shell matrix. The sequential version seems working and 
now I am trying to make it run in parallel.

The partition of the domain is done, I am not sure how to do the halo exchange 
in the shell matrix in Slepc. I have a parallel version of matrix-free GMRES in 
my code with Petsc. I was using VecCreateGhostBlock to create vector with ghost 
cells, and then used  VecGhostUpdateBegin/End for the halo exchange in the 
shell matrix, would this be the same for Slepc?

That will be enough for the MatMult(). You would also have to use a SLEPc EPS 
that only needed MatMult().

  Thanks,

 Matt

Thanks,
Feng




--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


Re: [petsc-users] Slepc, shell matrix, parallel, halo exchange

2022-09-21 Thread Matthew Knepley
On Wed, Sep 21, 2022 at 7:41 AM feng wang  wrote:

> Hello,
>
> I am using Slepc with a shell matrix. The sequential version seems working
> and now I am trying to make it run in parallel.
>
> The partition of the domain is done, I am not sure how to do the halo
> exchange in the shell matrix in Slepc. I have a parallel version of
> matrix-free GMRES in my code with Petsc. I was using VecCreateGhostBlock to
> create vector with ghost cells, and then used  VecGhostUpdateBegin/End for
> the halo exchange in the shell matrix, would this be the same for Slepc?
>

That will be enough for the MatMult(). You would also have to use a SLEPc
EPS that only needed MatMult().

  Thanks,

 Matt


> Thanks,
> Feng
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


[petsc-users] Slepc, shell matrix, parallel, halo exchange

2022-09-21 Thread feng wang
Hello,

I am using Slepc with a shell matrix. The sequential version seems to be working 
and now I am trying to make it run in parallel.

The partitioning of the domain is done, but I am not sure how to do the halo exchange 
in the shell matrix in Slepc. I have a parallel version of matrix-free GMRES in 
my code with Petsc. I was using VecCreateGhostBlock to create a vector with ghost 
cells, and then used VecGhostUpdateBegin/End for the halo exchange in the 
shell matrix; would this be the same for Slepc?
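
For reference, this is roughly what I do on the PETSc side for the GMRES case
(bs, nloc, nghost and ghosts are placeholders for my mesh data):

Vec x_g;

/* nloc = owned vector length on this rank (a multiple of bs); the halo cells
   are described by nghost global block indices stored in ghosts[] */
PetscCall(VecCreateGhostBlock(PETSC_COMM_WORLD, bs, nloc, PETSC_DECIDE, nghost, ghosts, &x_g));

/* halo exchange: owners push their values into the ghost slots */
PetscCall(VecGhostUpdateBegin(x_g, INSERT_VALUES, SCATTER_FORWARD));
PetscCall(VecGhostUpdateEnd(x_g, INSERT_VALUES, SCATTER_FORWARD));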

Thanks,
Feng




Re: [petsc-users] Problem solving Ax=b with rectangular matrix A

2022-09-21 Thread Pierre Jolivet
Yes, but you need to use a KSP that handles a rectangular Mat, such as KSPLSQR 
(-ksp_type lsqr).
PCLU does not handle a rectangular Pmat. The only PCs that handle a rectangular Pmat 
are PCQR and PCNONE.
If you supply the normal equations as the Pmat for LSQR, then you can use 
“standard” PCs.
You can have a look at https://petsc.org/main/src/ksp/ksp/tutorials/ex27.c.html, 
which covers most of these cases.
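
In C this is roughly (the Fortran calls are analogous; ksp, A, b, x as in your
program, and building the explicit normal equations is optional):

Mat AtA;   /* optional: explicit normal equations A^T A to use as the Pmat */

PetscCall(MatTransposeMatMult(A, A, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &AtA));
PetscCall(KSPSetType(ksp, KSPLSQR));
PetscCall(KSPSetOperators(ksp, A, AtA));   /* or (ksp, A, A) together with -pc_type none */
PetscCall(KSPSetFromOptions(ksp));
PetscCall(KSPSolve(ksp, b, x));            /* b has m entries, x has n entries */
PetscCall(MatDestroy(&AtA));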

Thanks,
Pierre

(sorry for the earlier answer sent wrongfully to petsc-maint, please discard 
the previous email)

> On 21 Sep 2022, at 10:03 AM, fujisan  wrote:
> 
> I'm trying to solve Ax=b with a sparse rectangular matrix A (of size 33x17 in 
> my test) using
> options '-ksp_type stcg -pc_type lu' on 1 or 2 cpus.
> 
> And I always get an error saying "Incompatible vector local lengths" (see 
> below).
> 
> Here are the relevant lines of my code:
> 
> program test
> ...
> ! Variable declarations
> 
> PetscCallA(PetscInitialize(PETSC_NULL_CHARACTER,ierr))
> 
> PetscCall(MatCreate(PETSC_COMM_WORLD,A,ierr))
> PetscCall(MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,m,n,ierr))
> PetscCall(MatSetType(A,MATMPIAIJ,ierr))
> PetscCall(MatSetFromOptions(A,ierr))
> PetscCall(MatSetUp(A,ierr))
> PetscCall(MatGetOwnershipRange(A,istart,iend,ierr))
> 
> do irow=istart,iend-1
> ... Reading from file ...
> PetscCall(MatSetValues(A,1,irow,nzv,col,val,ADD_VALUES,ierr))
> ...
> enddo
> 
> PetscCall(MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr))
> PetscCall(MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr))
> 
> ! Creating vectors x and b
> PetscCallA(MatCreateVecs(A,x,b,ierr))
> 
> ! Duplicating x in u.
> PetscCallA(VecDuplicate(x,u,ierr))
> 
> ! u is used to calculate b
> PetscCallA(VecSet(u,1.0,ierr))
> 
> PetscCallA(VecAssemblyBegin(u,ierr))
> PetscCallA(VecAssemblyEnd(u,ierr))
> 
> ! Calculating Au = b
> PetscCallA(MatMult(A,u,b,ierr)) ! A.u = b
> 
> PetscCallA(KSPSetType(ksp,KSPCG,ierr))
> 
> PetscCallA(KSPSetOperators(ksp,A,A,ierr))
> 
> PetscCallA(KSPSetFromOptions(ksp,ierr))
> 
> ! Solving Ax = b, x unknown
> PetscCallA(KSPSolve(ksp,b,x,ierr))
> 
> PetscCallA(VecDestroy(x,ierr))
> PetscCallA(VecDestroy(u,ierr))
> PetscCallA(VecDestroy(b,ierr))
> PetscCallA(MatDestroy(A,ierr))
> PetscCallA(KSPDestroy(ksp,ierr))
> 
> call PetscFinalize(ierr)
> end program
> 
> The code reads a sparse matrix from a binary file.
> I also output the sizes of matrix A and vectors b, x, u.
> They all seem consistent.
> 
> What am I doing wrong?
> Is it possible to solve Ax=b with A rectangular?
> 
> Thank you in advance for your help.
> Have a nice day.
> 
> Fuji
> 
>  Matrix size : m=  33  n=  17  cpu size:1
>  Size of matrix A  :   33  17
>  Size of vector b :   33
>  Size of vector x :   17
>  Size of vector u :   17
> [0]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: Arguments are incompatible
> [0]PETSC ERROR: Incompatible vector local lengths parameter # 1 local size 33 
> != parameter # 2 local size 17
> [0]PETSC ERROR: See https://petsc.org/release/faq/ 
>  for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.17.4-1341-g91b2b62a00  GIT 
> Date: 2022-09-15 19:26:07 +
> [0]PETSC ERROR: ./bin/solve on a x86_64 named master by fujisan Tue Sep 20 
> 16:56:37 2022
> [0]PETSC ERROR: Configure options --with-petsc-arch=x86_64 --COPTFLAGS="-g 
> -O3" --FOPTFLAGS="-g -O3" --CXXOPTFLAGS="-g -O3" --with-debugging=0 
> --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort 
> --with-single-library=1 --with-mpiexec=mpiexec --with-precision=double 
> --with-fortran-interfaces=1 --with-make=1 --with-mpi=1 --with-mpi-compilers=1 
> --download-fblaslapack=0 --download-hypre=1 --download-cmake=0 --with-cmake=1 
> --download-metis=1 --download-parmetis=1 --download-ptscotch=0 
> --download-suitesparse=1 --download-triangle=1 --download-superlu=1 
> --download-superlu_dist=1 --download-scalapack=1 --download-mumps=1 
> --download-elemental=1 --download-spai=0 --download-parms=1 --download-moab=1 
> --download-chaco=0 --download-fftw=1 --with-petsc4py=1 --download-mpi4py=1 
> --download-saws --download-concurrencykit=1 --download-revolve=1 
> --download-cams=1 --download-p4est=0 --with-zlib=1 --download-mfem=1 
> --download-glvis=0 --with-opengl=0 --download-libpng=1 --download-libjpeg=1 
> --download-slepc=1 --download-hpddm=1 --download-bamg=1 --download-mmg=0 
> --download-parmmg=0 --download-htool=1 --download-egads=0 
> --download-opencascade=0 PETSC_ARCH=x86_64
> [0]PETSC ERROR: #1 VecCopy() at 
> /data/softs/petsc/src/vec/vec/interface/vector.c:1607
> [0]PETSC ERROR: #2 KSPSolve_BiCG() at 
> /data/softs/petsc/src/ksp/ksp/impls/bicg/bicg.c:40

[petsc-users] Problem solving Ax=b with rectangular matrix A

2022-09-21 Thread fujisan
I'm trying to solve Ax=b with a sparse rectangular matrix A (of size 33x17
in my test) using
options '-ksp_type stcg -pc_type lu' on 1 or 2 cpus.

And I always get an error saying "Incompatible vector local lengths" (see
below).

Here are the relevant lines of my code:

program test
...
! Variable declarations

PetscCallA(PetscInitialize(PETSC_NULL_CHARACTER,ierr))

PetscCall(MatCreate(PETSC_COMM_WORLD,A,ierr))
PetscCall(MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,m,n,ierr))
PetscCall(MatSetType(A,MATMPIAIJ,ierr))
PetscCall(MatSetFromOptions(A,ierr))
PetscCall(MatSetUp(A,ierr))
PetscCall(MatGetOwnershipRange(A,istart,iend,ierr))

do irow=istart,iend-1
... Reading from file ...
PetscCall(MatSetValues(A,1,irow,nzv,col,val,ADD_VALUES,ierr))
...
enddo

PetscCall(MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr))
PetscCall(MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr))

! Creating vectors x and b
PetscCallA(MatCreateVecs(A,x,b,ierr))

! Duplicating x in u.
PetscCallA(VecDuplicate(x,u,ierr))

! u is used to calculate b
PetscCallA(VecSet(u,1.0,ierr))

PetscCallA(VecAssemblyBegin(u,ierr))
PetscCallA(VecAssemblyEnd(u,ierr))

! Calculating Au = b
PetscCallA(MatMult(A,u,b,ierr)) ! A.u = b

PetscCallA(KSPSetType(ksp,KSPCG,ierr))

PetscCallA(KSPSetOperators(ksp,A,A,ierr))

PetscCallA(KSPSetFromOptions(ksp,ierr))

! Solving Ax = b, x unknown
PetscCallA(KSPSolve(ksp,b,x,ierr))

PetscCallA(VecDestroy(x,ierr))
PetscCallA(VecDestroy(u,ierr))
PetscCallA(VecDestroy(b,ierr))
PetscCallA(MatDestroy(A,ierr))
PetscCallA(KSPDestroy(ksp,ierr))

call PetscFinalize(ierr)
end program

The code reads a sparse matrix from a binary file.
I also output the sizes of matrix A and vectors b, x, u.
They all seem consistent.

What am I doing wrong?
Is it possible to solve Ax=b with A rectangular?

Thank you in advance for your help.
Have a nice day.

Fuji

 Matrix size : m=  33  n=  17  cpu size:1
 Size of matrix A  :   33  17
 Size of vector b :   33
 Size of vector x :   17
 Size of vector u :   17
[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: Arguments are incompatible
[0]PETSC ERROR: Incompatible vector local lengths parameter # 1 local size
33 != parameter # 2 local size 17
[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.17.4-1341-g91b2b62a00
GIT Date: 2022-09-15 19:26:07 +
[0]PETSC ERROR: ./bin/solve on a x86_64 named master by fujisan Tue Sep 20
16:56:37 2022
[0]PETSC ERROR: Configure options --with-petsc-arch=x86_64 --COPTFLAGS="-g
-O3" --FOPTFLAGS="-g -O3" --CXXOPTFLAGS="-g -O3" --with-debugging=0
--with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort
--with-single-library=1 --with-mpiexec=mpiexec --with-precision=double
--with-fortran-interfaces=1 --with-make=1 --with-mpi=1
--with-mpi-compilers=1 --download-fblaslapack=0 --download-hypre=1
--download-cmake=0 --with-cmake=1 --download-metis=1 --download-parmetis=1
--download-ptscotch=0 --download-suitesparse=1 --download-triangle=1
--download-superlu=1 --download-superlu_dist=1 --download-scalapack=1
--download-mumps=1 --download-elemental=1 --download-spai=0
--download-parms=1 --download-moab=1 --download-chaco=0 --download-fftw=1
--with-petsc4py=1 --download-mpi4py=1 --download-saws
--download-concurrencykit=1 --download-revolve=1 --download-cams=1
--download-p4est=0 --with-zlib=1 --download-mfem=1 --download-glvis=0
--with-opengl=0 --download-libpng=1 --download-libjpeg=1 --download-slepc=1
--download-hpddm=1 --download-bamg=1 --download-mmg=0 --download-parmmg=0
--download-htool=1 --download-egads=0 --download-opencascade=0
PETSC_ARCH=x86_64
[0]PETSC ERROR: #1 VecCopy() at
/data/softs/petsc/src/vec/vec/interface/vector.c:1607
[0]PETSC ERROR: #2 KSPSolve_BiCG() at
/data/softs/petsc/src/ksp/ksp/impls/bicg/bicg.c:40
[0]PETSC ERROR: #3 KSPSolve_Private() at
/data/softs/petsc/src/ksp/ksp/interface/itfunc.c:877
[0]PETSC ERROR: #4 KSPSolve() at
/data/softs/petsc/src/ksp/ksp/interface/itfunc.c:1048
[0]PETSC ERROR: #5 solve.F90:218
Abort(75) on node 0 (rank 0 in comm 16): application called
MPI_Abort(MPI_COMM_SELF, 75) - process 0