Re: [petsc-users] [petsc-maint] DMSwarm on multiple processors

2023-10-26 Thread Matthew Knepley
Okay, there were a few problems:

1) You wrote past the bounds of the string loc_grid_gen[]

2) You destroyed the coordinate DA

I fixed these and it runs fine for me on several processes. I am including
my revised source, since I check many more error values. I converted it to
C because that is easier for me, although C does not accept your sqrt() in
a compile-time constant.
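
For reference, a minimal sketch of the second pitfall, assuming the coordinate DA was obtained with DMGetCoordinateDM() (the grid sizes below are invented, not the code from this thread): that call returns a borrowed reference, so the caller must not destroy it.

#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM da, cda;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* Illustrative 8x8x8 DMDA; the real grid sizes come from the user's code */
  PetscCall(DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                         DMDA_STENCIL_BOX, 8, 8, 8, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                         1, 1, NULL, NULL, NULL, &da));
  PetscCall(DMSetUp(da));
  PetscCall(DMDASetUniformCoordinates(da, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0));

  PetscCall(DMGetCoordinateDM(da, &cda)); /* borrowed reference */
  /* ... use cda to access the coordinate layout ... */
  /* Do NOT call DMDestroy(&cda); destroying da below is enough, and destroying
     cda here can surface later as the munmap_chunk()/free() aborts reported
     in this thread */
  PetscCall(DMDestroy(&da));
  PetscCall(PetscFinalize());
  return 0;
}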

  Thanks,

 Matt

On Thu, Oct 26, 2023 at 10:59 AM Barry Smith  wrote:

>
>    Please run with the -malloc_debug option, or even better, run under Valgrind:
> https://petsc.org/release/faq/
>
>
>
> On Oct 26, 2023, at 10:35 AM, Joauma Marichal via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
> Hello,
>
> Here is a very simple version where I have issues.
>
> Which I run as follows:
>
> cd Grid_generation
> make clean
> make all
> ./grid_generation
> cd ..
> make clean
> make all
> ./cobpor # on 1 proc
> # OR
> mpiexec ./cobpor -ksp_type cg -pc_type pfmg -dm_mat_type hyprestruct
> -pc_pfmg_skip_relax 1 -pc_pfmg_rap_time non-Galerkin # on multiple procs
>
> The error that I get is the following:
> munmap_chunk(): invalid pointer
> [cns266:2552391] *** Process received signal ***
> [cns266:2552391] Signal: Aborted (6)
> [cns266:2552391] Signal code:  (-6)
> [cns266:2552391] [ 0] /lib64/libc.so.6(+0x4eb20)[0x7fd7fd194b20]
> [cns266:2552391] [ 1] /lib64/libc.so.6(gsignal+0x10f)[0x7fd7fd194a9f]
> [cns266:2552391] [ 2] /lib64/libc.so.6(abort+0x127)[0x7fd7fd167e05]
> [cns266:2552391] [ 3] /lib64/libc.so.6(+0x91037)[0x7fd7fd1d7037]
> [cns266:2552391] [ 4] /lib64/libc.so.6(+0x9819c)[0x7fd7fd1de19c]
> [cns266:2552391] [ 5] /lib64/libc.so.6(+0x9844c)[0x7fd7fd1de44c]
> [cns266:2552391] [ 6] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(PetscFreeAlign+0xe)[0x7fd7fe63d50e]
> [cns266:2552391] [ 7] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(DMSetMatType+0x3d)[0x7fd7feab87ad]
> [cns266:2552391] [ 8] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(DMSetFromOptions+0x109)[0x7fd7feab8b59]
> [cns266:2552391] [ 9] ./cobpor[0x402df9]
> [cns266:2552391] [10] /lib64/libc.so.6(__libc_start_main+0xf3)[0x7fd7fd180cf3]
> [cns266:2552391] [11] ./cobpor[0x40304e]
> [cns266:2552391] *** End of error message ***
>
>
> Thanks a lot for your help.
>
> Best regards,
>
> Joauma
>
>
>
>
> *From: *Matthew Knepley
> *Date: *Wednesday, October 25, 2023 at 14:45
> *To: *Joauma Marichal
> *Cc: *petsc-ma...@mcs.anl.gov, petsc-users@mcs.anl.gov
> *Subject: *Re: [petsc-maint] DMSwarm on multiple processors
> On Wed, Oct 25, 2023 at 8:32 AM Joauma Marichal via petsc-maint <
> petsc-ma...@mcs.anl.gov> wrote:
>
> Hello,
>
> I am using the DMSwarm library in some Eulerian-Lagrangian approach to
> have vapor bubbles in water.
> I have obtained nice results recently and wanted to perform bigger
> simulations. Unfortunately, when I increase the number of processors used
> to run the simulation, I get the following error:
>
>
> free(): invalid size
>
> [cns136:590327] *** Process received signal ***
>
> [cns136:590327] Signal: Aborted (6)
>
> [cns136:590327] Signal code:  (-6)
>
> [cns136:590327] [ 0] /lib64/libc.so.6(+0x4eb20)[0x7f56cd4c9b20]
>
> [cns136:590327] [ 1] /lib64/libc.so.6(gsignal+0x10f)[0x7f56cd4c9a9f]
>
> [cns136:590327] [ 2] /lib64/libc.so.6(abort+0x127)[0x7f56cd49ce05]
>
> [cns136:590327] [ 3] /lib64/libc.so.6(+0x91037)[0x7f56cd50c037]
>
> [cns136:590327] [ 4] /lib64/libc.so.6(+0x9819c)[0x7f56cd51319c]
>
> [cns136:590327] [ 5] /lib64/libc.so.6(+0x99aac)[0x7f56cd514aac]
>
> [cns136:590327] [ 6] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(PetscSFSetUpRanks+0x4c4)[0x7f56cea71e64]
>
> [cns136:590327] [ 7] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(+0x841642)[0x7f56cea83642]
>
> [cns136:590327] [ 8] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(PetscSFSetUp+0x9e)[0x7f56cea7043e]
>
> [cns136:590327] [ 9] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(VecScatterCreate+0x164e)[0x7f56cea7bbde]
>
> [cns136:590327] [10] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(DMSetUp_DA_3D+0x3e38)[0x7f56cee84dd8]
>
> [cns136:590327] [11] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(DMSetUp_DA+0xd8)[0x7f56cee9b448]
>
> [cns136:590327] [12] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(DMSetUp+0x20)[0x7f56cededa20]
>
> [cns136:590327] [13] ./cobpor[0x4418dc]
>
> [cns136:590327] [14] ./cobpor[0x408b63]
>
> [cns136:590327] [15] /lib64/libc.so.6(__libc_start_main+0xf3)[0x7f56cd4b5cf3]
>
> [cns136:590327] [16] ./cobpor[0x40bdee]
>
> [cns136:590327] *** End of error message ***
>
> --
>
> Primary job  terminated normally, but 1 process returned
>
> a non-zero exit code. Per user-direction, the job has been aborted.
>
> 

Re: [petsc-users] Copying PETSc Objects Across MPI Communicators

2023-10-26 Thread Matthew Knepley
On Wed, Oct 25, 2023 at 11:55 PM Damyn Chipman <
damynchip...@u.boisestate.edu> wrote:

> Great thanks, that seemed to work well. This is something my algorithm
> will do fairly often (“elevating” a node’s communicator to a communicator
> that includes siblings). The matrices formed are dense but low rank. With
> MatCreateSubMatrix, it appears I do a lot of copying from one Mat to
> another. Is there a way to do it with array copying or pointer movement
> instead of copying entries?
>

We could make a fast path for dense that avoids MatSetValues(). Can you
make an issue for this? The number one thing that would make this faster is
to contribute a small test. Then we could run it continually when putting
in the fast path to make sure we are preserving correctness.
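
For concreteness, a hedged sketch of what such a small test might look like, following the MatCreateSubMatrix approach Jed describes below: a dense Mat owned entirely by rank 0 is redistributed across all ranks of the communicator. The size, values, and even row/column split are invented, and I have not verified that every PETSc version accepts this column IS for MATDENSE.

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat         A, B;
  IS          isrow, iscol;
  PetscInt    m = 8, nloc, first;
  PetscMPIInt rank, size;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &size));

  /* Parallel dense Mat whose rows and columns are all owned by rank 0 */
  PetscCall(MatCreateDense(PETSC_COMM_WORLD, rank ? 0 : m, rank ? 0 : m, m, m, NULL, &A));
  if (!rank) {
    for (PetscInt i = 0; i < m; i++)
      for (PetscInt j = 0; j < m; j++) PetscCall(MatSetValue(A, i, j, (PetscScalar)(i + j), INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  /* Each rank requests a contiguous block of rows and columns of the copy */
  nloc  = m / size + (rank < m % size ? 1 : 0);
  first = rank * (m / size) + PetscMin(rank, m % size);
  PetscCall(ISCreateStride(PETSC_COMM_WORLD, nloc, first, 1, &isrow));
  PetscCall(ISCreateStride(PETSC_COMM_WORLD, nloc, first, 1, &iscol));
  PetscCall(MatCreateSubMatrix(A, isrow, iscol, MAT_INITIAL_MATRIX, &B));
  PetscCall(MatView(B, PETSC_VIEWER_STDOUT_WORLD));

  PetscCall(ISDestroy(&isrow));
  PetscCall(ISDestroy(&iscol));
  PetscCall(MatDestroy(&A));
  PetscCall(MatDestroy(&B));
  PetscCall(PetscFinalize());
  return 0;
}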

  Thanks,

Matt


> -Damyn
>
> On Oct 24, 2023, at 9:51 PM, Jed Brown  wrote:
>
> You can place it in a parallel Mat (that has rows or columns on only one
> rank or a subset of ranks) and then MatCreateSubMatrix with all new
> rows/columns on a different rank or subset of ranks.
>
> That said, you usually have a function that assembles the matrix and you
> can just call that on the new communicator.
>
> Damyn Chipman  writes:
>
> Hi PETSc developers,
>
> In short, my question is this: Does PETSc provide a way to move or copy an
> object (say a Mat) from one communicator to another?
>
> The more detailed scenario is this: I’m working on a linear algebra solver
> on quadtree meshes (i.e., p4est). I use communicator subsets in order to
> facilitate communication between siblings or nearby neighbors. When
> performing linear algebra across siblings (a group of 4), I need to copy a
> node’s data (i.e., a Mat object) from a sibling’s communicator to the
> communicator that includes the four siblings. From what I can tell, I can
> only copy a PETSc object onto the same communicator.
>
> My current approach will be to copy the raw data from the Mat on one
> communicator to a new Mat on the new communicator, but I wanted to see if
> there is a more “elegant” approach within PETSc.
>
> Thanks in advance,
>
> Damyn Chipman
> Boise State University
> PhD Candidate
> Computational Sciences and Engineering
> damynchip...@u.boisestate.edu
>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] [petsc-maint] DMSwarm on multiple processors

2023-10-26 Thread Barry Smith

   Please run with the -malloc_debug option, or even better, run under Valgrind:
https://petsc.org/release/faq/



> On Oct 26, 2023, at 10:35 AM, Joauma Marichal via petsc-users 
>  wrote:
> 
> Hello, 
>  
> Here is a very simple version where I have issues.
>  
> Which I run as follows:
>  
> cd Grid_generation 
> make clean 
> make all
> ./grid_generation 
> cd ..
> make clean 
> make all
> ./cobpor # on 1 proc
> # OR
> mpiexec ./cobpor -ksp_type cg -pc_type pfmg -dm_mat_type hyprestruct 
> -pc_pfmg_skip_relax 1 -pc_pfmg_rap_time non-Galerkin # on multiple procs
>  
> The error that I get is the following:
> munmap_chunk(): invalid pointer
> [cns266:2552391] *** Process received signal ***
> [cns266:2552391] Signal: Aborted (6)
> [cns266:2552391] Signal code:  (-6)
> [cns266:2552391] [ 0] /lib64/libc.so.6(+0x4eb20)[0x7fd7fd194b20]
> [cns266:2552391] [ 1] /lib64/libc.so.6(gsignal+0x10f)[0x7fd7fd194a9f]
> [cns266:2552391] [ 2] /lib64/libc.so.6(abort+0x127)[0x7fd7fd167e05]
> [cns266:2552391] [ 3] /lib64/libc.so.6(+0x91037)[0x7fd7fd1d7037]
> [cns266:2552391] [ 4] /lib64/libc.so.6(+0x9819c)[0x7fd7fd1de19c]
> [cns266:2552391] [ 5] /lib64/libc.so.6(+0x9844c)[0x7fd7fd1de44c]
> [cns266:2552391] [ 6] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(PetscFreeAlign+0xe)[0x7fd7fe63d50e]
> [cns266:2552391] [ 7] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(DMSetMatType+0x3d)[0x7fd7feab87ad]
> [cns266:2552391] [ 8] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(DMSetFromOptions+0x109)[0x7fd7feab8b59]
> [cns266:2552391] [ 9] ./cobpor[0x402df9]
> [cns266:2552391] [10] /lib64/libc.so.6(__libc_start_main+0xf3)[0x7fd7fd180cf3]
> [cns266:2552391] [11] ./cobpor[0x40304e]
> [cns266:2552391] *** End of error message ***
>  
>  
> Thanks a lot for your help. 
>  
> Best regards, 
>  
> Joauma
>  
>  
>  
> From: Matthew Knepley <knep...@gmail.com>
> Date: Wednesday, October 25, 2023 at 14:45
> To: Joauma Marichal
> Cc: petsc-ma...@mcs.anl.gov, petsc-users@mcs.anl.gov
> Subject: Re: [petsc-maint] DMSwarm on multiple processors
> 
> On Wed, Oct 25, 2023 at 8:32 AM Joauma Marichal via petsc-maint
> <petsc-ma...@mcs.anl.gov> wrote:
> Hello, 
>  
> I am using the DMSwarm library in some Eulerian-Lagrangian approach to have 
> vapor bubbles in water. 
> I have obtained nice results recently and wanted to perform bigger 
> simulations. Unfortunately, when I increase the number of processors used to 
> run the simulation, I get the following error:
>  
> free(): invalid size
> 
> [cns136:590327] *** Process received signal ***
> 
> [cns136:590327] Signal: Aborted (6)
> 
> [cns136:590327] Signal code:  (-6)
> 
> [cns136:590327] [ 0] /lib64/libc.so.6(+0x4eb20)[0x7f56cd4c9b20]
>
> [cns136:590327] [ 1] /lib64/libc.so.6(gsignal+0x10f)[0x7f56cd4c9a9f]
>
> [cns136:590327] [ 2] /lib64/libc.so.6(abort+0x127)[0x7f56cd49ce05]
>
> [cns136:590327] [ 3] /lib64/libc.so.6(+0x91037)[0x7f56cd50c037]
>
> [cns136:590327] [ 4] /lib64/libc.so.6(+0x9819c)[0x7f56cd51319c]
>
> [cns136:590327] [ 5] /lib64/libc.so.6(+0x99aac)[0x7f56cd514aac]
>
> [cns136:590327] [ 6] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(PetscSFSetUpRanks+0x4c4)[0x7f56cea71e64]
>
> [cns136:590327] [ 7] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(+0x841642)[0x7f56cea83642]
>
> [cns136:590327] [ 8] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(PetscSFSetUp+0x9e)[0x7f56cea7043e]
>
> [cns136:590327] [ 9] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(VecScatterCreate+0x164e)[0x7f56cea7bbde]
>
> [cns136:590327] [10] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(DMSetUp_DA_3D+0x3e38)[0x7f56cee84dd8]
>
> [cns136:590327] [11] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(DMSetUp_DA+0xd8)[0x7f56cee9b448]
>
> [cns136:590327] [12] /gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(DMSetUp+0x20)[0x7f56cededa20]
>
> [cns136:590327] [13] ./cobpor[0x4418dc]
>
> [cns136:590327] [14] ./cobpor[0x408b63]
>
> [cns136:590327] [15] /lib64/libc.so.6(__libc_start_main+0xf3)[0x7f56cd4b5cf3]
>
> [cns136:590327] 

[petsc-users] Joauma Marichal shared the folder « marha » with you

2023-10-26 Thread Joauma Marichal via petsc-users
Joauma Marichal shared the folder « marha » with you. (Microsoft file-sharing
notification; the link only works for the direct recipients of this message.)


Re: [petsc-users] [petsc-maint] DMSwarm on multiple processors

2023-10-26 Thread Joauma Marichal via petsc-users
Hello,



Here is a very simple version where I have issues.



Which I run as follows:



cd Grid_generation

make clean

make all

./grid_generation

cd ..

make clean

make all

./cobpor # on 1 proc

# OR

mpiexec ./cobpor -ksp_type cg -pc_type pfmg -dm_mat_type hyprestruct 
-pc_pfmg_skip_relax 1 -pc_pfmg_rap_time non-Galerkin # on multiple procs


The error that I get is the following:

munmap_chunk(): invalid pointer

[cns266:2552391] *** Process received signal ***

[cns266:2552391] Signal: Aborted (6)

[cns266:2552391] Signal code:  (-6)

[cns266:2552391] [ 0] /lib64/libc.so.6(+0x4eb20)[0x7fd7fd194b20]

[cns266:2552391] [ 1] /lib64/libc.so.6(gsignal+0x10f)[0x7fd7fd194a9f]

[cns266:2552391] [ 2] /lib64/libc.so.6(abort+0x127)[0x7fd7fd167e05]

[cns266:2552391] [ 3] /lib64/libc.so.6(+0x91037)[0x7fd7fd1d7037]

[cns266:2552391] [ 4] /lib64/libc.so.6(+0x9819c)[0x7fd7fd1de19c]

[cns266:2552391] [ 5] /lib64/libc.so.6(+0x9844c)[0x7fd7fd1de44c]

[cns266:2552391] [ 6] 
/gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(PetscFreeAlign+0xe)[0x7fd7fe63d50e]

[cns266:2552391] [ 7] 
/gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(DMSetMatType+0x3d)[0x7fd7feab87ad]

[cns266:2552391] [ 8] 
/gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(DMSetFromOptions+0x109)[0x7fd7feab8b59]

[cns266:2552391] [ 9] ./cobpor[0x402df9]

[cns266:2552391] [10] /lib64/libc.so.6(__libc_start_main+0xf3)[0x7fd7fd180cf3]

[cns266:2552391] [11] ./cobpor[0x40304e]

[cns266:2552391] *** End of error message ***





Thanks a lot for your help.



Best regards,



Joauma




From: Matthew Knepley
Date: Wednesday, October 25, 2023 at 14:45
To: Joauma Marichal
Cc: petsc-ma...@mcs.anl.gov, petsc-users@mcs.anl.gov
Subject: Re: [petsc-maint] DMSwarm on multiple processors
On Wed, Oct 25, 2023 at 8:32 AM Joauma Marichal via petsc-maint
<petsc-ma...@mcs.anl.gov> wrote:
Hello,

I am using the DMSwarm library in some Eulerian-Lagrangian approach to have 
vapor bubbles in water.
I have obtained nice results recently and wanted to perform bigger simulations. 
Unfortunately, when I increase the number of processors used to run the 
simulation, I get the following error:


free(): invalid size

[cns136:590327] *** Process received signal ***

[cns136:590327] Signal: Aborted (6)

[cns136:590327] Signal code:  (-6)

[cns136:590327] [ 0] /lib64/libc.so.6(+0x4eb20)[0x7f56cd4c9b20]

[cns136:590327] [ 1] /lib64/libc.so.6(gsignal+0x10f)[0x7f56cd4c9a9f]

[cns136:590327] [ 2] /lib64/libc.so.6(abort+0x127)[0x7f56cd49ce05]

[cns136:590327] [ 3] /lib64/libc.so.6(+0x91037)[0x7f56cd50c037]

[cns136:590327] [ 4] /lib64/libc.so.6(+0x9819c)[0x7f56cd51319c]

[cns136:590327] [ 5] /lib64/libc.so.6(+0x99aac)[0x7f56cd514aac]

[cns136:590327] [ 6] 
/gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(PetscSFSetUpRanks+0x4c4)[0x7f56cea71e64]

[cns136:590327] [ 7] 
/gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(+0x841642)[0x7f56cea83642]

[cns136:590327] [ 8] 
/gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(PetscSFSetUp+0x9e)[0x7f56cea7043e]

[cns136:590327] [ 9] 
/gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(VecScatterCreate+0x164e)[0x7f56cea7bbde]

[cns136:590327] [10] 
/gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(DMSetUp_DA_3D+0x3e38)[0x7f56cee84dd8]

[cns136:590327] [11] 
/gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(DMSetUp_DA+0xd8)[0x7f56cee9b448]

[cns136:590327] [12] 
/gpfs/home/acad/ucl-tfl/marichaj/marha/lib_petsc/lib/libpetsc.so.3.019(DMSetUp+0x20)[0x7f56cededa20]

[cns136:590327] [13] ./cobpor[0x4418dc]

[cns136:590327] [14] ./cobpor[0x408b63]

[cns136:590327] [15] /lib64/libc.so.6(__libc_start_main+0xf3)[0x7f56cd4b5cf3]

[cns136:590327] [16] ./cobpor[0x40bdee]

[cns136:590327] *** End of error message ***

--

Primary job  terminated normally, but 1 process returned

a non-zero exit code. Per user-direction, the job has been aborted.

--

--

mpiexec noticed that process rank 84 with PID 590327 on node cns136 exited on 
signal 6 (Aborted).

--

When I reduce the number of processors the error disappears and when I run my 
code without the vapor bubbles it also works.
The problem seems to take place at this moment:

DMCreate(PETSC_COMM_WORLD,swarm);
DMSetType(*swarm,DMSWARM);
DMSetDimension(*swarm,3);
DMSwarmSetType(*swarm,DMSWARM_PIC);
DMSwarmSetCellDM(*swarm,*dmcell);
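
For reference, a hedged, fully error-checked sketch of a minimal DMSwarm-PIC setup along these lines; the 8x8x8 DMDA cell DM, the registered "radius" field, and the buffer size are invented placeholders, not the code from this thread.

#include <petscdmda.h>
#include <petscdmswarm.h>

int main(int argc, char **argv)
{
  DM dmcell, swarm;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* Invented 8x8x8 DMDA standing in for the real background (cell) DM */
  PetscCall(DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                         DMDA_STENCIL_BOX, 8, 8, 8, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                         1, 1, NULL, NULL, NULL, &dmcell));
  PetscCall(DMSetUp(dmcell));
  PetscCall(DMDASetUniformCoordinates(dmcell, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0));

  /* Same sequence as the snippet above, with every return value checked */
  PetscCall(DMCreate(PETSC_COMM_WORLD, &swarm));
  PetscCall(DMSetType(swarm, DMSWARM));
  PetscCall(DMSetDimension(swarm, 3));
  PetscCall(DMSwarmSetType(swarm, DMSWARM_PIC));
  PetscCall(DMSwarmSetCellDM(swarm, dmcell));
  PetscCall(DMSwarmRegisterPetscDatatypeField(swarm, "radius", 1, PETSC_REAL)); /* example field */
  PetscCall(DMSwarmFinalizeFieldRegister(swarm));
  PetscCall(DMSwarmSetLocalSizes(swarm, 0, 4)); /* invented buffer size */
  /* A real code would now add particles, e.g. with DMSwarmInsertPointsUsingCellDM() */

  PetscCall(DMDestroy(&swarm));
  PetscCall(DMDestroy(&dmcell));
  PetscCall(PetscFinalize());
  return 0;
}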


Thanks a lot for your help.

Things that would help us track this down:

1) The smallest example where it fails

2) The smallest number of processes where it fails

3) A stack trace of the 

Re: [petsc-users] alternative for MatCreateSeqAIJWithArrays

2023-10-26 Thread Barry Smith

   Is your code sequential (possibly with OpenMP) or MPI parallel? Do you plan
to make your part of the code MPI parallel?

If it is sequential or OpenMP parallel, you might consider using the new
feature https://petsc.org/release/manualpages/PC/PCMPI/#pcmpi . Depending on your
system, it is an easy way to run the linear solver in parallel while the code
stays sequential, and it can give a reasonable speedup.

> On Oct 26, 2023, at 8:58 AM, Qiyue Lu  wrote:
> 
> Hello,
> I am trying to incorporate PETSc as a linear solver to compute Ax=b in my 
> code. Currently, the sequential version works. 
> 1) I have the global matrix A in CSR format and they are stored in three 
> 1-dimensional arrays: row_ptr[ ], col_idx[ ], values[ ], and I am using 
> MatCreateSeqAIJWithArrays to get the PETSc format matrix. This works. 
> 2) I am trying to use multicores, and when I use "srun -n 6", I got the error 
> Comm must be of size 1 from the MatCreateSeqAIJWithArrays. Saying I cannot 
> use SEQ function in a parallel context. 
> 3) I don't think MatCreateMPIAIJWithArrays and MatMPIAIJSetPreallocationCSR 
> are good options for me, since I already have the global matrix as a whole. 
> 
> I wonder, from the global CSR format data, how can I reach the PETSc format 
> matrix for parallel KSP computation. Are the MatSetValue, MatSetValues what I 
> need?
> 
> Thanks,
> Qiyue Lu



Re: [petsc-users] alternative for MatCreateSeqAIJWithArrays

2023-10-26 Thread Junchao Zhang
On Thu, Oct 26, 2023 at 8:21 AM Qiyue Lu  wrote:

> Hello,
> I am trying to incorporate PETSc as a linear solver to compute Ax=b in my
> code. Currently, the sequential version works.
> 1) I have the global matrix A in CSR format and they are stored in three
> 1-dimensional arrays: row_ptr[ ], col_idx[ ], values[ ], and I am using
> MatCreateSeqAIJWithArrays to get the PETSc format matrix. This works.
> 2) I am trying to use multicores, and when I use "srun -n 6", I got the
> error *Comm must be of size 1* from the MatCreateSeqAIJWithArrays. Saying
> I cannot use SEQ function in a parallel context.
> 3) I don't think MatCreateMPIAIJWithArrays and
> MatMPIAIJSetPreallocationCSR are good options for me, since I already have
> the global matrix as a whole.
>
> I wonder, from the global CSR format data, how can I reach the PETSc
> format matrix for parallel KSP computation. Are the MatSetValue,
> MatSetValues what I need?
>
Yes, MatSetValues on each row.   Your matrix data is originally on one
process, which is not efficient.  You could try to distribute it at the
beginning.
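
A hedged sketch of that suggestion: rank 0 holds the global CSR arrays and
feeds them row by row into a parallel AIJ matrix with MatSetValues. The 4x4
arrays below are an invented stand-in for the real row_ptr/col_idx/values,
and a production code would preallocate instead of relying on MatSetUp().

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat         A;
  PetscMPIInt rank;
  PetscInt    N = 4;
  /* Example CSR data for a 4x4 tridiagonal matrix (invented) */
  PetscInt    row_ptr[] = {0, 2, 5, 8, 10};
  PetscInt    col_idx[] = {0, 1, 0, 1, 2, 1, 2, 3, 2, 3};
  PetscScalar values[]  = {2, -1, -1, 2, -1, -1, 2, -1, -1, 2};

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));

  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N));
  PetscCall(MatSetType(A, MATAIJ));
  PetscCall(MatSetUp(A)); /* a real code would preallocate here */

  if (rank == 0) { /* only rank 0 owns the CSR data; MatSetValues ships it */
    for (PetscInt i = 0; i < N; i++) {
      PetscInt ncols = row_ptr[i + 1] - row_ptr[i];
      PetscCall(MatSetValues(A, 1, &i, ncols, &col_idx[row_ptr[i]], &values[row_ptr[i]], INSERT_VALUES));
    }
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatView(A, PETSC_VIEWER_STDOUT_WORLD));

  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}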


>
> Thanks,
> Qiyue Lu
>


[petsc-users] alternative for MatCreateSeqAIJWithArrays

2023-10-26 Thread Qiyue Lu
Hello,
I am trying to incorporate PETSc as a linear solver to compute Ax=b in my
code. Currently, the sequential version works.
1) I have the global matrix A in CSR format and they are stored in three
1-dimensional arrays: row_ptr[ ], col_idx[ ], values[ ], and I am using
MatCreateSeqAIJWithArrays to get the PETSc format matrix. This works.
2) I am trying to use multicores, and when I use "srun -n 6", I got the
error *Comm must be of size 1* from the MatCreateSeqAIJWithArrays. Saying I
cannot use SEQ function in a parallel context.
3) I don't think MatCreateMPIAIJWithArrays and MatMPIAIJSetPreallocationCSR
are good options for me, since I already have the global matrix as a whole.

I wonder, from the global CSR format data, how can I reach the PETSc format
matrix for parallel KSP computation. Are the MatSetValue, MatSetValues what
I need?

Thanks,
Qiyue Lu