Re: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix?

2023-06-20 Thread Matthew Knepley
On Tue, Jun 20, 2023 at 2:02 PM Diego Magela Lemos wrote:

> So... what do I need to do, please?
> Why am I getting wrong results when solving the linear system if the
> matrix is filled in with MatSetPreallocationCOO and MatSetValuesCOO?
>

It appears that you have _all_ processes submit _all_ triples (i, j, v).
Each triple can only be submitted by a single process. You can fix this in
many ways. For example, an easy but suboptimal way is just to have process
0 submit them all, and all other processes submit nothing.
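
A rough, untested sketch of that rank-0-only variant, reusing the coo_i, coo_j,
and coo_v vectors from your fill_in_matrix.cc, might look like this (both
routines are collective, so every rank still calls them, but only rank 0 passes
a nonzero count):

    PetscMPIInt rank;
    PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
    // Only rank 0 hands in the COO triples; all other ranks hand in an empty set.
    PetscCount ncoo = (rank == 0) ? (PetscCount)coo_v.size() : 0;
    PetscCall(MatSetPreallocationCOO(A, ncoo,
                                     rank == 0 ? coo_i.data() : NULL,
                                     rank == 0 ? coo_j.data() : NULL));
    PetscCall(MatSetValuesCOO(A, rank == 0 ? coo_v.data() : NULL, ADD_VALUES));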

  Thanks,

  Matt


> On Tue, Jun 20, 2023 at 2:56 PM Jed Brown wrote:
>
>> Matthew Knepley  writes:
>>
>> >> The matrix entries are multiplied by 2, that is, the number of
>> processes
>> >> used to execute the code.
>> >>
>> >
>> > No. This was mostly intended for GPUs, where there is 1 process. If you
>> > want to use multiple MPI processes, then each process can only introduce
>> > some disjoint subset of the values. This is also how MatSetValues()
>> works,
>> > but it might not be as obvious.
>>
>> They need not be disjoint, just sum to the expected values. This
>> interface is very convenient for FE and FV methods. MatSetValues with
>> ADD_VALUES has similar semantics without the intermediate storage, but it
>> forces you to submit one element matrix at a time. Classic parallelism
>> granularity versus memory use tradeoff with MatSetValuesCOO being a clear
>> win on GPUs and more nuanced for CPUs.
>>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix?

2023-06-20 Thread Diego Magela Lemos via petsc-users
So... what do I need to do, please?
Why am I getting wrong results when solving the linear system if the matrix
is filled in with MatSetPreallocationCOO and MatSetValuesCOO?

On Tue, Jun 20, 2023 at 2:56 PM Jed Brown wrote:

> Matthew Knepley  writes:
>
> >> The matrix entries are multiplied by 2, that is, the number of processes
> >> used to execute the code.
> >>
> >
> > No. This was mostly intended for GPUs, where there is 1 process. If you
> > want to use multiple MPI processes, then each process can only introduce
> > some disjoint subset of the values. This is also how MatSetValues()
> works,
> > but it might not be as obvious.
>
> They need not be disjoint, just sum to the expected values. This interface
> is very convenient for FE and FV methods. MatSetValues with ADD_VALUES has
> similar semantics without the intermediate storage, but it forces you to
> submit one element matrix at a time. Classic parallelism granularity versus
> memory use tradeoff with MatSetValuesCOO being a clear win on GPUs and more
> nuanced for CPUs.
>


Re: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix?

2023-06-20 Thread Jed Brown
Matthew Knepley  writes:

>> The matrix entries are multiplied by 2, that is, the number of processes
>> used to execute the code.
>>
>
> No. This was mostly intended for GPUs, where there is 1 process. If you
> want to use multiple MPI processes, then each process can only introduce
> some disjoint subset of the values. This is also how MatSetValues() works,
> but it might not be as obvious.

They need not be disjoint, just sum to the expected values. This interface is 
very convenient for FE and FV methods. MatSetValues with ADD_VALUES has similar 
semantics without the intermediate storage, but it forces you to submit one 
element matrix at a time. Classic parallelism granularity versus memory use 
tradeoff with MatSetValuesCOO being a clear win on GPUs and more nuanced for 
CPUs.
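
To make that concrete with a toy sketch of my own (assuming a parallel Mat A
that has already been created and sized as in the codes in this thread), every
rank can submit a partial value for the same global entry and the assembled
entry is the sum of all contributions:

    // Each rank contributes 1/nranks to the same global entry (0,0), so the
    // assembled value is 1.0 no matter how many ranks participate.
    PetscMPIInt nranks;
    PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &nranks));
    PetscInt    row = 0, col = 0;
    PetscScalar val = 1.0 / nranks;
    PetscCall(MatSetPreallocationCOO(A, 1, &row, &col));
    PetscCall(MatSetValuesCOO(A, &val, ADD_VALUES));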


Re: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix?

2023-06-20 Thread Jed Brown
You should partition the entries so each entry is submitted by only one 
process. Note that duplicate entries (on the same or different processes) are 
summed as you've seen. For example, in finite elements, it's typical to 
partition the elements and each process submits entries from its elements. 
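
A minimal sketch of one such partition for the COO arrays used in this thread
(round-robin by entry index; any rule that assigns each triple to exactly one
rank works) could look like:

    PetscMPIInt rank, nranks;
    PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
    PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &nranks));

    // Keep only the triples this rank owns: entry k goes to rank k % nranks.
    std::vector<PetscInt>    my_i, my_j;
    std::vector<PetscScalar> my_v;
    for (size_t k = 0; k < coo_v.size(); k++) {
      if ((PetscMPIInt)(k % nranks) == rank) {
        my_i.push_back(coo_i[k]);
        my_j.push_back(coo_j[k]);
        my_v.push_back(coo_v[k]);
      }
    }
    PetscCall(MatSetPreallocationCOO(A, my_v.size(), my_i.data(), my_j.data()));
    PetscCall(MatSetValuesCOO(A, my_v.data(), ADD_VALUES));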

Diego Magela Lemos via petsc-users  writes:

> Using all recommended approaches it worked!
> Thank you all in advance.
>
> Now, I'm facing problems when solving a linear system using each approach.
>
> *COO approach*
>
> Using MatSetPreallocationCOO and then MatSetValuesCOO, I'm able to fill in
> the matrix when running with 1 MPI process.
> But, if I run with more than one MPI process, the values entries are
> multiplied by the number of MPI processes being used.
> Is this behavior correct?
>
> Consider the following code:
>
> // fill_in_matrix.cc
>
> static char help[] = "Fill in a parallel COO format sparse matrix.";
>
> #include <petsc.h>
> #include <vector>
>
> int main(int argc, char **args)
> {
>     std::vector<PetscInt>    coo_i{0, 0, 1, 2, 3, 4};
>     std::vector<PetscInt>    coo_j{0, 0, 1, 2, 3, 4};
>     std::vector<PetscScalar> coo_v{2, -1, 2, 3, 4, 5};
>
>     Mat A;
>
>     PetscInt size = 5;
>
>     PetscCall(PetscInitialize(&argc, &args, NULL, help));
>
>     // Create matrix
>     PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
>     PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size));
>     PetscCall(MatSetFromOptions(A));
>
>     // Populate matrix
>     PetscCall(MatSetPreallocationCOO(A, coo_v.size(), coo_i.data(), coo_j.data()));
>     PetscCall(MatSetValuesCOO(A, coo_v.data(), ADD_VALUES));
>
>     // View matrix
>     PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD));
>
>     PetscCall(MatDestroy(&A));
>
>     PetscCall(PetscFinalize());
>     return 0;
> }
>
> When running with petscmpiexec -n 1 ./fill_in_matrix, I got
>
> Mat Object: 1 MPI process
>>   type: seqaij
>> row 0: (0, 1.)
>> row 1: (1, 2.)
>> row 2: (2, 3.)
>> row 3: (3, 4.)
>> row 4: (4, 5.)
>
>
> Which is a correct result. But, when running it with petscmpiexec -n 2
> ./fill_in_matrix, I get
>
> Mat Object: 2 MPI process
>>   type: mpiaij
>> row 0: (0, 2.)
>> row 1: (1, 4.)
>> row 2: (2, 6.)
>> row 3: (3, 8.)
>> row 4: (4, 10.)
>
>
> The matrix entries are multiplied by 2, that is, the number of processes
> used to execute the code.
>
> *MatSetValues approach*
>
> I obtain the same behavior when filling in the matrix by using MatSetValues
>
> static char help[] = "Fill in a parallel COO format sparse matrix.";
>
> // fill_in_matrix.cc
>
> #include <petsc.h>
> #include <vector>
>
> int main(int argc, char **args)
> {
>     std::vector<PetscInt>    coo_i{0, 0, 1, 2, 3, 4};
>     std::vector<PetscInt>    coo_j{0, 0, 1, 2, 3, 4};
>     std::vector<PetscScalar> coo_v{2, -1, 2, 3, 4, 5};
>
>     Mat A;
>     PetscInt size = 5;
>
>     PetscCall(PetscInitialize(&argc, &args, NULL, help));
>
>     // Create matrix
>     PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
>     PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size));
>     PetscCall(MatSetFromOptions(A));
>
>     // Populate matrix
>     for (size_t i = 0; i < coo_v.size(); i++)
>         PetscCall(MatSetValues(A, 1, &coo_i.at(i), 1, &coo_j.at(i), &coo_v.at(i), ADD_VALUES));
>
>     PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
>     PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
>
>     // View matrix
>     PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD));
>
>     PetscCall(MatDestroy(&A));
>
>     PetscCall(PetscFinalize());
>     return 0;
> }
>
> When solving a linear system, I get the correct answer no matter the number
> of MPI processes when using the MatSetValues approach.
> The same is not true when using the COO approach, whose result is only correct
> when using 1 MPI process.
>
> static char help[] = "Fill in a parallel COO format sparse matrix and solve
> a linear system.";
>
> #include <petsc.h>
> #include <vector>
>
> int main(int argc, char **args)
> {
>     std::vector<PetscInt>    coo_i{0, 0, 1, 2, 3, 4};
>     std::vector<PetscInt>    coo_j{0, 0, 1, 2, 3, 4};
>     std::vector<PetscScalar> coo_v{2, -1, 2, 3, 4, 5};
>
>     Mat A;
>     Vec B, X, U;
>     KSP ksp;
>     PC pc;
>     PetscReal norm; // norm of solution error
>     PetscInt its;
>
>     PetscInt size = 5;
>
>     PetscCall(PetscInitialize(&argc, &args, NULL, help));
>
>     // Create matrix
>     PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
>     PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size));
>     PetscCall(MatSetFromOptions(A));
>
>     // Populate matrix
>
>     // COO
>     PetscCall(MatSetPreallocationCOO(A, coo_v.size(), coo_i.data(), coo_j.data()));
>     PetscCall(MatSetValuesCOO(A, coo_v.data(), ADD_VALUES));
>
>     // MatSetValues for-loop
>     // for (size_t i = 0; i < coo_v.size(); i++)
>     //     PetscCall(MatSetValues(A, 1, &coo_i.at(i), 1, &coo_j.at(i), &coo_v.at(i), ADD_VALUES));
>
>     // PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
>     // PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
>
>     // View matrix
>     PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD));
>
>     // Create vector B
> 

Re: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix?

2023-06-20 Thread Matthew Knepley
On Tue, Jun 20, 2023 at 1:43 PM Diego Magela Lemos wrote:

> Using all recommended approaches it worked!
> Thank you all in advance.
>
> Now, I'm facing problems when solving a linear system using each approach.
>
> *COO approach*
>

I can answer this one.


> Using MatSetPreallocationCOO and then MatSetValuesCOO, I'm able to fill
> in the matrix when running with 1 MPI process.
> But, if I run with more than one MPI process, the values entries are
> multiplied by the number of MPI processes being used.
> Is this behavior correct?
>
> Consider the following code:
>
> // fill_in_matrix.cc
>
> static char help[] = "Fill in a parallel COO format sparse matrix.";
>
> #include <petsc.h>
> #include <vector>
>
> int main(int argc, char **args)
> {
>     std::vector<PetscInt>    coo_i{0, 0, 1, 2, 3, 4};
>     std::vector<PetscInt>    coo_j{0, 0, 1, 2, 3, 4};
>     std::vector<PetscScalar> coo_v{2, -1, 2, 3, 4, 5};
>
>     Mat A;
>
>     PetscInt size = 5;
>
>     PetscCall(PetscInitialize(&argc, &args, NULL, help));
>
>     // Create matrix
>     PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
>     PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size));
>     PetscCall(MatSetFromOptions(A));
>
>     // Populate matrix
>     PetscCall(MatSetPreallocationCOO(A, coo_v.size(), coo_i.data(), coo_j.data()));
>     PetscCall(MatSetValuesCOO(A, coo_v.data(), ADD_VALUES));
>
>     // View matrix
>     PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD));
>
>     PetscCall(MatDestroy(&A));
>
>     PetscCall(PetscFinalize());
>     return 0;
> }
>
> When running with petscmpiexec -n 1 ./fill_in_matrix, I got
>
> Mat Object: 1 MPI process
>>   type: seqaij
>> row 0: (0, 1.)
>> row 1: (1, 2.)
>> row 2: (2, 3.)
>> row 3: (3, 4.)
>> row 4: (4, 5.)
>
>
> Which is a correct result. But, when running it with petscmpiexec -n 2
> ./fill_in_matrix, I get
>
> Mat Object: 2 MPI process
>>   type: mpiaij
>> row 0: (0, 2.)
>> row 1: (1, 4.)
>> row 2: (2, 6.)
>> row 3: (3, 8.)
>> row 4: (4, 10.)
>
>
> The matrix entries are multiplied by 2, that is, the number of processes
> used to execute the code.
>

No. This was mostly intended for GPUs, where there is 1 process. If you
want to use multiple MPI processes, then each process can only introduce
some disjoint subset of the values. This is also how MatSetValues() works,
but it might not be as obvious.

  Thanks,

 Matt


> *MatSetValues approach*
>
> I obtain the same behavior when filling in the matrix by using MatSetValues
>
> static char help[] = "Fill in a parallel COO format sparse matrix.";
>
> // fill_in_matrix.cc
>
> #include <petsc.h>
> #include <vector>
>
> int main(int argc, char **args)
> {
>     std::vector<PetscInt>    coo_i{0, 0, 1, 2, 3, 4};
>     std::vector<PetscInt>    coo_j{0, 0, 1, 2, 3, 4};
>     std::vector<PetscScalar> coo_v{2, -1, 2, 3, 4, 5};
>
>     Mat A;
>     PetscInt size = 5;
>
>     PetscCall(PetscInitialize(&argc, &args, NULL, help));
>
>     // Create matrix
>     PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
>     PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size));
>     PetscCall(MatSetFromOptions(A));
>
>     // Populate matrix
>     for (size_t i = 0; i < coo_v.size(); i++)
>         PetscCall(MatSetValues(A, 1, &coo_i.at(i), 1, &coo_j.at(i), &coo_v.at(i), ADD_VALUES));
>
>     PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
>     PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
>
>     // View matrix
>     PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD));
>
>     PetscCall(MatDestroy(&A));
>
>     PetscCall(PetscFinalize());
>     return 0;
> }
>
> When solving a linear system, I get the correct answer no matter the
> number of MPI processes when using the MatSetValues approach.
> The same is not true when using the COO approach, whose result is only
> correct when using 1 MPI process.
>
> static char help[] = "Fill in a parallel COO format sparse matrix and
> solve a linear system.";
>
> #include <petsc.h>
> #include <vector>
>
> int main(int argc, char **args)
> {
>     std::vector<PetscInt>    coo_i{0, 0, 1, 2, 3, 4};
>     std::vector<PetscInt>    coo_j{0, 0, 1, 2, 3, 4};
>     std::vector<PetscScalar> coo_v{2, -1, 2, 3, 4, 5};
>
>     Mat A;
>     Vec B, X, U;
>     KSP ksp;
>     PC pc;
>     PetscReal norm; // norm of solution error
>     PetscInt its;
>
>     PetscInt size = 5;
>
>     PetscCall(PetscInitialize(&argc, &args, NULL, help));
>
>     // Create matrix
>     PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
>     PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size));
>     PetscCall(MatSetFromOptions(A));
>
>     // Populate matrix
>
>     // COO
>     PetscCall(MatSetPreallocationCOO(A, coo_v.size(), coo_i.data(), coo_j.data()));
>     PetscCall(MatSetValuesCOO(A, coo_v.data(), ADD_VALUES));
>
>     // MatSetValues for-loop
>     // for (size_t i = 0; i < coo_v.size(); i++)
>     //     PetscCall(MatSetValues(A, 1, &coo_i.at(i), 1, &coo_j.at(i), &coo_v.at(i), ADD_VALUES));
>
>     // PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
>     // PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
>
>     // View matrix
>     PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD));
>
>     // Create vector B
> 

Re: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix?

2023-06-20 Thread Diego Magela Lemos via petsc-users
Using all recommended approaches it worked!
Thank you all in advance.

Now, I'm facing problems when solving a linear system using each approach.

*COO approach*

Using MatSetPreallocationCOO and then MatSetValuesCOO, I'm able to fill in
the matrix when running with 1 MPI process.
But, if I run with more than one MPI process, the values entries are
multiplied by the number of MPI processes being used.
Is this behavior correct?

Consider the following code:

// fill_in_matrix.cc

static char help[] = "Fill in a parallel COO format sparse matrix.";

#include <petsc.h>
#include <vector>

int main(int argc, char **args)
{
    std::vector<PetscInt>    coo_i{0, 0, 1, 2, 3, 4};
    std::vector<PetscInt>    coo_j{0, 0, 1, 2, 3, 4};
    std::vector<PetscScalar> coo_v{2, -1, 2, 3, 4, 5};

    Mat A;

    PetscInt size = 5;

    PetscCall(PetscInitialize(&argc, &args, NULL, help));

    // Create matrix
    PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
    PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size));
    PetscCall(MatSetFromOptions(A));

    // Populate matrix
    PetscCall(MatSetPreallocationCOO(A, coo_v.size(), coo_i.data(), coo_j.data()));
    PetscCall(MatSetValuesCOO(A, coo_v.data(), ADD_VALUES));

    // View matrix
    PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD));

    PetscCall(MatDestroy(&A));

    PetscCall(PetscFinalize());
    return 0;
}

When running with petscmpiexec -n 1 ./fill_in_matrix, I got

Mat Object: 1 MPI process
>   type: seqaij
> row 0: (0, 1.)
> row 1: (1, 2.)
> row 2: (2, 3.)
> row 3: (3, 4.)
> row 4: (4, 5.)


Which is a correct result. But, when running it with petscmpiexec -n 2
./fill_in_matrix, I get

Mat Object: 2 MPI process
>   type: mpiaij
> row 0: (0, 2.)
> row 1: (1, 4.)
> row 2: (2, 6.)
> row 3: (3, 8.)
> row 4: (4, 10.)


The matrix entries are multiplied by 2, that is, the number of processes
used to execute the code.

*MatSetValues approach*

I obtain the same behavior when filling in the matrix by using MatSetValues

static char help[] = "Fill in a parallel COO format sparse matrix.";

// fill_in_matrix.cc

#include <petsc.h>
#include <vector>

int main(int argc, char **args)
{
    std::vector<PetscInt>    coo_i{0, 0, 1, 2, 3, 4};
    std::vector<PetscInt>    coo_j{0, 0, 1, 2, 3, 4};
    std::vector<PetscScalar> coo_v{2, -1, 2, 3, 4, 5};

    Mat A;
    PetscInt size = 5;

    PetscCall(PetscInitialize(&argc, &args, NULL, help));

    // Create matrix
    PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
    PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size));
    PetscCall(MatSetFromOptions(A));

    // Populate matrix
    for (size_t i = 0; i < coo_v.size(); i++)
        PetscCall(MatSetValues(A, 1, &coo_i.at(i), 1, &coo_j.at(i), &coo_v.at(i), ADD_VALUES));

    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

    // View matrix
    PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD));

    PetscCall(MatDestroy(&A));

    PetscCall(PetscFinalize());
    return 0;
}

When solving a linear system, I get the correct answer no matter the number
of MPI processes when using the MatSetValues approach.
The same is not true when using the COO approach, whose result is only correct
when using 1 MPI process.

static char help[] = "Fill in a parallel COO format sparse matrix and solve
a linear system.";

#include <petsc.h>
#include <vector>

int main(int argc, char **args)
{
    std::vector<PetscInt>    coo_i{0, 0, 1, 2, 3, 4};
    std::vector<PetscInt>    coo_j{0, 0, 1, 2, 3, 4};
    std::vector<PetscScalar> coo_v{2, -1, 2, 3, 4, 5};

    Mat A;
    Vec B, X, U;
    KSP ksp;
    PC pc;
    PetscReal norm; // norm of solution error
    PetscInt its;

    PetscInt size = 5;

    PetscCall(PetscInitialize(&argc, &args, NULL, help));

    // Create matrix
    PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
    PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, size, size));
    PetscCall(MatSetFromOptions(A));

    // Populate matrix

    // COO
    PetscCall(MatSetPreallocationCOO(A, coo_v.size(), coo_i.data(), coo_j.data()));
    PetscCall(MatSetValuesCOO(A, coo_v.data(), ADD_VALUES));

    // MatSetValues for-loop
    // for (size_t i = 0; i < coo_v.size(); i++)
    //     PetscCall(MatSetValues(A, 1, &coo_i.at(i), 1, &coo_j.at(i), &coo_v.at(i), ADD_VALUES));

    // PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    // PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

    // View matrix
    PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD));

    // Create vector B
    PetscCall(VecCreate(PETSC_COMM_WORLD, &B));
    PetscCall(VecSetSizes(B, PETSC_DECIDE, size));
    PetscCall(VecSetFromOptions(B));
    PetscCall(VecSetUp(B));

    // Populate vector
    PetscCall(VecSetValues(B, coo_i.size(), coo_i.data(), coo_v.data(), ADD_VALUES));
    PetscCall(VecAssemblyBegin(B));
    PetscCall(VecAssemblyEnd(B));

    // View vector
    PetscCall(VecView(B, PETSC_VIEWER_STDERR_WORLD));

    // Define solution and auxiliary vector
    PetscCall(VecDuplicate(B, &X));
    PetscCall(VecDuplicate(B, &U));
    PetscCall(VecSet(U, 1.0));

    // Create solver
    PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
    PetscCall(KSPSetOperators(ksp, A, 

Re: [petsc-users] Inquiry about the c++ destructor and PetscFinalize.

2023-06-20 Thread neil liu
Thanks a lot, Constantine. It works pretty well.



On Fri, Jun 16, 2023 at 6:52 PM Constantine Khrulev wrote:

> In your code the destructor of DMManage is called at the end of scope,
> i.e. after the PetscFinalize() call.
>
> You should be able to avoid this error by putting "DMManage objDMManage"
> in a code block to limit its scope and ensure that it is destroyed
> before PETSc is finalized:
>
> int main(int argc, char** argv) {
>   PetscFunctionBeginUser;
>   PetscCall(PetscInitialize(&argc, &argv, NULL, help));
>
>   {
>     DMManage objDMManage;
>   } // objDMManage is destroyed here
>
>   PetscFinalize();
>   return 0;
> }
>
> On 6/16/23 14:13, neil liu wrote:
> > Dear Petsc developers,
> >
> > I am trying to use Petsc with C++. And came across one issue.
> > Class DMManage has been defined, one default constructor and
> > destructor has been defined there.
> > The code has a runtime error, "double free or corruption". Finally I
> > found that, this is due to PetscFinalize. If I called explicitly the
> > destructor before this PetscFinalze, the error will disappear.
> >
> > Does that mean PetscFinalize do some work to destroy DM?
> >
> > Thanks,
> >
> > #include <petsc.h>
> > #include <petscdmplex.h>
> > #include <petscviewer.h>
> > #include <iostream>
> >
> > class DMManage{
> >   PetscSF distributionSF;
> > public:
> >   DM dm;
> >   DMManage();
> >   ~DMManage();
> > };
> >
> > DMManage::DMManage(){
> >   const char filename[] = "ParallelWaveguide.msh";
> >   DM dmDist;
> >   PetscViewer viewer;
> >   PetscViewerCreate(PETSC_COMM_WORLD, &viewer);
> >   PetscViewerSetType(viewer, PETSCVIEWERASCII);
> >   PetscViewerFileSetMode(viewer, FILE_MODE_READ);
> >   PetscViewerFileSetName(viewer, filename);
> >   DMPlexCreateGmsh(PETSC_COMM_WORLD, viewer, PETSC_TRUE, &dm);
> >   PetscViewerDestroy(&viewer);
> >   PetscInt overlap = 0;
> >   DMPlexDistribute(dm, overlap, &distributionSF, &dmDist);
> >   std::cout << dm << " " << dmDist << std::endl;
> >   if (dmDist) {
> >     DMDestroy(&dm);
> >     dm = dmDist;
> >   }
> >   DMDestroy(&dmDist);
> > }
> >
> > DMManage::~DMManage(){
> >   DMDestroy(&dm);
> > }
> >
> > int main(int argc, char** argv) {
> >   PetscFunctionBeginUser;
> >   PetscCall(PetscInitialize(&argc, &argv, NULL, help));
> >
> >   DMManage objDMManage;
> >
> >   PetscFinalize();
> >   return 0;
> > }
>
> --
> Constantine
>
>


Re: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix?

2023-06-20 Thread Barry Smith

  Since you have 6 entries that need to be added to the matrix, you will need
to call MatSetValues() six times, once for each entry.
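
As an untested sketch, using the rows/cols/values vectors quoted below and run
on a single rank (or with the triples split across ranks so nothing is added
twice), that loop could look like:

    // One MatSetValues() call per COO triple; ADD_VALUES sums the two (0,0) entries.
    for (size_t k = 0; k < values.size(); k++)
      PetscCall(MatSetValues(A, 1, &rows[k], 1, &cols[k], &values[k], ADD_VALUES));
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));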

> On Jun 20, 2023, at 11:06 AM, Matthew Knepley  wrote:
> 
> On Tue, Jun 20, 2023 at 10:55 AM Diego Magela Lemos via petsc-users <petsc-users@mcs.anl.gov> wrote:
>> Considering, for instance, the following COO sparse matrix format, with 
>> repeated indices:
>> 
>> std::vector<PetscInt>    rows{0, 0, 1, 2, 3, 4};
>> std::vector<PetscInt>    cols{0, 0, 1, 2, 3, 4};
>> std::vector<PetscScalar> values{2, -1, 2, 3, 4, 5};
>> 
>> that represents a 5x5 diagonal matrix A.
>> 
>> So far, the code that I have is:
>> 
>> // fill_in_matrix.cc
>> 
>> static char help[] = "Fill in a parallel COO format sparse matrix.";
>> 
>> #include <petsc.h>
>> #include <vector>
>> 
>> int main(int argc, char **args)
>> {
>>     Mat A;
>>     PetscInt m = 5, i, Istart, Iend;
>> 
>>     PetscCall(PetscInitialize(&argc, &args, NULL, help));
>> 
>>     PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
>>     PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m));
>>     PetscCall(MatSetFromOptions(A));
>>     PetscCall(MatSetUp(A));
>>     PetscCall(MatGetOwnershipRange(A, &Istart, &Iend));
>> 
>>     std::vector<PetscInt>    II{0, 0, 1, 2, 3, 4};
>>     std::vector<PetscInt>    JJ{0, 0, 1, 2, 3, 4};
>>     std::vector<PetscScalar> XX{2, -1, 2, 3, 4, 5};
>> 
>>     for (i = Istart; i < Iend; i++)
>>         PetscCall(MatSetValues(A, 1, &II.at(i), 1, &JJ.at(i), &XX.at(i), ADD_VALUES));
>> 
>>     PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
>>     PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
>>     PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD));
>> 
>>     PetscCall(MatDestroy(&A));
>>     PetscCall(PetscFinalize());
>>     return 0;
>> }
>> When running it with
>> 
>> petscmpiexec -n 4 ./fill_in_matrix
>> 
>> I get
>> 
>>  Mat Object: 4 MPI processes
>>   type: mpiaij
>> row 0: (0, 1.) 
>> row 1: (1, 2.) 
>> row 2: (2, 3.) 
>> row 3: (3, 4.) 
>> row 4:
>> 
>> Which is missing the entry of the last row.
>> 
>> What am I missing? Even better, which would be the best way to fill in this 
>> matrix?
> We have a new interface for this:
> 
>   https://petsc.org/main/manualpages/Mat/MatSetValuesCOO/
> 
>   Thanks,
> 
>  Matt
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ 



Re: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix?

2023-06-20 Thread Stefano Zampini
The loop should iterate on the number of entries of the array, not the
number of local rows
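
For the code quoted below, a corrected sketch of that loop (assuming it runs on
one rank, or that the triples are partitioned so each one is submitted exactly
once) would be:

    // Iterate over all nnz triples, not over the locally owned rows.
    for (size_t k = 0; k < XX.size(); k++)
      PetscCall(MatSetValues(A, 1, &II[k], 1, &JJ[k], &XX[k], ADD_VALUES));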

On Tue, Jun 20, 2023, 17:07 Matthew Knepley  wrote:

> On Tue, Jun 20, 2023 at 10:55 AM Diego Magela Lemos via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Considering, for instance, the following COO sparse matrix format, with
>> repeated indices:
>>
>> std::vector<PetscInt>    rows{0, 0, 1, 2, 3, 4};
>> std::vector<PetscInt>    cols{0, 0, 1, 2, 3, 4};
>> std::vector<PetscScalar> values{2, -1, 2, 3, 4, 5};
>>
>> that represents a 5x5 diagonal matrix A.
>>
>> So far, the code that I have is:
>>
>> // fill_in_matrix.cc
>> static char help[] = "Fill in a parallel COO format sparse matrix.";
>> #include <petsc.h>
>> #include <vector>
>> int main(int argc, char **args){
>>     Mat A;
>>     PetscInt m = 5, i, Istart, Iend;
>>
>>     PetscCall(PetscInitialize(&argc, &args, NULL, help));
>>
>>     PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
>>     PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m));
>>     PetscCall(MatSetFromOptions(A));
>>     PetscCall(MatSetUp(A));
>>     PetscCall(MatGetOwnershipRange(A, &Istart, &Iend));
>>
>>     std::vector<PetscInt>    II{0, 0, 1, 2, 3, 4};
>>     std::vector<PetscInt>    JJ{0, 0, 1, 2, 3, 4};
>>     std::vector<PetscScalar> XX{2, -1, 2, 3, 4, 5};
>>
>>     for (i = Istart; i < Iend; i++)
>>         PetscCall(MatSetValues(A, 1, &II.at(i), 1, &JJ.at(i), &XX.at(i), ADD_VALUES));
>>
>>     PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
>>     PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
>>     PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD));
>>
>>     PetscCall(MatDestroy(&A));
>>     PetscCall(PetscFinalize());
>>     return 0;
>> }
>>
>> When running it with
>>
>> petscmpiexec -n 4 ./fill_in_matrix
>>
>>
>> I get
>>
>>
>>  Mat Object: 4 MPI processes
>>
>>   type: mpiaij
>> row 0: (0, 1.)
>> row 1: (1, 2.)
>> row 2: (2, 3.)
>> row 3: (3, 4.)
>> row 4:
>>
>>
>> Which is missing the entry of the last row.
>>
>> What am I missing? Even better, which would be the best way to fill in this 
>> matrix?
>>
>> We have a new interface for this:
>
>   https://petsc.org/main/manualpages/Mat/MatSetValuesCOO/
>
>   Thanks,
>
>  Matt
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


Re: [petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix?

2023-06-20 Thread Matthew Knepley
On Tue, Jun 20, 2023 at 10:55 AM Diego Magela Lemos via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Considering, for instance, the following COO sparse matrix format, with
> repeated indices:
>
> std::vector<PetscInt>    rows{0, 0, 1, 2, 3, 4};
> std::vector<PetscInt>    cols{0, 0, 1, 2, 3, 4};
> std::vector<PetscScalar> values{2, -1, 2, 3, 4, 5};
>
> that represents a 5x5 diagonal matrix A.
>
> So far, the code that I have is:
>
> // fill_in_matrix.cc
> static char help[] = "Fill in a parallel COO format sparse matrix.";
> #include <petsc.h>
> #include <vector>
> int main(int argc, char **args){
>     Mat A;
>     PetscInt m = 5, i, Istart, Iend;
>
>     PetscCall(PetscInitialize(&argc, &args, NULL, help));
>
>     PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
>     PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m));
>     PetscCall(MatSetFromOptions(A));
>     PetscCall(MatSetUp(A));
>     PetscCall(MatGetOwnershipRange(A, &Istart, &Iend));
>
>     std::vector<PetscInt>    II{0, 0, 1, 2, 3, 4};
>     std::vector<PetscInt>    JJ{0, 0, 1, 2, 3, 4};
>     std::vector<PetscScalar> XX{2, -1, 2, 3, 4, 5};
>
>     for (i = Istart; i < Iend; i++)
>         PetscCall(MatSetValues(A, 1, &II.at(i), 1, &JJ.at(i), &XX.at(i), ADD_VALUES));
>
>     PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
>     PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
>     PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD));
>
>     PetscCall(MatDestroy(&A));
>     PetscCall(PetscFinalize());
>     return 0;
> }
>
> When running it with
>
> petscmpiexec -n 4 ./fill_in_matrix
>
>
> I get
>
>
>  Mat Object: 4 MPI processes
>
>   type: mpiaij
> row 0: (0, 1.)
> row 1: (1, 2.)
> row 2: (2, 3.)
> row 3: (3, 4.)
> row 4:
>
>
> Which is missing the entry of the last row.
>
> What am I missing? Even better, which would be the best way to fill in this 
> matrix?
>
> We have a new interface for this:

  https://petsc.org/main/manualpages/Mat/MatSetValuesCOO/

  Thanks,

 Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


[petsc-users] 2023 PETSc Annual Meeting slides and videos

2023-06-20 Thread Isaac, Toby via petsc-users

Two weeks ago we held the 2023 PETSc Annual Meeting on the campus of the 
Illinois Institute of Technology.  The meeting was a lot of fun: we had over 30 
speakers from industry, academia, and national laboratories present on their 
research and experiences with PETSc, including several minitutorials on 
recently added features of the library.

You can now find slides and videos from most of these presentations on the
meeting's permanent site.

Thanks to all who contributed and attended!  We will hold the annual meeting 
May 23 & 24, 2024 in Cologne, Germany.  Please let us know if you have any 
suggestions for this upcoming meeting.  We hope to see you there!

On behalf of the organizing committee,
Toby Isaac


[petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix?

2023-06-20 Thread Diego Magela Lemos via petsc-users
Considering, for instance, the following COO sparse matrix format, with
repeated indices:

std::vector<PetscInt>    rows{0, 0, 1, 2, 3, 4};
std::vector<PetscInt>    cols{0, 0, 1, 2, 3, 4};
std::vector<PetscScalar> values{2, -1, 2, 3, 4, 5};

that represents a 5x5 diagonal matrix A.

So far, the code that I have is:

// fill_in_matrix.cc
static char help[] = "Fill in a parallel COO format sparse matrix.";
#include <petsc.h>
#include <vector>
int main(int argc, char **args){
    Mat A;
    PetscInt m = 5, i, Istart, Iend;

    PetscCall(PetscInitialize(&argc, &args, NULL, help));

    PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
    PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, m, m));
    PetscCall(MatSetFromOptions(A));
    PetscCall(MatSetUp(A));
    PetscCall(MatGetOwnershipRange(A, &Istart, &Iend));

    std::vector<PetscInt>    II{0, 0, 1, 2, 3, 4};
    std::vector<PetscInt>    JJ{0, 0, 1, 2, 3, 4};
    std::vector<PetscScalar> XX{2, -1, 2, 3, 4, 5};

    for (i = Istart; i < Iend; i++)
        PetscCall(MatSetValues(A, 1, &II.at(i), 1, &JJ.at(i), &XX.at(i), ADD_VALUES));

    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatView(A, PETSC_VIEWER_STDERR_WORLD));

    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
}

When running it with

petscmpiexec -n 4 ./fill_in_matrix


I get


 Mat Object: 4 MPI processes

  type: mpiaij
row 0: (0, 1.)
row 1: (1, 2.)
row 2: (2, 3.)
row 3: (3, 4.)
row 4:


Which is missing the entry of the last row.

What am I missing? Even better, which would be the best way to fill in
this matrix?