I should mention how I am creating my dynamic sparsity pattern. The
structure is like this:
dsp.reinit(local_number_of_rows, number_of_columns); // the number of rows
is divided among processors
for (int i = 0; i < local_number_of_rows; ++i)
  for (int j = 0; j < number_of_columns; ++j)
    dsp.add(i, j); // record entry (i, j) wherever stuff_calculated is nonzero
This gives me a local dsp, and I thought the IndexSet add_range() call in my
earlier message would put this local dsp at the correct global indices.
On Sunday, December 20, 2020 at 10:53:33 AM UTC-6 Zachary Streeter wrote:
> Hi there,
>
> I am trying to build a parallel PETSc sparse matrix by first building a
> local sparsity pattern and then distributing that pattern to all processes,
> so that each process has the full sparsity pattern for the PETSc sparse
> matrix (I read this needs to be done, but please correct me if I am wrong).
> When I try to reinit the PETSc sparse matrix I get a segfault, so I am
> hunting down the problem.
>
> My understanding of “distribute_sparsity_pattern” is that it sends the
> local sparsity patterns to all other processes and saves the full global
> pattern in the dynamic sparsity pattern you pass in. So I figured that
> after calling this function, my local number of rows would equal the global
> number of rows. However, the number of rows is the same as before the call
> (i.e. the actual local number of rows), so I think I am not using this
> function correctly, and my PETSc sparse matrix doesn’t have the global
> dynamic sparsity pattern I think it must have, which results in a segfault.
>
> NOTE: The matrix is a square matrix with rows and columns of size “nbas *
> nchannels” and the rows are divided amongst the processes.
>
> Here is the code for distributing my locally built sparsity pattern dsp:
>
> IndexSet local_owned( nbas * nchannels * nbas * nchannels ); // allocate a
> global-sized index set
> local_owned.add_range( LocalStart(), LocalEnd() ); // from the first column
> of the first local row to the last column of the last local row
> SparsityTools::distribute_sparsity_pattern( dsp, local_owned, comm,
> local_owned );
>
> Here is the code for initializing my PETSc sparse matrix:
>
> std::vector<size_type> local_rows_per_process( num_procs ); // one entry
> per process
> std::vector<size_type> local_columns_per_process( num_procs, nbas *
> nchannels ); // columns are full length and rows are divided by num_procs
> for (int i = 0; i < num_procs; ++i)
> {
>   local_rows_per_process[ i ] = i * local_rows; // saw this in a test, but
> initially thought this should just be local_rows for each i
> }
>
> Matrix.reinit( comm, dsp, local_rows_per_process,
> local_columns_per_process, my_proc );
>
>
>
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see
https://groups.google.com/d/forum/dealii?hl=en
---
You received this message because you are subscribed to the Google Groups
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/dealii/ab9aff43-3eaf-4a58-861b-f8c600fbffa5n%40googlegroups.com.