Zachary,
I haven't tried to debug your code snippets, but I would suggest taking a look
at how step-40 sets up and builds the sparsity pattern. There, we call
make_sparsity_pattern(), which really just adds certain entries to the
sparsity pattern, like you do. So if you understand how the setup works in
step-40, you should be able to understand what you need to do in your case.
Best
W.
On 12/20/20 10:21 AM, Zachary Streeter wrote:
I should mention how I am creating my dynamic sparsity pattern. The structure
is like this:
dsp.reinit(local_number_of_rows, number_of_columns); // the number of rows is
divided among processors
for (unsigned int i = 0; i < local_number_of_rows; ++i)
  for (unsigned int j = 0; j < number_of_columns; ++j)
    dsp.add(i, j); // a sparsity pattern records entry positions, not values
This gives me a local dsp, and I thought the IndexSet add_range() above would
map this local dsp to the correct global index set.
On Sunday, December 20, 2020 at 10:53:33 AM UTC-6 Zachary Streeter wrote:
Hi there,
I am trying to build a parallel PETSc sparse matrix by building a local
sparsity pattern and subsequently distributing the pattern to all
processes, so that I then have the full sparsity pattern for the PETSc sparse
matrix (I read this needs to be done, but please correct me if I am wrong).
When I try to reinit the PETSc sparse matrix I get a segfault, so I am
hunting down the problem.
My understanding of "distribute_sparsity_pattern" is that it sends local
sparsity patterns to all other processes and saves the full global pattern
in the dynamic sparsity pattern you pass in. So I figured that, after I
call this function, my local number of rows should equal the global number of
rows. However, the number of rows is the same as before I called the function
(i.e., the actual local number of rows), so I think I am not using the
function correctly, and my PETSc sparse matrix doesn't have the global
dynamic sparsity pattern I think it must have, which results in a segfault.
NOTE: The matrix is a square matrix with rows and columns of size "nbas *
nchannels", and the rows are divided among the processes.
Here is the code for distributing my locally built sparsity pattern dsp:
IndexSet local_owned( nbas * nchannels * nbas * nchannels ); // allocate
global-sized index set
local_owned.add_range( LocalStart(), LocalEnd() ); // start of local row,
first column, through end of last local row, last column
SparsityTools::distribute_sparsity_pattern( dsp, local_owned, comm,
local_owned );
Here is the code for initializing my PETSc sparse matrix:
std::vector<size_type> local_rows_per_process( num_procs ); // allocate a
vector of length number of processes
std::vector<size_type> local_columns_per_process( num_procs, nbas *
nchannels ); // columns are full length and rows are divided by num_procs
for (unsigned int i = 0; i < num_procs; ++i)
  {
    local_rows_per_process[i] = i * local_rows; // saw this in a test, but
    // initially thought this should just be local_rows for each i
  }
Matrix.reinit( comm, dsp, local_rows_per_process,
local_columns_per_process, my_proc );
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see
https://groups.google.com/d/forum/dealii?hl=en
---
You received this message because you are subscribed to the Google Groups
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to [email protected]
<mailto:[email protected]>.
To view this discussion on the web visit
https://groups.google.com/d/msgid/dealii/ab9aff43-3eaf-4a58-861b-f8c600fbffa5n%40googlegroups.com.
--
------------------------------------------------------------------------
Wolfgang Bangerth email: [email protected]
www: http://www.math.colostate.edu/~bangerth/