Gabriel,

1. Regarding the initialization of PETSc::MPI::BlockSparseMatrix:

I have used the IndexSet::split_by_block() function and this indeed works well. Thanks for the suggestion!

Unfortunately, I have encountered another issue. The PETSc::MPI::BlockSparseMatrix must be partitioned into contiguous chunks; for that reason, renumbering the DoFs by component in order to define blocks for the displacement DoFs and the electric potential DoFs fails with the error "PETSc only supports contiguous row/column ranges". I know that Trilinos allows non-contiguous partitioning (according to what is written in step-32), but I need PETSc to run the eigenvalue problem using SLEPc. Do you have any ideas how this issue could be handled?

We are all using PETSc all the time, so that can't be true.

In the end, you need a block matrix each block of which is partitioned between processors. There are numerous tutorial programs that do exactly this. Why don't you copy the way step-32, for example, enumerates and partitions degrees of freedom? There, all that is done is to call

    stokes_dof_handler.distribute_dofs(stokes_fe);

    std::vector<unsigned int> stokes_sub_blocks(dim + 1, 0);
    stokes_sub_blocks[dim] = 1;
    DoFRenumbering::component_wise(stokes_dof_handler, stokes_sub_blocks);

and that is good enough to have things partitioned into blocks, and have everything within each block be contiguous.


2. Clarification regarding locally_owned_dofs_per_component(), which I mentioned in my previous post:

I created a small working example to demonstrate what I described in the previous post (see attached). I ran it using "mpirun -np 2 test_dofs" in debug mode. Please check the attached image for the output.

This shows that the function locally_owned_dofs_per_component() divides the DoFs correctly per component but not per processor. According to the documentation, the union of the locally owned DoFs over all components should equal the set of locally owned DoFs, which is not the case in this example.

The problem here is that you are using a sequential triangulation. On every processor, the DoFHandler owns all DoFs and that's what you see when you output the DoFs per component: every dof is owned by every process, because they simply don't know about each other. (The fact that you partition, i.e. set subdomain_ids, on the triangulation doesn't matter: You are using the sequential triangulation class, it knows nothing of other processors.)

On the other hand, you call DoFTools::locally_owned_dofs_per_subdomain which simply counts DoFs per subdomain -- but for a sequential triangulation, the subdomain_id of every cell is just a number without any special meaning.
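For the locally owned sets to actually differ between processes, the mesh has to be a parallel::distributed::Triangulation rather than a sequential one. A minimal sketch of that setup (assuming 2D, a Q1 element, and MPI already initialized; this fragment is not compilable on its own):

```cpp
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/grid/grid_generator.h>

// Each process holds only its own part of this mesh:
parallel::distributed::Triangulation<2> triangulation(MPI_COMM_WORLD);
GridGenerator::hyper_cube(triangulation);
triangulation.refine_global(3);

FE_Q<2>       fe(1);
DoFHandler<2> dof_handler(triangulation);
dof_handler.distribute_dofs(fe);

// Now this is a proper subset on each process, and the per-component
// sets partition it as the documentation describes:
const IndexSet locally_owned = dof_handler.locally_owned_dofs();
```

With this triangulation class, partitioning happens automatically; there is no need to set subdomain_ids by hand.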

Best
 W.

--
------------------------------------------------------------------------
Wolfgang Bangerth          email:                 bange...@colostate.edu
                           www: http://www.math.colostate.edu/~bangerth/
