My project is in quantum scattering, and I would like some operators to be 
distributed PETSc objects.  So inside my OneBodyHamiltonianOperator class 
(for example), I would like to create a PETScWrappers::MPI::SparseMatrix 
and then use SLEPc to solve for the ground state and excited states.

I have added comments throughout to show my intent.

Here is the header file OneBodyHamiltonianOperator.h
-----------------------------------------------------------------------------------------
#include <deal.II/base/index_set.h>
#include <deal.II/base/types.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/petsc_sparse_matrix.h>

#include <cstdint>

namespace quantumScattering {
class OneBodyHamiltonianOperator {
 public:
  /**
   * Declare type for container size (using the deal.II type for deal.II
   * objects).
   */
  using size_type = dealii::types::global_dof_index;

  OneBodyHamiltonianOperator(const dealii::IndexSet &a_local_row_set,
                             const uint32_t a_my_proc,
                             const uint32_t a_num_procs);

  /// Destructor
  ~OneBodyHamiltonianOperator();

 private:
  dealii::PETScWrappers::MPI::SparseMatrix m_H1;
  // want to use this for reinit() later, for better performance
  dealii::DynamicSparsityPattern m_dynamic_sparsity_pattern;
};

}  // namespace quantumScattering

Here is the OneBodyHamiltonianOperator.cc file 
---------------------------------------------------------------------------------------------------
#include "OneBodyHamiltonianOperator.h"

namespace quantumScattering {

OneBodyHamiltonianOperator::OneBodyHamiltonianOperator(
    const dealii::IndexSet &a_local_row_set, const uint32_t a_my_proc,
    const uint32_t a_num_procs) {
  // initialize an IndexSet with the global row size
  dealii::IndexSet local_owned(a_local_row_set.size());
  // add the locally owned rows as one contiguous range
  local_owned.add_range(*a_local_row_set.begin(),
                        *a_local_row_set.begin() +
                            a_local_row_set.n_elements());

  // not used here, but this is the goal; need to get reinit() working first
  m_dynamic_sparsity_pattern.reinit(a_local_row_set.size(),
                                    a_local_row_set.size(), local_owned);

  int guess = 50; // arbitrary guess for the number of non-zeros per row

  // The matrix is square, of global size a_local_row_set.size() x
  // a_local_row_set.size().  The idea is to parallelize over the rows, so
  // the local block should have a_local_row_set.n_elements() rows.
  m_H1.reinit(MPI_COMM_WORLD,
              a_local_row_set.size(),       // global rows
              a_local_row_set.size(),       // global columns
              a_local_row_set.n_elements(), // local rows
              a_local_row_set.size(),       // local columns
              guess);
}

OneBodyHamiltonianOperator::~OneBodyHamiltonianOperator() {}

}  // namespace quantumScattering
 

Here is a test:
----------------------------------------------------------------------------------------------------------------------------
#include "OneBodyHamiltonianOperator.h"

#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>

using namespace dealii;

void
test(const int &n_proc, const int &my_proc, const ConditionalOStream &pcout)
{
  // arbitrary test variables; all operators are (nbas*nchannels) x
  // (nbas*nchannels) and just parallelize the rows
  auto     nbas      = 64;
  auto     nchannels = 2;
  // this sets how all operators will be parallelized by rows
  IndexSet local_row_set =
    Utilities::create_evenly_distributed_partitioning(my_proc,
                                                      n_proc,
                                                      nbas * nchannels);

  quantumScattering::OneBodyHamiltonianOperator H1(local_row_set, my_proc,
                                                   n_proc);
}

int
main(int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  MPI_Comm   comm(MPI_COMM_WORLD);
  const auto my_proc = Utilities::MPI::this_mpi_process(comm);
  const auto n_proc  = Utilities::MPI::n_mpi_processes(comm);

  ConditionalOStream pcout(std::cout, (my_proc == 0));

  test(n_proc, my_proc, pcout);

  return 0;
}

This gave me the same error, and it's fairly bare bones, so I hope it is an 
easy fix.  The real case involves building a distributed dynamic sparsity 
pattern and using it to reinit the SparseMatrix.  Can I create a parallel 
sparse matrix from a distributed dynamic sparsity pattern, or must I first 
gather all elements of the dynamic sparsity pattern locally?  (That's a 
separate but important question.)

Please let me know if my test case is lacking in clarity, etc.  I greatly 
appreciate the guidance!

Cheers,

Zachary

On Monday, January 4, 2021 at 3:49:40 PM UTC-6 Wolfgang Bangerth wrote:

>
> Zachary,
>
> > I am trying to debug this strange behavior.  I am trying to build a
> > PETSC sparse parallel matrix using 4 processors.  This gives me 32
> > local rows (so 128 global rows).  But when I pass the local_num_of_rows
> > variable into the reinit function, this is the PETSC error I get:
> >
> > PETSC ERROR: Nonconforming object sizes
> > PETSC ERROR: Sum of local lengths 512 does not equal global length 128,
> > my local length 128
> >
> > Here is my reinit function and necessary vectors:
> >
> > std::vector<size_type> local_rows_per_process(num_procs, local_num_rows);
> > std::vector<size_type> local_columns_per_process(num_procs, number_of_columns);
> >
> > spm.reinit(MPI_COMM_WORLD, dsp, local_rows_per_process,
> >            local_columns_per_process, my_proc);
> >
> > *The number of local rows for this example is local_num_rows=32; I
> > printed to check.  Though when it is passed into the reinit function,
> > it looks like it is using the global number of rows.*
> >
> > I get the same error from the constructor that doesn't use the dynamic
> > sparsity pattern:
> >
> > spm.reinit(MPI_COMM_WORLD, global_row_size, global_column_size,
> >            local_row_size, local_column_size, number_of_non_zeros);
> >
> > Just to clarify this constructor: what are "local_rows" and
> > "local_columns" here?  The documentation just says see the class
> > documentation.  I see where the 4th constructor uses
> > "local_rows_per_process", and this means how many rows all the other
> > processes own (and the same for the columns), so I thought I had that
> > figured out for my constructor with the dynamic sparsity pattern, but
> > maybe not.  For this constructor, I just used the local number of rows
> > and columns.
> >
> > Can someone please show me what they would do to debug this situation?
>
> Can you come up with a small, self-contained test case that we can run
> to see and debug what exactly you are doing?
>
> Best
> WB
>
> --
> ------------------------------------------------------------------------
> Wolfgang Bangerth email: [email protected]
> www: http://www.math.colostate.edu/~bangerth/
>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en