W.,

Ah yes, you are correct. In my test case every process happened to have the 
same number of rows, but in general I would need a gather to communicate the 
per-process row counts whenever the total number of rows is not divisible by 
the number of processors.  I will use the 4th reinit() function to avoid 
needing to do this, thank you!
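
(For the record, the gather I had in mind was something like the rough sketch 
below; it assumes deal.II's Utilities::MPI::all_gather, and 
local_rows_per_process is just a name I made up:)

  // Rough sketch of the gather I was imagining (now unnecessary with the
  // 4th reinit() function): collect every process's local row count so a
  // row total not divisible by the number of processes is handled.
  const std::vector<dealii::IndexSet::size_type> local_rows_per_process =
    dealii::Utilities::MPI::all_gather(MPI_COMM_WORLD,
                                       a_local_row_set.n_elements());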

Using the same number of local columns as local rows in the reinit() functions 
is still puzzling to me, so let me be clear about my desired layout.

I have a matrix that is total_num_rows X total_num_columns.  I would like to 
partition it across processors only along the row dimension, so I would expect 
the local layout to be ( total_num_rows / num_procs ) X total_num_columns, 
which is why I was trying to use size() for the local column count.

Here is my code again with hopefully better comments; the loop structure that 
fills the dynamic_sparsity_pattern should make the intent clear.  I think the 
3rd argument in the reinit() call for the dynamic sparsity pattern is wrong if 
I want these dimensions.  A sketch of what I am picturing comes first, then 
the full code.
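
This hypothetical sketch shows the rectangular setup I have in mind 
(total_num_rows and total_num_columns are placeholder names, and I am assuming 
the columns would also get an evenly distributed IndexSet to hand to the 4th 
reinit() function):

  const unsigned int total_num_rows    = 128; // placeholder sizes
  const unsigned int total_num_columns = 64;

  const IndexSet local_rows =
    Utilities::create_evenly_distributed_partitioning(my_proc,
                                                      n_proc,
                                                      total_num_rows);
  const IndexSet local_columns =
    Utilities::create_evenly_distributed_partitioning(my_proc,
                                                      n_proc,
                                                      total_num_columns);

  // The global size is total_num_rows X total_num_columns; the IndexSet
  // only restricts which rows are stored on this process.
  DynamicSparsityPattern dsp(total_num_rows, total_num_columns, local_rows);

  // Every process passes the rows and the columns it owns.
  PETScWrappers::MPI::SparseMatrix H1;
  H1.reinit(local_rows, local_columns, dsp, MPI_COMM_WORLD);

And the full code: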

#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>

#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/petsc_sparse_matrix.h>

#include <cstdint> // for uint32_t

using namespace dealii;

class OneBodyHamiltonianOperator
{
public:
  /**
   * Declare type for container size.
   */
  using size_type = dealii::types::global_dof_index;

  OneBodyHamiltonianOperator(const dealii::IndexSet &a_local_row_set,
                             const uint32_t          a_my_proc,
                             const uint32_t          a_num_procs);

  /// Destructor
  ~OneBodyHamiltonianOperator();

private:
  dealii::PETScWrappers::MPI::SparseMatrix m_H1;
  dealii::DynamicSparsityPattern           m_dynamic_sparsity_pattern;
};

OneBodyHamiltonianOperator::OneBodyHamiltonianOperator(
  const dealii::IndexSet &a_local_row_set,
  const uint32_t          a_my_proc,
  const uint32_t          a_num_procs)
{
  m_dynamic_sparsity_pattern.reinit(
    a_local_row_set.size(),
    a_local_row_set.size(), // square matrix
    a_local_row_set);       // this may be what's wrong if I want
                            // a_local_row_set.n_elements() X a_local_row_set.size()

  // The row dimension has a_local_row_set.n_elements() locally owned entries.
  for (dealii::IndexSet::ElementIterator i = a_local_row_set.begin();
       i != a_local_row_set.end();
       ++i)
    {
      // The column dimension spans the full a_local_row_set.size() extent.
      for (uint32_t j = 0; j < a_local_row_set.size(); ++j)
        {
          auto ith = *i;
          m_dynamic_sparsity_pattern.add(ith, j);
        }
    }

  m_H1.reinit(a_local_row_set,
              a_local_row_set, // I would think the number of local columns
                               // should be the full dimension, because I am
                               // only partitioning along the row dimension
              m_dynamic_sparsity_pattern,
              MPI_COMM_WORLD);

  for (dealii::IndexSet::ElementIterator i = a_local_row_set.begin();
       i != a_local_row_set.end();
       ++i)
    {
      auto ith = *i;
      for (uint32_t j = 0; j < m_dynamic_sparsity_pattern.row_length(ith); ++j)
        {
          m_H1.add(ith,
                   m_dynamic_sparsity_pattern.column_number(ith, j),
                   1); // added 1 for test value
        }
    }

  m_H1.compress(dealii::VectorOperation::add);
}

OneBodyHamiltonianOperator::~OneBodyHamiltonianOperator()
{}

void
test(const unsigned int        n_proc,
     const unsigned int        my_proc,
     const ConditionalOStream &pcout)
{
  // Have matrices with (nbas * nchannels) X (nbas * nchannels)
  auto     nbas          = 64;
  auto     nchannels     = 2;
  IndexSet local_row_set = Utilities::create_evenly_distributed_partitioning(
    my_proc,
    n_proc,
    nbas * nchannels); // partition on num_rows (i.e. nbas * nchannels)

  OneBodyHamiltonianOperator H1(local_row_set, my_proc, n_proc);
}

int
main(int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  MPI_Comm   comm(MPI_COMM_WORLD);
  const auto my_proc = Utilities::MPI::this_mpi_process(comm);
  const auto n_proc  = Utilities::MPI::n_mpi_processes(comm);

  ConditionalOStream pcout(std::cout, (my_proc == 0));

  test(n_proc, my_proc, pcout);

  return 0;
}


> On Jan 5, 2021, at 11:03 PM, Wolfgang Bangerth <bange...@colostate.edu> wrote:
> 
> On 1/5/21 11:32 AM, Zachary Streeter wrote:
>> Yes, I want to use the constructor with the dynamic sparsity pattern.  So 
>> with your suggestion in mind, would that just be the following:
>> dealii::IndexSet local_owned(a_local_row_set.size());
>>   local_owned.add_range(*a_local_row_set.begin(),
>>                         *a_local_row_set.begin() +
>>                           a_local_row_set.n_elements());
>>   m_dynamic_sparsity_pattern.reinit(a_local_row_set.size(),
>>                                     a_local_row_set.size(),
>>                                     local_owned);
>> std::vector<size_type> local_rows_per_process(num_procs, 
>> a_local_row_set.n_elements() );
> 
> This creates a vector of size num_procs in which all entries are equal to 
> a_local_row_set.n_elements(). But why would all processors have the same 
> number of rows? And are you sure that a_local_row_set.n_elements() is the 
> same number on all processors anyway?
> 
>> std::vector<size_type> local_columns_per_process(num_procs, 
>> a_local_row_set.n_elements() ); // before used a_local_row_set.size() here!
>> m_H1.reinit(MPI_COMM_WORLD, m_dynamic_sparsity_pattern, 
>> local_rows_per_process, local_columns_per_process, my_proc);
> 
> I would try to stick with the fourth reinit() function here:
> https://dealii.org/developer/doxygen/deal.II/classPETScWrappers_1_1MPI_1_1SparseMatrix.html#a645ab9f99494a47ebb6492e92e707130
> It's the simplest to use because every processor only has to pass an IndexSet 
> object that describes which rows (and columns) it owns.
> 
> Best
> W.
> 
> -- 
> ------------------------------------------------------------------------
> Wolfgang Bangerth          email:                 bange...@colostate.edu
>                           www: http://www.math.colostate.edu/~bangerth/
> 
