Please ignore my last post; I made a mistake there.
Attached is a revised version that better illustrates the problem.

Thanks,
Feimi

On Saturday, March 10, 2018 at 4:48:23 AM UTC-5, Wolfgang Bangerth wrote:
>
> On 03/08/2018 02:55 PM, Feimi Yu wrote: 
> > 
> > The problem is that I still encounter the "out of range" error even when 
> > I iterate only over the locally owned rows. I debugged my code, checked 
> > the library source, and found where the problem is: the end iterator of 
> > each row points to the first entry of the next row (or to the true end if 
> > the row is the last one). However, with multiple ranks, when one asks for 
> > the end iterator of the last LOCAL row, that iterator points into the 
> > next row, which does not belong to the local range. 
> > To give a concrete example: I have 1223 rows, where rank 0 owns rows 0 to 
> > 633 and rank 1 owns rows 634 to 1222. When I iterate over the entries on 
> > rank 1, everything works as expected. When I iterate over the entries on 
> > rank 0 and reach row 633, I call matrix.end(633) -- but this end is 
> > essentially matrix.begin(634)! So PETSc tells me I am out of range, 
> > because rank 0 has no information about row 634. 
>
> Ah, yes, I see how this can happen. 
>
> Do you think you could come up with a small testcase that shows this? This 
> would make it much easier to debug the problem. In essence, all you'd have 
> to do is create a matrix on two processors, put a few values into it (say, 
> one value per row), and create the iterators. The whole thing doesn't have 
> to do anything useful -- no need to create a mesh, assemble a linear system 
> using FEValues, etc. -- it only has to demonstrate the exception you get. 
>
> Best 
>   W. 
>
> -- 
> ------------------------------------------------------------------------ 
> Wolfgang Bangerth          email:                 bang...@colostate.edu 
>                             www: http://www.math.colostate.edu/~bangerth/ 
>
>
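
P.S. To spell out the pattern the attached test reproduces: the failing part
is the loop over the locally owned rows at the end of test(). A minimal
sketch (not a complete program; "mat" is the PETScWrappers::MPI::SparseMatrix
set up in the attached test):

  const auto local_rows = mat.local_range();
  for (auto r = local_rows.first; r < local_rows.second; ++r)
    for (auto itr = mat.begin(r); itr != mat.end(r); ++itr)
      {
        // empty body; on two ranks, rank 0 aborts once r is its last locally
        // owned row, because mat.end(r) refers to the first entry of row r+1,
        // which rank 0 does not own ("Argument out of range").
      }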

// ---------------------------------------------------------------------
//
// Copyright (C) 2016 - 2017 by the deal.II authors
//
// This file is part of the deal.II library.
//
// The deal.II library is free software; you can use it, redistribute
// it, and/or modify it under the terms of the GNU Lesser General
// Public License as published by the Free Software Foundation; either
// version 2.1 of the License, or (at your option) any later version.
// The full text of the license can be found in the file LICENSE at
// the top level of the deal.II distribution.
//
// ---------------------------------------------------------------------


// check re-initializing a preconditioner (parallel version) and, at the end,
// iterating over the entries of the locally owned rows of the matrix

#include "../tests.h"
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/petsc_parallel_sparse_matrix.h>
#include <deal.II/lac/petsc_precondition.h>
#include <deal.II/base/index_set.h>
#include <iostream>
#include <vector>

template <class PRE>
void test ()
{
  unsigned int myid = Utilities::MPI::this_mpi_process (MPI_COMM_WORLD);
  unsigned int numproc = Utilities::MPI::n_mpi_processes (MPI_COMM_WORLD);

  deallog << numproc << std::endl;

  // each processor owns 2 indices; in addition, element 0 (the first)
  // is among the locally relevant (ghost) indices on every processor

  IndexSet local_active(numproc*2);
  local_active.add_range(myid*2,myid*2+2);
  IndexSet local_relevant= local_active;
  local_relevant.add_range(0,1);

  DynamicSparsityPattern csp (local_relevant);

  for (unsigned int i=0; i<2*numproc; ++i)
    if (local_relevant.is_element(i))
      csp.add(i,i);

  // one off-diagonal entry that couples the first two rows
  csp.add(0,1);


  PETScWrappers::MPI::SparseMatrix mat;
  mat.reinit (local_active, local_active, csp, MPI_COMM_WORLD);

  Assert(mat.n()==numproc*2, ExcInternalError());
  Assert(mat.m()==numproc*2, ExcInternalError());

  // set local values
  mat.set(myid*2,myid*2, 1.0+myid*2.0);
  mat.set(myid*2+1,myid*2+1, 2.0+myid*2.0);

  mat.compress(VectorOperation::insert);

  {
    PETScWrappers::MPI::Vector src, dst;
    src.reinit(local_active, MPI_COMM_WORLD);
    dst.reinit(local_active, MPI_COMM_WORLD);
    src(myid*2) = 1.0;
    src.compress(VectorOperation::insert);

    PRE pre;
    pre.initialize(mat);
    pre.vmult(dst, src);
    dst.print(deallog.get_file_stream());

    mat.add(0,0,1.0);
    mat.compress(VectorOperation::add);

    pre.initialize(mat);
    pre.vmult(dst, src);
    dst.print(deallog.get_file_stream());
  }

  // Test the iterators over the locally owned rows. This is where the
  // problem described above shows up: for the last locally owned row r,
  // mat.end(r) refers to the first entry of row r+1, which this rank does
  // not own, and PETSc reports an "Argument out of range" error.
  unsigned int start_row = mat.local_range().first;
  unsigned int end_row = mat.local_range().second;
  for (auto r = start_row; r < end_row; ++r)
  {
    for (auto itr = mat.begin(r); itr != mat.end(r); ++itr)
    {
      // do nothing, just walk through the entries of row r
    }
  }
  
  if (myid==0)
    deallog << "OK" << std::endl;
}


int main (int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization (argc, argv, 1);
  MPILogInitAll log;

  test<PETScWrappers::PreconditionJacobi> ();
  test<PETScWrappers::PreconditionBlockJacobi> ();
  test<PETScWrappers::PreconditionEisenstat> ();
  test<PETScWrappers::PreconditionBoomerAMG> ();
  test<PETScWrappers::PreconditionParaSails> ();
  test<PETScWrappers::PreconditionNone> ();
}
