Re: [deal.II] Re: Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-21 Thread Feimi Yu
Got it. Thank you so much! Thanks, Feimi On Wednesday, March 21, 2018 at 10:51:24 AM UTC-4, Wolfgang Bangerth wrote: > > On 03/18/2018 04:41 PM, Feimi Yu wrote: > > Please ignore my last post. I made a mistake there. > > Attached is the revised version to better illustrate the problem.

Re: [deal.II] Re: Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-21 Thread Wolfgang Bangerth
On 03/18/2018 04:41 PM, Feimi Yu wrote: Please ignore my last post. I made a mistake there. Attached is the revised version to better illustrate the problem. Patch is now here: https://github.com/dealii/dealii/pull/6087 Best W.

Re: [deal.II] Re: Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-20 Thread Wolfgang Bangerth
On 03/18/2018 04:41 PM, Feimi Yu wrote: Please ignore my last post. I made a mistake there. Attached is the revised version to better illustrate the problem. Great, much appreciated -- a most excellent testcase! I can reproduce the problem and have something that may work. Will finish this

Re: [deal.II] Re: Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-18 Thread Feimi Yu
Please ignore my last post. I made a mistake there. Attached is the revised version to better illustrate the problem. Thanks, Feimi On Saturday, March 10, 2018 at 4:48:23 AM UTC-5, Wolfgang Bangerth wrote: > > On 03/08/2018 02:55 PM, Feimi Yu wrote: > > > > The problem is that I still

Re: [deal.II] Re: Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-18 Thread Feimi Yu
I'm sorry for the late reply. Here is my small testcase: I just added one line that calls the end iterator of the last local row to the built-in test case reinit_preconditioner_01.cc under the tests/petsc folder: auto itr = mat.end(mat.local_range().second); It produces the same error as I mentioned
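
For reference, a minimal sketch of what that added check might look like in context; the matrix setup from reinit_preconditioner_01.cc is assumed and elided, the function name is mine, and include names follow recent deal.II releases:

    #include <deal.II/lac/petsc_sparse_matrix.h>

    using namespace dealii;

    void check_end_iterator(const PETScWrappers::MPI::SparseMatrix &mat)
    {
      // local_range() returns the half-open interval [first, second) of
      // rows stored on this MPI rank.
      const auto range = mat.local_range();
      // The one-line addition described above: request an end iterator at
      // the upper bound of the locally stored rows. This is what triggered
      // the reported "out of range" error.
      auto itr = mat.end(range.second);
      (void)itr; // only the construction of the iterator matters here
    }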

Re: [deal.II] Re: Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-10 Thread Wolfgang Bangerth
On 03/08/2018 02:55 PM, Feimi Yu wrote: The problem is that I still encounter the "out of range" problem even when I iterate over the local rows. I debugged my code, checked the source code, and found where the problem is: the end iterator of each row points to the first entry of
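
To make the row-wise pattern under discussion concrete, here is a sketch; the function and matrix names are mine, and the matrix is assumed to be assembled:

    #include <deal.II/lac/petsc_sparse_matrix.h>

    using namespace dealii;

    void visit_locally_stored_entries(const PETScWrappers::MPI::SparseMatrix &matrix)
    {
      const auto range = matrix.local_range();
      for (auto row = range.first; row < range.second; ++row)
        for (auto entry = matrix.begin(row); entry != matrix.end(row); ++entry)
          {
            // entry->row(), entry->column() and entry->value() give the
            // locally stored data. As described above, the end iterator of
            // a row is represented by the first entry of the following row,
            // which is why the last locally owned row runs into the
            // "out of range" check.
            const auto value = entry->value();
            (void)value;
          }
    }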

Re: [deal.II] Re: Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-08 Thread Feimi Yu
Hi Wolfgang, Fortunately, I managed to solve that problem. I found that every single operation on the iterator needs to call row_length(), which requires the matrix to be in an assembled state, and apparently the set() operation breaks that state. My solution is to iterate and cache the rows,
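
A rough sketch of the cache-then-modify idea described above; the container layout and the absolute-value update are placeholders of mine, and only the ordering (read everything first, then set(), then one compress()) reflects the description:

    #include <deal.II/lac/petsc_sparse_matrix.h>

    #include <cmath>
    #include <tuple>
    #include <vector>

    using namespace dealii;

    void cache_rows_then_set(PETScWrappers::MPI::SparseMatrix &matrix)
    {
      using size_type = types::global_dof_index;
      std::vector<std::tuple<size_type, size_type, double>> cached;

      // Read-only pass: the matrix stays assembled, so the iterator
      // operations (which internally call row_length()) remain valid.
      const auto range = matrix.local_range();
      for (auto row = range.first; row < range.second; ++row)
        for (auto entry = matrix.begin(row); entry != matrix.end(row); ++entry)
          cached.emplace_back(entry->row(), entry->column(),
                              std::abs(entry->value()));

      // Write pass: only now is the matrix modified, so no iterator ever
      // sees it in an unassembled state.
      for (const auto &t : cached)
        matrix.set(std::get<0>(t), std::get<1>(t), std::get<2>(t));
      matrix.compress(VectorOperation::insert);
    }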

Re: [deal.II] Re: Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-07 Thread Wolfgang Bangerth
This time I used VectorOperation::insert and the memory error that I posted before didn't happen. OK, so that then clearly helps :-) However, if I put the compress function after the loop, it only sets one entry on each rank and then throws the exception "Object is in wrong state, not

Re: [deal.II] Re: Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-06 Thread Feimi Yu
Hi Wolfgang, This time I used VectorOperation::insert and the memory error that I posted before didn't happen. However, if I put the compress function after the loop, it only sets one entry on each rank and then throws the exception "Object is in wrong state, not for unassembled matrix."
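
A sketch of the failing pattern as described above, assuming the values are set on the same matrix that is being iterated (the absolute-value update is a placeholder):

    #include <deal.II/lac/petsc_sparse_matrix.h>

    #include <cmath>

    using namespace dealii;

    void set_while_iterating(PETScWrappers::MPI::SparseMatrix &matrix)
    {
      const auto range = matrix.local_range();
      for (auto row = range.first; row < range.second; ++row)
        for (auto entry = matrix.begin(row); entry != matrix.end(row); ++entry)
          // The first set() puts the PETSc matrix into an unassembled
          // state; the next iterator operation then needs an assembled
          // matrix again, which matches the "Object is in wrong state"
          // exception quoted above.
          matrix.set(entry->row(), entry->column(), std::abs(entry->value()));
      // compress() deferred to after the loop, as described in the post.
      matrix.compress(VectorOperation::insert);
    }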

Re: [deal.II] Re: Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-06 Thread Wolfgang Bangerth
On 03/05/2018 02:54 PM, Feimi Yu wrote: I changed my strategy to use the set(r, c, v) function to set the values so that I can use the const iterators. I also called compress after every add: for (auto r = Abs_A_matrix->block(0, 0).local_range().first; r < Abs_A_matrix->block(0, 0).local_range().second; ++r) {

[deal.II] Re: Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-05 Thread Feimi Yu
Second update (sorry for so many updates): I changed my strategy to use the set(r, c, v) function to set the values so that I can use the const iterators. I also called compress after every add: for (auto r = Abs_A_matrix->block(0, 0).local_range().first; r < Abs_A_matrix->block(0,
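
Filled out as a sketch, the strategy quoted above might look roughly like this; the block-matrix type and the absolute-value update are assumptions of mine, and only block (0, 0) is touched, as in the original loop:

    #include <deal.II/lac/petsc_block_sparse_matrix.h>

    #include <cmath>

    using namespace dealii;

    void update_block_00(PETScWrappers::MPI::BlockSparseMatrix &Abs_A_matrix)
    {
      PETScWrappers::MPI::SparseMatrix &B = Abs_A_matrix.block(0, 0);
      const auto range = B.local_range();
      for (auto r = range.first; r < range.second; ++r)
        for (auto entry = B.begin(r); entry != B.end(r); ++entry)
          {
            B.set(entry->row(), entry->column(), std::abs(entry->value()));
            // compress after every set(), as described in the post
            B.compress(VectorOperation::insert);
          }
    }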

[deal.II] Re: Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-05 Thread Feimi Yu
An update: I tried to use the iteration below to iterate over the local entries. (The reason I use local_range() for only the (0, 0) block and an iterator for the entire block matrix is that I only need block(0, 0), and the sparse matrix class does not have a non-const iterator, so I have to call the local
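
A small sketch of the first part of that approach, obtaining the locally owned row bounds from block (0, 0) only (the surrounding function is mine):

    #include <deal.II/lac/petsc_block_sparse_matrix.h>

    #include <utility>

    using namespace dealii;

    std::pair<types::global_dof_index, types::global_dof_index>
    locally_owned_rows_of_block_00(const PETScWrappers::MPI::BlockSparseMatrix &A)
    {
      // Half-open range [first, second) of the rows of block (0, 0) stored
      // on this MPI rank; these bounds restrict the iteration described
      // above.
      return A.block(0, 0).local_range();
    }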