Dear Franco, 

what type of triangulation are you using? If you look at the code that is run
by `update_cached_numbers`

https://github.com/dealii/dealii/blob/9e50f3ac04d088a9d2ffa74a81f08348e7f98a65/source/particles/particle_handler.cc#L173

you will see that the numbers are updated (just as you are doing manually) for
parallel::distributed::Triangulation objects; otherwise the triangulation
is treated as if it were a serial one.
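
Roughly, the logic there boils down to the following (just a sketch of the
idea, not the actual library code; `tria` stands for the triangulation the
ParticleHandler was initialized with):

    if (const auto *distributed_tria =
          dynamic_cast<const parallel::distributed::Triangulation<dim, spacedim> *>(&tria))
      {
        // parallel path: local particle counts are combined over
        // distributed_tria->get_communicator() to obtain the global numbers
      }
    else
      {
        // serial path: the "global" numbers are computed from the local
        // particles only, and this is also the branch taken for a
        // parallel::shared::Triangulation
      }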

I suspect you are using a parallel::shared::Triangulation object. 

The ParticleHandler class was built with parallel::distributed::Triangulation
objects in mind, and was only recently made to work with serial triangulations.

You have spotted a bug in the library, in the sense that ParticleHandler should
throw an exception when the Triangulation is parallel::shared, but not
parallel::distributed. Since all triangulations are derived from the serial one,
ParticleHandler thinks your triangulation is serial, and therefore gets
inconsistent numbers across the processors.
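
Concretely, the assertion you are hitting in
Particles::Utilities::interpolate_field_on_particles boils down to the
following relation (a simplified restatement of the check in utilities.h,
where `n_comps` is the number of selected components):

    AssertDimension(interpolated_field.size(),
                    particle_handler.get_next_free_particle_index() * n_comps);

With inconsistent cached numbers, get_next_free_particle_index() differs from
rank to rank (12 and 20 in your run, i.e. 24 and 40 after multiplying by the
two components), while your vector has a global size of 64, which is why the
assertion fails.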

I think there are only a few places that we would need to change to make the
code also run in your case, but to be on the safe side, I'd switch to
parallel::distributed::Triangulation (if you can); otherwise I can assist you
in opening a pull request with the required changes.
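
If you can switch, the change is usually confined to the triangulation and the
partitioning. A minimal sketch, assuming a setup along the lines of your
setup_system() (the member names are yours to adapt):

    #include <deal.II/distributed/tria.h>

    // the triangulation becomes (typically a class member, constructed with
    // the MPI communicator in the constructor's initializer list):
    parallel::distributed::Triangulation<2> triangulation(mpi_communicator);

    // in setup_system(), drop the manual partitioning and renumbering:
    //   GridTools::partition_triangulation(n_mpi_processes, triangulation);
    //   DoFRenumbering::subdomain_wise(dof_handler);
    // p4est partitions the mesh for you; the locally owned DoFs are then simply
    const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();

    // and you would test cell->is_locally_owned() instead of comparing
    // subdomain ids against this_mpi_process

With that, ParticleHandler::update_cached_numbers() takes the parallel branch
and n_global_particles() agrees on all ranks.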

Luca.



> On 20 Jul 2020, at 17:48, franco.m...@gmail.com <franco.milicc...@gmail.com> 
> wrote:
> 
> Dear Luca,
> 
> thanks for your answer, I've moved along but I am still encountering problems 
> in the interpolation. This seems to be a problem with the way I use PETSc.
> 
> So, as far as I understand, I need to initialize a PETSc vector of length 
> n_global_particles * components (in my case, 2, the space dimension), and 
> locally each process should handle n_locally_owned_particles * components. 
> 
> First, just a simple question: is n_global_particles working correctly on 
> processes without any particles? It seems that it returns zero when no 
> particles are locally owned, but the documentation, as I understand it, says 
> it should yield the same value on all processes.
> 
> The code I am using is this:
> 
> void Step3::setup_system()
> {
>     GridTools::partition_triangulation(n_mpi_processes, triangulation);
>     dof_handler.distribute_dofs(fe);
>     DoFRenumbering::subdomain_wise(dof_handler);
> 
>     const std::vector<IndexSet> locally_owned_dofs_per_proc =
>         DoFTools::locally_owned_dofs_per_subdomain(dof_handler);
>     const IndexSet locally_owned_dofs =
>         locally_owned_dofs_per_proc[this_mpi_process];
> 
>     field.reinit(locally_owned_dofs, mpi_communicator);
> 
>     VectorTools::interpolate(dof_handler, initial_function, field);
> 
>     // Particles
> 
>     particle_handler.initialize(triangulation, StaticMappingQ1<2>::mapping);
> 
>     std::vector<Point<2>> particles_to_insert;
> 
>     for (const auto &cell : dof_handler.active_cell_iterators())
>     {
>         if (cell->subdomain_id() == this_mpi_process)
>         {
>             if (cell->center()(1) < 0.03)
>             {
>                 particles_to_insert.emplace_back(cell->center());
>                 std::cout << "MPI_rank: " << this_mpi_process
>                           << "   cell_center: " << cell->center() << std::endl;
>             }
>         }
>     }
>     particle_handler.insert_particles(particles_to_insert);
>     particle_handler.update_cached_numbers();
> 
>     // n_global_particles returns zero when no particles are locally owned:
>     // compute the real number
>     int total_particles =
>         Utilities::MPI::sum(particle_handler.n_global_particles(), mpi_communicator);
> 
>     for (int i = 0; i < n_mpi_processes; i++)
>     {
>         if (this_mpi_process == i)
>             std::cout << "MPI_process: " << i
>                       << "     locally_owned_particles: "
>                       << particle_handler.n_locally_owned_particles()
>                       << "     total_particles: " << total_particles
>                       << "     n_global_particles: "
>                       << particle_handler.n_global_particles()
>                       << std::endl;
>     }
> 
>     field_on_particles.reinit(mpi_communicator,
>                               total_particles * 2,
>                               particle_handler.n_locally_owned_particles() * 2);
>     field_on_particles.compress(VectorOperation::add); // unnecessary?
> 
>     for (int i = 0; i < n_mpi_processes; i++)
>     {
>         if (this_mpi_process == i)
>             std::cout << "VECTOR"
>                       << "MPI_process: " << i
>                       << "     vec.size: " << field_on_particles.size()
>                       << "     vec.local_size: " << field_on_particles.local_size()
>                       << std::endl;
>     }
> 
>     Particles::Utilities::interpolate_field_on_particles(dof_handler,
>                                                          particle_handler,
>                                                          field,
>                                                          field_on_particles);
> }
> 
> As you can see, I am following the documentation of 
> Particles::Utilities::interpolate_field_on_particles by initializing a PETSc 
> vector whose local size is n_locally_owned_particles times n_components; the 
> total size, I guess, should be n_global_particles * components. I might be 
> completely wrong, though.
> 
> The output I am getting is this, followed by the MPI abort:
> 
> /usr/bin/mpirun -np 4 cmake-build-debug/step-3
> MPI_rank: 0   cell_center: 0.015625 0.015625
> MPI_rank: 0   cell_center: 0.046875 0.015625
> MPI_rank: 0   cell_center: 0.078125 0.015625
> MPI_rank: 0   cell_center: 0.109375 0.015625
> MPI_rank: 0   cell_center: 0.140625 0.015625
> MPI_rank: 0   cell_center: 0.171875 0.015625
> MPI_rank: 0   cell_center: 0.203125 0.015625
> MPI_rank: 0   cell_center: 0.234375 0.015625
> MPI_rank: 0   cell_center: 0.265625 0.015625
> MPI_rank: 0   cell_center: 0.296875 0.015625
> MPI_rank: 0   cell_center: 0.328125 0.015625
> MPI_rank: 0   cell_center: 0.359375 0.015625
> MPI_rank: 2   cell_center: 0.390625 0.015625
> MPI_rank: 2   cell_center: 0.421875 0.015625
> MPI_rank: 2   cell_center: 0.453125 0.015625
> MPI_rank: 2   cell_center: 0.484375 0.015625
> MPI_rank: 2   cell_center: 0.515625 0.015625
> MPI_rank: 2   cell_center: 0.546875 0.015625
> MPI_rank: 2   cell_center: 0.578125 0.015625
> MPI_rank: 2   cell_center: 0.609375 0.015625
> MPI_rank: 2   cell_center: 0.640625 0.015625
> MPI_rank: 2   cell_center: 0.671875 0.015625
> MPI_rank: 2   cell_center: 0.703125 0.015625
> MPI_rank: 2   cell_center: 0.734375 0.015625
> MPI_rank: 2   cell_center: 0.765625 0.015625
> MPI_rank: 2   cell_center: 0.796875 0.015625
> MPI_rank: 2   cell_center: 0.828125 0.015625
> MPI_rank: 2   cell_center: 0.859375 0.015625
> MPI_rank: 2   cell_center: 0.890625 0.015625
> MPI_rank: 2   cell_center: 0.921875 0.015625
> MPI_rank: 2   cell_center: 0.953125 0.015625
> MPI_rank: 2   cell_center: 0.984375 0.015625
> MPI_process: 1     locally_owned_particles: 0     total_particles: 32     n_global_particles: 0
> MPI_process: 0     locally_owned_particles: 12     total_particles: 32     n_global_particles: 12
> MPI_process: 3     locally_owned_particles: 0     total_particles: 32     n_global_particles: 0
> MPI_process: 2     locally_owned_particles: 20     total_particles: 32     n_global_particles: 20
> VECTORMPI_process: 3     vec.size: 64     vec.local_size: 0
> VECTORMPI_process: 1     vec.size: 64     vec.local_size: 0
> VECTORMPI_process: 0     vec.size: 64     vec.local_size: 24
> VECTORMPI_process: 2     vec.size: 64     vec.local_size: 40
> 
> The error is the following:
> 
> --------------------------------------------------------
> An error occurred in line <220> of file 
> </media/murer/murer/Programmi/dealii_9.2.0_MPI_P4est/include/deal.II/particles/utilities.h>
>  in function
>     void dealii::Particles::Utilities::interpolate_field_on_particles(const 
> dealii::DoFHandler<dim, spacedim>&, const 
> dealii::Particles::ParticleHandler<dim, spacedim>&, const InputVectorType&, 
> OutputVectorType&, const dealii::ComponentMask&) [with int dim = 2; int 
> spacedim = 2; InputVectorType = dealii::PETScWrappers::MPI::Vector; 
> OutputVectorType = dealii::PETScWrappers::MPI::Vector]
> The violated condition was:
>     static_cast<typename ::dealii::internal::argument_type<void( typename 
> std::common_type<decltype(interpolated_field.size()), 
> decltype(particle_handler.get_next_free_particle_index() * 
> n_comps)>::type)>::type>(interpolated_field.size()) == static_cast<typename 
> ::dealii::internal::argument_type<void( typename 
> std::common_type<decltype(interpolated_field.size()), 
> decltype(particle_handler.get_next_free_particle_index() * 
> n_comps)>::type)>::type>(particle_handler.get_next_free_particle_index() * 
> n_comps)
> Additional information:
>     Dimension 64 not equal to 24.
> 
> Stacktrace:
> -----------
> #0  cmake-build-debug/step-3: void 
> dealii::Particles::Utilities::interpolate_field_on_particles<2, 2, 
> dealii::PETScWrappers::MPI::Vector, 
> dealii::PETScWrappers::MPI::Vector>(dealii::DoFHandler<2, 2> const&, 
> dealii::Particles::ParticleHandler<2, 2> const&, 
> dealii::PETScWrappers::MPI::Vector const&, 
> dealii::PETScWrappers::MPI::Vector&, dealii::ComponentMask const&)
> #1  cmake-build-debug/step-3: Step3::setup_system()
> #2  cmake-build-debug/step-3: Step3::run()
> #3  cmake-build-debug/step-3: main
> --------------------------------------------------------
> 
> Calling MPI_Abort now.
> To break execution in a GDB session, execute 'break MPI_Abort' before 
> running. You can also put the following into your ~/.gdbinit:
>   set breakpoint pending on
>   break MPI_Abort
>   set breakpoint pending auto
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode 255.
> 
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --------------------------------------------------------------------------
> 
> --------------------------------------------------------
> An error occurred in line <220> of file 
> </media/murer/murer/Programmi/dealii_9.2.0_MPI_P4est/include/deal.II/particles/utilities.h>
>  in function
>     void dealii::Particles::Utilities::interpolate_field_on_particles(const 
> dealii::DoFHandler<dim, spacedim>&, const 
> dealii::Particles::ParticleHandler<dim, spacedim>&, const InputVectorType&, 
> OutputVectorType&, const dealii::ComponentMask&) [with int dim = 2; int 
> spacedim = 2; InputVectorType = dealii::PETScWrappers::MPI::Vector; 
> OutputVectorType = dealii::PETScWrappers::MPI::Vector]
> The violated condition was:
>     static_cast<typename ::dealii::internal::argument_type<void( typename 
> std::common_type<decltype(interpolated_field.size()), 
> decltype(particle_handler.get_next_free_particle_index() * 
> n_comps)>::type)>::type>(interpolated_field.size()) == static_cast<typename 
> ::dealii::internal::argument_type<void( typename 
> std::common_type<decltype(interpolated_field.size()), 
> decltype(particle_handler.get_next_free_particle_index() * 
> n_comps)>::type)>::type>(particle_handler.get_next_free_particle_index() * 
> n_comps)
> Additional information:
>     Dimension 64 not equal to 40.
> 
> Stacktrace:
> -----------
> #0  cmake-build-debug/step-3: void 
> dealii::Particles::Utilities::interpolate_field_on_particles<2, 2, 
> dealii::PETScWrappers::MPI::Vector, 
> dealii::PETScWrappers::MPI::Vector>(dealii::DoFHandler<2, 2> const&, 
> dealii::Particles::ParticleHandler<2, 2> const&, 
> dealii::PETScWrappers::MPI::Vector const&, 
> dealii::PETScWrappers::MPI::Vector&, dealii::ComponentMask const&)
> #1  cmake-build-debug/step-3: Step3::setup_system()
> #2  cmake-build-debug/step-3: Step3::run()
> #3  cmake-build-debug/step-3: main
> --------------------------------------------------------
> 
> Calling MPI_Abort now.
> To break execution in a GDB session, execute 'break MPI_Abort' before 
> running. You can also put the following into your ~/.gdbinit:
>   set breakpoint pending on
>   break MPI_Abort
>   set breakpoint pending auto
> [murerpc:11716] 1 more process has sent help message help-mpi-api.txt / 
> mpi-abort
> [murerpc:11716] Set MCA parameter "orte_base_help_aggregate" to 0 to see all 
> help / error messages
> 
> Process finished with exit code 255
> 
> I have tried many solutions, even initializing the vector like the field 
> solution, so that I have excess space, but to no avail.
> 
> What am I missing?
> 
> Thanks for any help you can provide!
>    Franco
> 
> 
> On Thursday, July 16, 2020 at 8:39:00 PM UTC+2 luca....@gmail.com wrote:
> Ciao Franco, 
> 
> ParticleHandler was built with the intention of “losing” particles whenever 
> they fall out of the domain. This is the default behaviour of the 
> ParticleHandler, and unless you store a pointer to an existing particle 
> somewhere, you should not be in trouble. 
> 
> If you iterate over the particles using the ParticleHandler, you should be 
> safe. Particles::Utilities::interpolate_field_on_particles cannot know how 
> many particles are actually there, so it asks the ParticleHandler for 
> get_next_free_particle_index(), which should not change, even if you drop 
> some particles. 
> 
> The way the interpolator works is by looping *through* the ParticleHandler 
> (again, to be on the safe side), and deciding what to store where based on the 
> current particle id. Unless you renumber and reset all particle ids between 
> the removal and the interpolation, this will remain consistent. 
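> 
> In code, the convention amounts to roughly the following (a sketch of the 
> indexing only, not the actual implementation; `n_comps` stands for the number 
> of selected components): 
> 
>     for (const auto &particle : particle_handler)
>       for (unsigned int c = 0; c < n_comps; ++c)
>         {
>           // the entry of the output vector that receives component c for this
>           // particle; entries of removed particles are simply never touched
>           const auto row = particle.get_id() * n_comps + c;
>           (void)row; // the real code writes the interpolated value here
>         }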
> 
> Luca. 
> 
> > On 16 Jul 2020, at 13:47, franco.m...@gmail.com <franco.m...@gmail.com> 
> > wrote: 
> > 
> > Thanks, Luca. 
> > 
> > I have read the documentation; that's why I was skeptical of my own code: I 
> > still think I am doing it wrong. 
> > 
> > As far as I understand from the documentation, particles should be removed 
> > from the handler after updating the cache, and thus the interpolated field 
> > should not contain the old removed one. Unfortunately I don't see any 
> > mention of "removal" in Particles::Utilities, but I could be overlooking 
> > something here. 
> > 
> > Another question related to the removal part is more of a worry. If I 
> > remove a particle, will any iterator be invalidated? If so, and say I need 
> > to remove a lot of particles, what is the best way to do this without too 
> > much overhead? As I understand, I have to remove (just one?) a particle and 
> > call a cache update at the end of the removal process. 
> > 
> > Thanks! 
> > Franco 
> > On Wednesday, July 15, 2020 at 7:47:58 PM UTC+2 luca....@gmail.com wrote: 
> > Franco, 
> > 
> > The interpolated field says that the field value is zero (on the line above 
> > your arrow). This is how it is documented: if a particle is removed, its 
> > interpolated value is left unchanged in the target vector. Zero in your 
> > case. 
> > 
> > Luca 
> > 
> >> Il giorno 15 lug 2020, alle ore 18:18, Franco Milicchio 
> >> <franco.m...@gmail.com> ha scritto: 
> >> 
> >>  
> >> Dear all, 
> >> 
> >> I am playing with 9.2 and particles, but I've encountered a weird problem. 
> >> If I remove a particle, the interpolated field on particle locations 
> >> yields a wrong answer. 
> >> 
> >> My code does (now) everything by hand: 
> >> 
> >> // Here be dragons...
> >> 
> >> Point<2> p_0(0.1, 0.5);
> >> auto ref_cell_0 = GridTools::find_active_cell_around_point(dof_handler, p_0);
> >> Point<2> ref_p_0 =
> >>     StaticMappingQ1<2>::mapping.transform_real_to_unit_cell(ref_cell_0, p_0);
> >> 
> >> Point<2> p_1(0.5, 0.5);
> >> auto ref_cell_1 = GridTools::find_active_cell_around_point(dof_handler, p_1);
> >> Point<2> ref_p_1 =
> >>     StaticMappingQ1<2>::mapping.transform_real_to_unit_cell(ref_cell_1, p_1);
> >> 
> >> Point<2> p_2(0.9, 0.5);
> >> auto ref_cell_2 = GridTools::find_active_cell_around_point(dof_handler, p_2);
> >> Point<2> ref_p_2 =
> >>     StaticMappingQ1<2>::mapping.transform_real_to_unit_cell(ref_cell_2, p_2);
> >> 
> >> particle_handler.insert_particle(Particles::Particle<2>(p_0, ref_p_0, 0), ref_cell_0);
> >> particle_handler.insert_particle(Particles::Particle<2>(p_1, ref_p_1, 1), ref_cell_1);
> >> particle_handler.insert_particle(Particles::Particle<2>(p_2, ref_p_2, 2), ref_cell_2);
> >> 
> >> particle_handler.update_cached_numbers();
> >> 
> >> std::cout << "Printing particle id, location and reference location" << std::endl;
> >> for (auto &p : particle_handler)
> >> {
> >>     std::cout << "id: " << p.get_id() << " loc: " << p.get_location()
> >>               << " ref_loc: " << p.get_reference_location() << std::endl;
> >> }
> >> 
> >> field_on_particles.reinit(particle_handler.n_global_particles());
> >> Particles::Utilities::interpolate_field_on_particles(dof_handler,
> >>     particle_handler, field, field_on_particles);
> >> 
> >> std::cout << "Printing field value on particle location" << std::endl;
> >> for (int pp = 0; pp < particle_handler.n_global_particles(); pp++)
> >> {
> >>     std::cout << "fluid id: " << pp << " value: " << field_on_particles[pp] << std::endl;
> >> }
> >> 
> >> // ************************
> >> // NOW REMOVE A PARTICLE...
> >> // ************************
> >> 
> >> for (auto itr = particle_handler.begin(); itr != particle_handler.end(); itr++)
> >> {
> >>     if (itr->get_id() == 1) particle_handler.remove_particle(itr);
> >> }
> >> 
> >> particle_handler.update_cached_numbers();
> >> 
> >> // ***********************
> >> // ...AND VALUES ARE WRONG
> >> // ***********************
> >> 
> >> std::cout << "Printing particle id, location and reference location after removing particle" << std::endl;
> >> 
> >> for (auto itr_2 = particle_handler.begin(); itr_2 != particle_handler.end(); itr_2++)
> >> {
> >>     std::cout << "id: " << itr_2->get_id() << " loc: " << itr_2->get_location()
> >>               << " ref_loc: " << itr_2->get_reference_location() << std::endl;
> >> }
> >> 
> >> field_on_particles.reinit(particle_handler.n_global_particles());
> >> Particles::Utilities::interpolate_field_on_particles(dof_handler,
> >>     particle_handler, field, field_on_particles);
> >> 
> >> std::cout << "Printing field value on particles location after removing particle" << std::endl;
> >> 
> >> for (int ppp = 0; ppp < particle_handler.n_global_particles(); ppp++)
> >> {
> >>     std::cout << "fluid id: " << ppp << " value: " << field_on_particles[ppp] << std::endl;
> >> }
> >> 
> >> The scalar field is generated with the X component, so its range is [0-1], 
> >> and as you can see before the removal the field is perfect. After, though, 
> >> I have weird numbers (forgive the ugly code and the useless reinit): 
> >> 
> >> 
> >> Number of active cells: 1024 
> >> Number of degrees of freedom: 1089 
> >> Printing particle id, location and reference location 
> >> id: 0 loc: 0.1 0.5 ref_loc: 0.2 1 
> >> id: 1 loc: 0.5 0.5 ref_loc: 1 1 
> >> id: 2 loc: 0.9 0.5 ref_loc: 0.8 1 
> >> Printing field value on particles location 
> >> fluid id: 0 value: 0.1 <--------------- OK! 
> >> fluid id: 1 value: 0.5 <--------------- OK! 
> >> fluid id: 2 value: 0.9 <--------------- OK! 
> >> Printing particle id, location and reference location after removing particle 
> >> id: 0 loc: 0.1 0.5 ref_loc: 0.2 1 
> >> id: 2 loc: 0.9 0.5 ref_loc: 0.8 1 
> >> Printing field value on particles location after removing particle 
> >> fluid id: 0 value: 0.1 <---------------------- STILL OK, I HOPE IT IS 
> >> fluid id: 1 value: 0. <---------------------- ???? 
> >> 
> >> Am I using the particles in a way they're not intended? Any hints are 
> >> really welcome! 
> >> 
> >> Thanks! 
> >> Franco 
> >> 
> >> PS. I am attaching a ZIP file with a minimal working example adapting 
> >> step 3 of the tutorial. 
> >> 
> >> 
> >> 
> >> <step-3.zip> 
> > 
> 
> 
