Hi, thanks for the hints. In the end I did it with the Triangle library; it's probably not very performant, but I'm not interested in that at the moment. The main routines are the following:

- save_mesh: saves the current mesh (using Mesh::copy_nodes_and_elements) and creates a system with this mesh and the same variables as the system I'm currently using. In practice I don't need any vector.

- add_remove_node: takes the error estimators and adds/removes nodes to/from the mesh according to the error per element and the tolerance.

- new_Delaunay_mesh: rebuilds the mesh from scratch, using the new node set.

- project_on_new_delaunay_mesh: takes two vectors, new_v and old_v, one with dofs on the new mesh and the other on the old mesh. Thanks to the system created in the save_mesh method I'm able to create a MeshFunction and project the old vector onto the new mesh.

- Finally, I've overloaded the

  System::project_vector(const NumericVector<Number>& old_v, NumericVector<Number>& new_v, int is_adjoint) const;

  method and replaced the line

  Threads::parallel_for(active_local_elem_range, FEMProjector(*this, f, &g, setter, vars));

  with

  project_on_new_delaunay_mesh(old_vector, new_vector);
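In case it helps to see the idea of add_remove_node in isolation, here is a minimal sketch in plain C++ (this is not the libMesh code; the Elem struct, the centroid-insertion rule, and both tolerances are hypothetical stand-ins for illustration only):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical per-element record: centroid coordinates plus an error estimate.
struct Elem { double cx, cy, error; };

struct AdaptResult {
    std::vector<std::pair<double, double>> nodes_to_add; // new node locations
    std::size_t coarsen_count;                           // elements flagged for node removal
};

// Walk the elements once: where the error estimate exceeds the refinement
// tolerance, request a new node (here, at the element centroid); where it is
// well below a coarsening tolerance, flag the element as a removal candidate.
AdaptResult add_remove_node(const std::vector<Elem>& elems,
                            double refine_tol, double coarsen_tol)
{
    AdaptResult r{{}, 0};
    for (const Elem& e : elems) {
        if (e.error > refine_tol)
            r.nodes_to_add.emplace_back(e.cx, e.cy);
        else if (e.error < coarsen_tol)
            ++r.coarsen_count;
    }
    return r;
}
```

The resulting node set would then be handed to the Delaunay rebuild step.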
The algorithm is:

- call save_mesh
- call add_remove_node
- call new_Delaunay_mesh
- call equation_systems.reinit(), which will call project_vector and project_on_new_delaunay_mesh.

The most expensive parts are the project_on_new_delaunay_mesh and new_Delaunay_mesh methods; save_mesh is negligible. If you're interested I could set up a running example and send it to you.

Giacomo

On 03/30/2017 03:43 PM, Roy Stogner wrote:
>
> On Thu, 30 Mar 2017, Giacomo Rosilho de Souza wrote:
>
>> I was wondering if there's an algorithm in libmesh that refines the mesh
>> without creating hanging nodes?
>
> Adaptively, I assume?
>
> Paul mentioned mesh redistribution, which can be surprisingly
> effective for adapting into layers but which is hard to use for
> heavy adaptivity without distorting element shapes too badly. If you
> want to go down this route then I'd suggest looking at
> VariationalMeshSmoother, which can take error estimation data to try
> and produce an adaptively sized mesh as it smooths.
>
>> If not, would it be difficult to implement one?
>
> The other obvious solution is to start with an adaptively refined mesh
> with hanging nodes, then replace the non-conforming bits with
> triangles to get an equivalently-refined conforming mesh. That
> wouldn't be too difficult to write: take a look at MeshModification
> again; flatten() and all_tri() won't do what you want but they're
> similar enough that they'd be good tutorials for you to start from.
>
> The trouble with this is that you wouldn't be able to preserve the
> mesh hierarchy, so you'd have to do your AMR/C cycles on the
> non-conforming mesh and then only get a conforming mesh at the very
> end. Not sure whether that's good enough for you or not.
>
> The last natural solution would be to use a triangle or tet mesh and
> do refinement via edge bisection.
> That would work great if you only
> need refinement, but not so well for refinement with coarsening,
> because again you'd be unable to save the mesh hierarchy; libMesh
> makes too many implicit assumptions that are incompatible with even
> simple anisotropic refinement.
>
> If you end up using VariationalMeshSmoother we'd be thrilled to
> receive an example or even just a unit test for the library; there's
> basically no test coverage on it right now IIRC. Likewise if you
> write a MeshModification::flatten_conforming() or an edge-bisection
> AMR code those would be great additions.
> ---
> Roy

_______________________________________________
Libmesh-users mailing list
Libmesh-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-users
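For readers following along: the projection step in Giacomo's approach amounts to evaluating the old solution at the new mesh's node locations. A minimal 1D sketch of that idea in plain C++, with piecewise-linear interpolation standing in for what libMesh's MeshFunction does on a real finite element mesh (the function names here are illustrative, not the actual routines):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Evaluate a piecewise-linear function, defined by sorted node coordinates
// x_old and nodal values v_old, at a point x (clamped to the mesh extent).
// This plays the role of a MeshFunction evaluation on the old mesh.
double eval_old_solution(const std::vector<double>& x_old,
                         const std::vector<double>& v_old, double x)
{
    if (x <= x_old.front()) return v_old.front();
    if (x >= x_old.back())  return v_old.back();
    auto hi = std::upper_bound(x_old.begin(), x_old.end(), x);
    std::size_t i = hi - x_old.begin();      // x_old[i-1] <= x < x_old[i]
    double t = (x - x_old[i - 1]) / (x_old[i] - x_old[i - 1]);
    return (1.0 - t) * v_old[i - 1] + t * v_old[i];
}

// "project_on_new_delaunay_mesh" in miniature: fill the new dof vector by
// sampling the old solution at each node of the new mesh.
std::vector<double> project_to_new_mesh(const std::vector<double>& x_old,
                                        const std::vector<double>& v_old,
                                        const std::vector<double>& x_new)
{
    std::vector<double> v_new;
    v_new.reserve(x_new.size());
    for (double x : x_new)
        v_new.push_back(eval_old_solution(x_old, v_old, x));
    return v_new;
}
```

On a real 2D/3D mesh the point evaluation requires locating the element containing each new node, which is what makes this step (together with the Delaunay rebuild) the expensive part of the cycle.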