That is certainly a small mesh, and reading it shouldn't be a bottleneck compared to anything else going on. However, we'd need many more details about your setup to diagnose the problem. I'll mention that for any performance question, you'll want to make sure you're building with METHOD=opt (and linking against libmesh_opt.so), which turns off all the debugging checks.
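If the partitioner itself turns out to be the expensive part, you could also try swapping in a cheaper one before the read. Here's a minimal, untested sketch of what I mean (the "mesh.e" filename is just a placeholder, and I'm going from memory on the partitioner() accessor, so double-check against your libMesh version):

    #include <memory>

    #include "libmesh/libmesh.h"
    #include "libmesh/mesh.h"
    #include "libmesh/linear_partitioner.h"

    using namespace libMesh;

    int main (int argc, char ** argv)
    {
      LibMeshInit init (argc, argv);

      // The default Mesh type on the library's communicator.
      Mesh mesh (init.comm());

      // Swap the default partitioner for a cheap LinearPartitioner
      // before reading, to see whether partitioning cost dominates
      // the read time.
      mesh.partitioner().reset(new LinearPartitioner);

      // Or skip partitioning entirely, as you already tried:
      // mesh.skip_partitioning(true);

      mesh.read ("mesh.e");  // placeholder filename
      mesh.print_info();

      return 0;
    }

Timing that against a run with the default partitioner (under METHOD=opt in both cases) should tell you whether the partitioner or the I/O itself is the problem.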
On Sun, May 12, 2019 at 8:46 AM Renato Poli <rebp...@gmail.com> wrote:

> Thanks.
> I will try different partitioners.
> I moved away from the virtual environment, anyway.
>
> Rgds,
> Renato
>
> On Sun, May 12, 2019 at 9:42 AM Kirk, Benjamin (JSC-EG311) <
> benjamin.k...@nasa.gov> wrote:
>
> > There is some Hilbert space filling curve indexing that gets invoked
> > with the default partitioner, and who knows in a virtual environment.
> > Also it's worth trying a different partitioner. You can set the
> > partitioner to any supported type, but I've not got the documentation
> > handy at the moment.
> >
> > ------------------------------
> > On: 11 May 2019 21:09, "Renato Poli" <rebp...@gmail.com> wrote:
> >
> > Hi,
> >
> > It seems that partitioning is taking a lot of time.
> > If I skip it, it runs much faster:
> >     mesh.skip_partitioning(true);
> > Does that make any sense?
> >
> > Renato
> >
> > On Sat, May 11, 2019 at 10:59 PM Renato Poli <rebp...@gmail.com> wrote:
> >
> >> Hi Kirk,
> >>
> >> I see there is something related to the parallelization.
> >> I am using mpirun.mpich.
> >> With a single processor, it runs much faster than with 4 processors.
> >> Please find the data below.
> >>
> >> Why would parallel reading be so much slower?
> >> Any suggestions?
> >>
> >> XDR - 1 processor
> >>   # Stopwatch "LibMesh::read": 12.7637 s
> >> XDR - 4 processors
> >>   # Stopwatch "LibMesh::read": 135.473 s
> >> EXO - 1 processor
> >>   # Stopwatch "LibMesh::read": 0.294671 s
> >> EXO - 4 processors
> >>   # Stopwatch "LibMesh::read": 198.897 s
> >>
> >> This is the mesh:
> >> ======
> >> Mesh Information:
> >>   elem_dimensions()={2}
> >>   spatial_dimension()=2
> >>   n_nodes()=40147
> >>     n_local_nodes()=40147
> >>   n_elem()=19328
> >>     n_local_elem()=19328
> >>     n_active_elem()=19328
> >>   n_subdomains()=1
> >>   n_partitions()=1
> >>   n_processors()=1
> >>   n_threads()=1
> >>   processor_id()=0
> >>
> >> On Sat, May 11, 2019 at 7:15 PM Renato Poli <rebp...@gmail.com> wrote:
> >>
> >>> Thanks.
> >>> I am currently running on a virtual machine - not sure MPI is getting
> >>> along with that.
> >>> I will try other approaches and bring more information if necessary.
> >>>
> >>> rgds,
> >>> Renato
> >>>
> >>> On Sat, May 11, 2019 at 5:49 PM Kirk, Benjamin (JSC-EG311) <
> >>> benjamin.k...@nasa.gov> wrote:
> >>>
> >>>> Definitely not right, but that seems like something in your machine
> >>>> or filesystem.
> >>>>
> >>>> You can use the "meshtool-opt" command to convert it to XDR and try
> >>>> that for comparison. We've got users who routinely read massive
> >>>> meshes with ExodusII, so I'm skeptical of a performance regression.
> >>>>
> >>>> -Ben
> >>>>
> >>>> ------------------------------
> >>>> On: 11 May 2019 15:24, "Renato Poli" <rebp...@gmail.com> wrote:
> >>>>
> >>>> Hi,
> >>>>
> >>>> I am reading in a mesh of 20,000 elements.
> >>>> I am using the Exodus format.
> >>>> It takes up to 4 minutes.
> >>>> Is that right?
> >>>> How can I enhance performance?
> >>>>
> >>>> Thanks,
> >>>> Renato

_______________________________________________
Libmesh-users mailing list
Libmesh-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-users