On Thu, 24 Jan 2008, Wolfgang Bangerth wrote:

>>> Now let's see what our friends from the competition have to say ;-)
>>
>> Competition? Does this mean you've added triangles, tets, prisms and
>> pyramids while we weren't looking? ;-)
>
> :-) I guess I could come up with other things instead ;-)

What, first library to add NURBS wins?

>> libMesh just hooks to PETSc and LASPACK for sparse linear algebra,
>> whereas deal.II has its own multithreaded linear solvers (which IIRC
>> were more efficient than PETSc?) for shared memory systems.
>
> Yes. Maybe more importantly (because it scales very nicely) we also
> assemble linear systems in parallel on multicore systems.

Just to make sure it's clear: so does libMesh, just by using separate
processes instead of separate threads.  The assembly time scales like
1/N with the number of cores N; our problem is just that the SerialMesh
memory usage scales like N.

>> SerialMesh, one for each process. Can deal.II mix multithreading with
>> PETSc?
>
> For assembling yes (which also works on individual cluster nodes, of
> course). For solvers we just hand things over to PETSc.
>
> I hear PETSc has some magic flag that lets it run multithreaded but I
> don't know how to turn that on.

We ought to figure that out.  Multi-level shared+distributed memory
systems are here to stay.  The alternative to threading is to use
multiple processes which allocate blocks of shared memory, but that's
99% as hard to debug and about 400% harder to code.  It's probably less
portable, too; I think differing shared memory APIs were part of the
SysV/BSD rift.
---
Roy
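
For concreteness, here is a minimal sketch of the process-parallel
assembly pattern described above, written against plain MPI.  The
Element type and assemble_element() are hypothetical stand-ins, not
libMesh's actual API.  Each process holds the whole (serial) mesh but
assembles only its own block of elements, which is why the assembly
work scales roughly like 1/N while the replicated-mesh memory per node
grows like N:

// Sketch only: every rank replicates the mesh (the SerialMesh
// situation) and assembles a disjoint slice of the element range.
#include <mpi.h>
#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical stand-ins for the mesh and the per-element work.
struct Element { int id; };

static void assemble_element(const Element & /*elem*/)
{
  // ... compute the element matrix/vector and add it into the
  // process-local part of the global system (elided) ...
}

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  int rank = 0, nprocs = 1;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

  // Every process holds the full (serial) mesh ...
  std::vector<Element> mesh(100000);
  for (std::size_t e = 0; e < mesh.size(); ++e)
    mesh[e].id = static_cast<int>(e);

  // ... but only assembles its own contiguous block of elements.
  const std::size_t begin = (mesh.size() * rank) / nprocs;
  const std::size_t end   = (mesh.size() * (rank + 1)) / nprocs;

  for (std::size_t e = begin; e < end; ++e)
    assemble_element(mesh[e]);

  std::printf("rank %d assembled elements [%zu, %zu)\n", rank, begin, end);

  MPI_Finalize();
  return 0;
}

Build and run with, e.g., mpicxx and "mpirun -np 4 ./a.out"; each rank
reports a disjoint element range.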
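
And a minimal sketch of the "multiple processes sharing a block of
memory" alternative mentioned above, using the POSIX shm_open()/mmap()
interface (the SysV counterpart would be shmget()/shmat(), which is the
kind of API divergence alluded to).  The segment name here is made up
for the example:

// Sketch only: parent creates a named shared-memory block, forks a
// child, and both processes see the same array through MAP_SHARED.
#include <sys/mman.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>

int main()
{
  const char *name = "/assembly_scratch";          // hypothetical segment name
  const std::size_t nbytes = 1024 * sizeof(double);

  // Create and size the shared segment, then map it into this process.
  int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
  if (fd < 0 || ftruncate(fd, nbytes) != 0)
    return 1;
  double *buf = static_cast<double *>(
    mmap(nullptr, nbytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
  if (buf == MAP_FAILED)
    return 1;

  pid_t pid = fork();
  if (pid == 0)                                    // child: write into shared memory
    {
      buf[0] = 3.14;
      _exit(0);
    }

  waitpid(pid, nullptr, 0);                        // parent: sees the child's write
  std::printf("parent read %g from shared memory\n", buf[0]);

  munmap(buf, nbytes);
  close(fd);
  shm_unlink(name);                                // clean up the named segment
  return 0;
}

On Linux this may need -lrt at link time; error handling beyond the
bare minimum is omitted.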