Hi Johan,

Thanks for that pointer. It's (for obvious and good reasons) focussed on the intra-node performance of assembly, which means that the assembly benchmarks only go up to 16 cores (I feel horribly hypocritical here because we do a lot of those sorts of comparisons in PyOP2 as well). It doesn't provide any evidence that Dolfin is capable of scaling to large processor counts.
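For what it's worth, the kind of measurement I'd want to point the reviewers at is just strong scaling of something like the following (a minimal sketch against the 2013-era DOLFIN 1.x Python interface; the mesh size, element choice and the exact timer and MPI calls are illustrative and should be checked against the installed version, not taken from any published benchmark):

```python
# Hypothetical assembly-scaling sketch for DOLFIN 1.x (Python 2).
# Run at increasing process counts, e.g.: mpirun -n 16 python assemble_bench.py
from dolfin import *

# Mesh size chosen purely for illustration; scale it with the core count
# so each process keeps a reasonable local problem size.
mesh = UnitCubeMesh(64, 64, 64)
V = FunctionSpace(mesh, "CG", 1)
u, v = TrialFunction(V), TestFunction(V)

# A Poisson stiffness matrix as a stand-in for "assembly".
a = inner(grad(u), grad(v))*dx

timer = Timer("assemble")   # DOLFIN's built-in wall-clock timer
A = assemble(a)             # distributed assembly over the partitioned mesh
t = timer.stop()            # stop() returns the elapsed time

# Report from one process only (MPI.process_number() is the pre-1.4 spelling).
if MPI.process_number() == 0:
    print "Assembled %d x %d matrix in %.3g s" % (A.size(0), A.size(1), t)
```

Plotting the assembly time against core count for a fixed mesh would then give exactly the sort of scaling curve the reviewers are asking for.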
Cheers,

David

On 4 September 2013 10:46, Johan Hake <[email protected]> wrote:

> It is just a report, but I guess it answers most of your questions.
>
> https://t.co/1dPJZYZRUa
>
> Garth and Chris are the obvious go-to people here.
>
> Johan
>
>
> On Wed, Sep 4, 2013 at 11:39 AM, David Ham <[email protected]> wrote:
>
>> Hi All,
>>
>> I'm writing responses to reviewers for the FEniCS on manifolds paper
>> which Marie, Colin, Andrew and I submitted to GMD. One of the reviewers
>> is giving us grief about not including parallel scaling results. The
>> reason we haven't done so is that we were only modifying FFC, which is
>> essentially orthogonal to parallel performance, so what I would like to
>> be able to do is point them to existing Dolfin parallel performance
>> results and argue that we haven't done anything which would change them.
>> Unfortunately, I haven't so far located a published result which shows
>> Dolfin's parallel performance. The FEniCS book documents the existence
>> of shared- and distributed-memory parallelism in chapter 10, but doesn't
>> give performance results.
>>
>> Any suggestions?
>>
>> Many thanks,
>>
>> David
>>
>> --
>> Dr David Ham
>> Department of Computing
>> Imperial College London
>>
>> http://www.imperial.ac.uk/people/david.ham

--
Dr David Ham
Department of Computing
Imperial College London

http://www.imperial.ac.uk/people/david.ham
