On Fri, 22 Aug 2014 20:59:33 +0100
"Garth N. Wells" <[email protected]> wrote:

> 
> On 22 Aug 2014, at 13:50, Jan Blechta <[email protected]>
> wrote:
> 
> > Why not having list_timings(comm=COMM_SELF) so that
> > MPI.sum(comm, ...) is called on data? list_timings() would do the
> > same as it does now, list_timings(COMM_WORLD) would serve for other
> > obvious use case.
> > 
> 
> This doesn’t seem very general. 

Yes, I know. My point was just that the two most common use cases

 1. User wants cumulative timings - list_timings(comm=COMM_WORLD)
 2. User wants process-wise timings - list_timings(comm=COMM_SELF)

could be handled this way. But maybe 1. would be too fragile.
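
A rough sketch of what I mean, with mpi4py standing in for DOLFIN's
MPI wrappers (this list_timings is a mock-up, not the real function):

    from mpi4py import MPI

    def list_timings(timings, comm=MPI.COMM_WORLD):
        # 'timings' is a local dict {task: seconds}.  With COMM_WORLD,
        # each task is summed across processes and rank 0 prints one
        # cumulative table (use case 1).  With COMM_SELF, every process
        # is rank 0 of its own communicator, so each prints its own
        # table (use case 2).
        for task in sorted(timings):
            total = comm.allreduce(timings[task], op=MPI.SUM)
            if comm.rank == 0:
                print('%-30s %10.4f s' % (task, total))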

Jan

> 
> I’d like something more generic, maybe with objects having their own
> timer object that can be turned on/off. Higher level objects can
> collate data from lower level objects, e.g. linear solvers and
> assemblers will have their own timers, and the higher level ‘solve’
> objects can collate this data. If the timer belongs to an object, we
> don’t have the issue with MPI communicators.
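> 
> Roughly like this (made-up names, just a sketch of the idea):
> 
>     import time
> 
>     class Timer:
>         # Per-object timer that can be turned on/off; higher-level
>         # objects collate the timers of the objects they own, so no
>         # communicator is needed at reporting time.
>         def __init__(self, name, enabled=True):
>             self.name, self.enabled = name, enabled
>             self.elapsed, self.children = 0.0, []
> 
>         def start(self):
>             if self.enabled:
>                 self._t0 = time.time()
> 
>         def stop(self):
>             if self.enabled:
>                 self.elapsed += time.time() - self._t0
> 
>         def collate(self):
>             # e.g. a 'solve' object collates the timers of its
>             # assembler and linear solver into one table
>             table = {self.name: self.elapsed}
>             for child in self.children:
>                 table.update(child.collate())
>             return table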
> 
> We should look at how PETSc and Trilinos handle timing, e.g.
> http://trilinos.org/docs/r11.10/packages/teuchos/doc/html/classTeuchos_1_1TimeMonitor.html
> 
> >    parameters['stdout_all_processes'] = False
> >    list_timings(COMM_WORLD)
> >    parameters['stdout_all_processes'] = True
> > would ensure that only one process prints it, which is not so
> > nice. But maybe it can be improved somehow.
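> > 
> > For example, a small context manager could hide the toggling
> > (sketch only, reusing the parameter name from the snippet above):
> > 
> >     from contextlib import contextmanager
> > 
> >     @contextmanager
> >     def rank0_stdout(parameters):
> >         # Temporarily restrict stdout to one process, restoring
> >         # the previous setting afterwards.
> >         old = parameters['stdout_all_processes']
> >         parameters['stdout_all_processes'] = False
> >         try:
> >             yield
> >         finally:
> >             parameters['stdout_all_processes'] = old
> > 
> >     with rank0_stdout(parameters):
> >         list_timings(COMM_WORLD)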
> > 
> > Why return the result as an XML file rather than a
> > dict / std::map<std::string, std::vector>? The user would then not
> > need to manipulate the data with an XML library.
> > 
> 
> Exactly, XML will allow the user to manipulate the data in a plotting
> program, e.g. Matplotlib, rather than screen scraping.
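> 
> For example (the XML layout here is hypothetical; the actual schema
> is still to be decided):
> 
>     import xml.etree.ElementTree as ET
>     import matplotlib.pyplot as plt
> 
>     # Assumes one <timing task="..." wall="..."/> element per task.
>     root = ET.parse('timings.xml').getroot()
>     tasks = [e.get('task') for e in root.iter('timing')]
>     walls = [float(e.get('wall')) for e in root.iter('timing')]
> 
>     plt.barh(range(len(tasks)), walls)
>     plt.yticks(range(len(tasks)), tasks)
>     plt.xlabel('wall time (s)')
>     plt.show()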
> 
> Garth
> 
> 
> > Jan
> > 
> > 
> > On Fri, 22 Aug 2014 13:26:43 +0200
> > Mikael Mortensen <[email protected]> wrote:
> > 
> >> I understand that multiple communicators may be problematic, but I
> >> still want to know how much time a single call to assemble takes,
> >> regardless of the number of CPUs. And even though the timings of
> >> each individual process are certainly interesting, the total time
> >> is still what is used for benchmarking and for measuring scaling.
> >> 
> >> Could it be an option to have a "list_timings_single_communicator"
> >> that works just like the old one? I’ve looked at the code, but
> >> honestly I find it a tad hard to follow what is actually going on
> >> among all the loggers and log managers. Right now it is easier for
> >> me to write a few extra timing lines in Python than to manipulate
> >> the list_timings tables.
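> >> 
> >> By "a few extra timing lines" I mean something like the following
> >> (plain time plus mpi4py, taking the max over processes as the
> >> benchmark number; 'a' stands for whatever form is being assembled):
> >> 
> >>     import time
> >>     from mpi4py import MPI
> >> 
> >>     t0 = time.time()
> >>     A = assemble(a)  # the call being benchmarked
> >>     dt = MPI.COMM_WORLD.allreduce(time.time() - t0, op=MPI.MAX)
> >>     if MPI.COMM_WORLD.rank == 0:
> >>         print('assemble: %.4f s (max over processes)' % dt)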
> >> 
> >> +1 for exporting timing data to XML files.
> >> 
> >> Mikael
> >> 
> >> 
> >> On 21 Aug 2014, at 15:33, Garth N. Wells <[email protected]> wrote:
> >> 
> >>> 
> >>> 
> >>> On Thu, 21 Aug, 2014 at 2:18 PM, Mike Welland
> >>> <[email protected]> wrote:
> >>>> Could list_timings() return a data structure so that the user can
> >>>> implement their own statistics across processors (e.g. max, min
> >>>> and mean)?
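> >>>> 
> >>>> Something like this, say (a sketch; 'local_timings' stands in for
> >>>> whatever per-process structure gets returned):
> >>>> 
> >>>>     from mpi4py import MPI
> >>>> 
> >>>>     comm = MPI.COMM_WORLD
> >>>>     for task, t in sorted(local_timings.items()):
> >>>>         tmin = comm.allreduce(t, op=MPI.MIN)
> >>>>         tmax = comm.allreduce(t, op=MPI.MAX)
> >>>>         mean = comm.allreduce(t, op=MPI.SUM) / comm.size
> >>>>         if comm.rank == 0:
> >>>>             print('%s: min %.3g  max %.3g  mean %.3g'
> >>>>                   % (task, tmin, tmax, mean))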
> >>> 
> >>> This is worth thinking about. We've been considering something
> >>> related, which is exporting timing data to XML files so a user can
> >>> manipulate and plot the data.
> >>> 
> >>> Garth
> >>> 
> >>> 
> >>>> On Thu, Aug 21, 2014 at 7:26 AM, Garth N. Wells <[email protected]>
> >>>> wrote:
> >>>>> On Mon, 18 Aug, 2014 at 8:50 PM, Mikael Mortensen
> >>>>> <[email protected]> wrote:
> >>>>>> Hi
> >>>>>> In the latest version of dolfin, list_timings() prints one
> >>>>>> table of results per processor, and complete timings are not
> >>>>>> reported. I just wanted to know whether this is a conscious
> >>>>>> choice or a bug. Until recently it printed just one table even
> >>>>>> in parallel. Not quite sure when it changed, though…
> >>>>> It's a choice, but not an optimal solution. We can't assume that
> >>>>> all objects are on the same communicator, so we need something
> >>>>> cleverer for reporting timing in parallel.
> >>>>> 
> >>>>> Garth
> >>>>>> Best regards
> >>>>>> Mikael
> >>> 
> >> 
> > 
> 
