Re: [isabelle-dev] Using PolyML's memory consumption profiling on Isabelle

2011-10-14 Thread Andreas Schropp

On 10/13/2011 01:24 PM, Thomas Sewell wrote:

Good day all. Just wanted to let the Isabelle developers know about the latest 
feature David Matthews has added to PolyML, and to let you all know how useful 
it is.

The feature allows profiling of objects after garbage collection. When code is 
compiled with PolyML.Compiler.allocationProfiling set to 1, all objects 
allocated are also given a pointer to their allocating function. When the 
garbage collector runs with PolyML.Compiler.profiling set to 4, a statistical 
trace is printed of which objects survived garbage collection.
   


Cool.

So we have:
  profiling = 1 approximates runtime Monte-Carlo style sampling of the program counter
  profiling = 2 records the number of words allocated in each function (very accurate IIRC)
  profiling = 3 ???
  profiling = 4 counts GC survivors (very accurately?)
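
For concreteness, enabling this from an ML toplevel should look roughly as
follows (a minimal sketch, assuming the int-ref compiler flags described
above; PolyML.fullGC just forces a collection so that a trace is emitted
immediately):

  PolyML.Compiler.allocationProfiling := 1;  (* tag new objects with their allocating function *)
  PolyML.Compiler.profiling := 4;            (* print GC-survivor statistics at each collection *)
  PolyML.fullGC ();                          (* force a full collection to obtain a trace now *)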



This means that for Isabelle we get our first real look at what is taking up 
space at runtime and in images.

I include the last 20 lines of traces produced at four interesting points:
   1) After including the HOL theories in HOL's ROOT.ML
   2) After performing a PolyML.shareCommonData after (1)
   3) After adding a 400-element record to a test theory built on the HOL image.
   4) After performing a shareCommonData after (3)
   


These are traces for profiling=4?


Isabelle is generating a *lot* of copies of types and terms, particularly via
Term.map_atyps. Since shareCommonData eliminates them, many are
duplicates. It's possible that further use of the 'Same' variants from
Term_Subst might help (as sketched below). It's also possible that the
repeated reconstruction is necessary (e.g. repeated generalization/abstraction
of type variables) and further use of the new Term_Sharing mechanism might be
the answer.
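
To illustrate what the 'Same' variants buy (a minimal self-contained sketch
of the mechanism, not the actual Term_Subst code; it assumes the
Type/TFree/TVar constructors from Term): the mapped function raises an
exception when its argument is unchanged, so untouched subterms come back as
the original pointers instead of fresh copies.

  (* exception-based 'no change' signalling, in the style of Isabelle's Same *)
  exception SAME;

  (* map over a list, sharing unchanged heads/tails; raises SAME iff nothing changed *)
  fun same_map f [] = raise SAME
    | same_map f (x :: xs) =
        (f x :: (same_map f xs handle SAME => xs)
          handle SAME => x :: same_map f xs);

  (* rebuild only the changed spine of a type; f must raise SAME on unchanged TFree/TVar *)
  fun map_atyps_same f =
    let
      fun typ (Type (a, Ts)) = Type (a, same_map typ Ts)
        | typ T = f T;
    in typ end;

  (* commit at top level: return the original type if nothing changed at all *)
  fun map_atyps' f T = map_atyps_same f T handle SAME => T;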
   


The way I learned Isabelle programming, one views term traversal as cheap,
which seems to be true most of the time, especially when terms are freshly
allocated with nice locality properties. Sharing lots of subterms might
interfere with this.

Isn't this what GC was made for? Why introduce artificial sharing?

BTW: the Coq kernel does huge amounts of sharing IIRC.
Should we be concerned, or is this only an issue because of proof terms?

Makarius, please comment on this, because now I feel like a wasteful
programmer. ;D



A large share of the persistent objects are Table and Net objects, as
expected.

There are surprisingly many dummy tasks.
   


What is a dummy task?


A surprisingly large number of the persistent objects are associated with
proof terms and name derivations. This is presumably not Thm.name_derivation
itself but inlined code from its subcalls Proofterm.thm_proof, proof_combP,
proof_combt' and Library.foldl, none of which are listed. If these foldl
loops are indeed producing this many objects, then perhaps the work done
unconditionally here should be rethought (see the sketch below).
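
For reference, these subcalls are plain folds, so every argument costs one
fresh proof-application node (sketch of the definitions as I read them in
Proofterm; % and %% are the proof application constructors):

  fun proof_combt' (prf, ts) = Library.foldl (op %) (prf, ts);      (* prf % t1 % ... % tn *)
  fun proof_combP (prf, prfs) = Library.foldl (op %%) (prf, prfs);  (* prf %% p1 %% ... %% pm *)

So a theorem with m premises and n term instances allocates roughly m + n
proof-term nodes per name derivation, unconditionally.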
   


For proofs=0?
Taking a guess these might be the PBody thms pointers.

Cheers,
  Andy


___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] Using PolyML's memory consumption profiling on Isabelle

2011-10-14 Thread Andreas Schropp

On 10/13/2011 01:24 PM, Thomas Sewell wrote:

There are surprisingly many dummy tasks.
[...]
 918632 Task_Queue.dummy_task(1)
   


val dummy_task = Task(NONE, ~1)

Values are not shared?! What the hell?



Re: [isabelle-dev] Using PolyML's memory consumption profiling on Isabelle

2011-10-14 Thread Makarius

On Fri, 14 Oct 2011, Andreas Schropp wrote:


On 10/13/2011 01:24 PM, Thomas Sewell wrote:

There are surprisingly many dummy tasks.
[...]
 918632 Task_Queue.dummy_task(1)



Such numbers always need to be put in relation.  The original list looked
like this:



   918632 Task_Queue.dummy_task(1)
...
 13085440 Term.map_atyps(2)


This means the dummy_task entry is more than an order of magnitude below the 
top entry.  Even if such top entries are reduced significantly, the overall 
impact is very low on average.  Addressing the lower entries is normally not 
worth the effort.




val dummy_task = Task(NONE, ~1)

Values are not shared?! What the hell?


This looks like an older version.  Thomas was referring to this one in 
Isabelle/73dde8006820:


fun dummy_task () =
  Task {group = new_group NONE, name = "", id = 0, pri = NONE,
    timing = new_timing ()};

Since the timing is a mutable variable here, it has to be created afresh 
for each use -- in Future.value construction.  Normally 1 million extra 
allocations are not a big deal, but an experiment from yesterday shows 
that there is in fact a measurable impact.  See now Isabelle/2afb928c71ca 
and the corresponding charts at 
http://www4.in.tum.de/~wenzelm/test/stats/at-poly.html
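
The contrast in allocation behaviour is easy to see in a tiny (hypothetical)
example: an all-constant value can exist once, but anything containing a ref
must be allocated per use.

  val shared = (NONE, ~1);            (* all-constant: built once, every use is the same object *)
  fun fresh () = (NONE, ~1, ref 0);   (* contains a ref cell: a new one is allocated per call *)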


I can only guess that allocation of mutable stuff costs extra.


Anyway, that is just a peephole optimization.  The real improvements usually 
come from looking at the big picture.  The very introduction of dummy tasks 
for pre-evaluated future values was one such optimization; another was the 
introduction of the timing field for tasks, to improve the overall scheduling 
and throughput of the worker-thread farm that crunches on these tasks.



Makarius


Re: [isabelle-dev] Using PolyML's memory consumption profiling on Isabelle

2011-10-14 Thread David Matthews

On 14/10/2011 09:13, Andreas Schropp wrote:

On 10/13/2011 01:24 PM, Thomas Sewell wrote:

Good day all. Just wanted to let the Isabelle developers know about
the latest feature David Matthews has added to PolyML, and to let you
all know how useful it is.

The feature allows profiling of objects after garbage collection. When
code is compiled with PolyML.Compiler.allocationProfiling set to 1,
all objects allocated are also given a pointer to their allocating
function. When the garbage collector runs with
PolyML.Compiler.profiling set to 4, a statistical trace is printed of
which objects survived garbage collection.


Cool.

So we have:
profiling = 1 approximates runtime Monte-Carlo style sampling of the
program counter
profiling = 2 records the number of words allocated in each function
(very accurate IIRC)
profiling = 3 ???
profiling = 4 counts GC survivors (very accurately?)


Profiling 3 is the number of cases where the run-time system had to 
emulate an arithmetic operation because the operation required 
long-precision arithmetic.  This is a LOT more expensive than doing the 
arithmetic with short-precision ints, so it may be worth recoding 
hot-spots that show up with this.
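
A rough illustration of the boundary (assuming the default
arbitrary-precision int and one tag bit, so short values span roughly a
machine word minus one bit; the exact limit is platform-dependent):

  val short = 1073741823;     (* 2^30 - 1: still a tagged short int on 32-bit *)
  val long = short + short;   (* no longer fits the tag; handled as long-precision by the RTS *)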


Profiling 4 has just been added, so it probably has teething troubles.

I would really prefer to replace these numbers by a datatype so that 
users don't have to remember numbers.
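
Such a datatype might look like this (purely a sketch of the idea, not an
existing Poly/ML interface):

  datatype profile_mode =
      ProfileOff
    | ProfileTime               (* 1: statistical sampling of the program counter *)
    | ProfileAllocations        (* 2: words allocated per function *)
    | ProfileLongIntEmulation   (* 3: emulated long-precision arithmetic *)
    | ProfileGCSurvivors;       (* 4: objects surviving garbage collection *)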


Regards,
David



Re: [isabelle-dev] Using PolyML's memory consumption profiling on Isabelle

2011-10-14 Thread David Matthews

On 14/10/2011 10:56, Makarius wrote:

On Fri, 14 Oct 2011, Andreas Schropp wrote:

val dummy_task = Task(NONE, ~1)

Values are not shared?! What the hell?


Datatypes and tuples that contain only constant data are created once 
during compilation.
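
This is easy to observe directly (small sketch using PolyML.pointerEq):

  val dummy = (NONE, ~1);   (* all-constant tuple: built once at compile time *)
  fun get () = dummy;
  val same = PolyML.pointerEq (get (), get ());   (* true: every use is the same object *)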



This looks like an older version. Thomas was referring to this one in
Isabelle/73dde8006820:

fun dummy_task () =
  Task {group = new_group NONE, name = "", id = 0, pri = NONE,
    timing = new_timing ()};

Since the timing is a mutable variable here, it has to be created afresh
for each use -- in Future.value construction. Normally 1 million extra
allocations are not a big deal, but an experiment from yesterday shows
that there is in fact a measurable impact. See now Isabelle/2afb928c71ca
and the corresponding charts at
http://www4.in.tum.de/~wenzelm/test/stats/at-poly.html

I can only guess that allocation of mutable stuff costs extra.


Allocation of a mutable, at least a fixed-size mutable such as ref, 
doesn't cost any more than allocating an immutable.  However, if a 
mutable survives a GC it has an impact on subsequent GCs.  The worst 
case would be a mutable that survives one GC and then becomes 
unreachable: it would continue to be scanned in every partial GC until 
it is thrown away by the next full GC.  Does this correspond with what 
you've found?


Regards,
David


Re: [isabelle-dev] Using PolyML's memory consumption profiling on Isabelle

2011-10-14 Thread Makarius

On Fri, 14 Oct 2011, David Matthews wrote:


I can only guess that allocation of mutable stuff costs extra.


Allocation of a mutable, at least a fixed-size mutable such as ref, 
doesn't cost any more than allocating an immutable.  However, if a 
mutable survives a GC it has an impact on subsequent GCs.  The worst 
case would be a mutable that survives one GC and then becomes 
unreachable: it would continue to be scanned in every partial GC until 
it is thrown away by the next full GC.  Does this correspond with what 
you've found?


Yes, I was thinking in terms of the survival of the mutable, not the 
initial allocation.  What happened in the example is that any Future.value 
(which is conceptually immutable) would retain a mutable field for timing 
information that is reachable but semantically never used later on.


Thus it indirectly impacts later memory management, leading to the 
measurable (but very small) overhead.
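
As a minimal self-contained sketch of that situation (hypothetical names, 
not the actual Future implementation):

  datatype 'a future = Future of {result: 'a, timing: Time.time ref};

  (* a pre-evaluated, conceptually immutable value still drags a ref along *)
  fun value x = Future {result = x, timing = ref Time.zeroTime};

  (* the timing cell is reachable but never touched again, yet every partial GC
     must keep scanning it for as long as the future itself survives *)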



Makarius