On 18/04/2012, at 12:29 AM, Makarius wrote:

> On Tue, 17 Apr 2012, Gerwin Klein wrote:
> 
>> - I would like to add a size limit to records beyond which no code generator 
>> setup is performed. The main reason is that on sizes > 200 fields or so, the 
>> setup does not make any sense, but consumes very large amounts of memory 
>> (and time). Switching it off gets rid of almost all of the mysterious memory 
>> blowup that we had and enables us to run our proofs within 4GB on Linux 
>> (32bit) and ~8GB on Darwin (64bit). Having limits like these is not ideal, 
>> but I don't see a better workaround, because the code generator setup *is* 
>> useful for small records. There is a question of how to implement the limit:
> 
>> 1) as an option available to the user at record definition time
>> 2) as an option/flag to the internal record definition function only
>> 3) as a configuration option
>> 4) as a compile time constant
>> 
>> I'm currently in favour of 4, because the limit is very specialised and will 
>> only really occur for records that are somehow automatically generated, in 
>> which case the code generator setup is very unlikely to make sense. Options 
>> 1 and 3 would require adding syntax/configuration names, which is not really 
>> worth it. Option 2 threads yet another obscure parameter through a rather 
>> large package.
> 
> If 3) means "configuration option" in the sense of Config.T, here is the 
> canonical 1-liner to make one for a package:
> 
>  val split_limit = Attrib.setup_config_int @{binding linarith_split_limit} (K 9);

Yes, that's what I meant. It's easy to set up, but it pollutes the config 
namespace a little more for users. Of course it also means the limit can be 
changed at runtime. If there is a slight preference for this, I'm happy to go 
down that road.
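For concreteness, a minimal sketch of what that could look like for the record 
package, following the same pattern as the linarith one-liner quoted above. The 
binding name record_codegen_limit, the default of 200, and the helper function 
are illustrative assumptions only, not an actual implementation:

  (* hypothetical Config.T for a record code generator size limit;
     the name and the default value are placeholders *)
  val codegen_limit =
    Attrib.setup_config_int @{binding record_codegen_limit} (K 200);

  (* sketch: perform the code generator setup only below the limit *)
  fun maybe_setup_codegen setup ctxt n_fields thy =
    if n_fields > Config.get ctxt codegen_limit then thy else setup thy;

Being an ordinary config option, it could then be raised or lowered at runtime, 
e.g. via declare [[record_codegen_limit = 500]] (again using the hypothetical 
name).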


> Concerning performance issues in general, I've recently made a lot of 
> measurements.  It seems indeed that the code generator is responsible for 
> quite a bit of it, although I did not look any further into its depths. 
> Instead I've made some performance tuning for the critical infrastructure for 
> localizing big packages with big applications.

Yes, this looks very promising. After the record package/code generator find, 
we're now almost done updating things to Isabelle2011-1. Just in time for 
Isabelle2012 ;-)


> Moreover, David Matthews is right now reforming the Poly/ML runtime system, 
> such that we might see an improvement of an order of magnitude soon. 

We'll be very keen on that one, too.

Cheers,
Gerwin

