On Fri, Sep 16, 2011 at 2:01 PM, Florian Haftmann <[email protected]> wrote:
> Hi Brian,
>
> I am not totally happy with changes like:
>
> http://isabelle.in.tum.de/reports/Isabelle/rev/4657b4c11138#l1.7
>
> The proof text itself is shorter than before, but in less trivial
> examples it produces headaches to instantiate foundational theorems of
> a locale with OF. Indeed, huge parts of Finite_Set.thy were once written
> in that style. The disadvantage there was that when you had to
> instantiate over a hierarchy of locales, you would find yourself
> writing ... OF [... OF [...]] cascades. From this I have drawn the
> conclusion that it is better to leave the module system (locale +
> interpretation) and the propositional system (_ ==> _ + _ [OF ...]) on
> their own, even if both are embedded into the same calculus.
Indeed, the proof script is a bit shorter, but that was not my primary
motivation for this change. For quite some time, my favored metric for
proof scripts has been not length, but overall execution time. By this
measure, most "interpret" commands within proof scripts are
extraordinarily expensive. Interpretations of some locales (like
"semigroup" or "monoid" in Groups.thy) are fast, because they are
intentionally kept small, few lemmas are declared with attributes, etc.
But for locales that come from type classes like "complete_lattice",
the "interpret" command does a huge amount of wasteful work: it has to
generate copies of *every* lemma proved within the class context,
reprocess *all* of the attribute declarations, etc. The sheer quantity
of duplicate-simp-rule warnings produced by these "interpret" commands
is an indicator of all the wasted work going on.

On the other hand, I understand your point that the
locale+interpretation proof style is good because it maintains a layer
of abstraction: to understand the original proof scripts that use
"interpret", users don't have to know that locales are implemented with
predicates and conditional theorems. My updated proof scripts break
this abstraction.

To get the best of both worlds (abstraction + decent, scalable
performance), it seems like it would be useful to have a more
lightweight alternative to the full-on locale interpretation mechanism.
Perhaps we could have a sort of "lazy" interpretation that would let
you do something like this:

context complete_lattice
begin

lazy_interpretation dual: complete_lattice Sup Inf sup "op \<ge>" "op >" inf \<top> \<bottom>
  by (fact dual_complete_lattice)

lemma Sup_bot_conv [simp, no_atp]:
  "\<Squnion>A = \<bottom> \<longleftrightarrow> (\<forall>x\<in>A. x = \<bottom>)"
  "\<bottom> = \<Squnion>A \<longleftrightarrow> (\<forall>x\<in>A. x = \<bottom>)"
  by (rule dual.Inf_top_conv)+

end

The idea is that you wouldn't actually generate any theorems or process
any attributes when the interpretation is declared; the rule
"dual.Inf_top_conv" would instead be generated on the fly at the point
where the name is used. (To be honest, when I first learned about
locale interpretations, this is how I assumed they worked, until I
learned otherwise.)

- Brian

_______________________________________________
isabelle-dev mailing list
[email protected]
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
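[Editorial note: the eager-vs-lazy contrast proposed above can be sketched outside Isabelle. The following Python toy model is purely illustrative and assumes nothing about Isabelle's actual implementation: "theorems" are strings, and "instantiation" is a textual rewrite. The names `EagerInterpretation`, `LazyInterpretation`, `base`, and `swap` are hypothetical. The point is only the timing of the work: the eager version instantiates every fact at declaration time (like today's "interpret"), while the lazy version does so on first lookup of a qualified name, caching the result.]

```python
class EagerInterpretation:
    """Instantiates every fact up front, whether or not it is ever used
    (the behavior attributed to 'interpret' in the message above)."""
    def __init__(self, base_facts, instantiate):
        # All copies are generated immediately at declaration time.
        self.facts = {name: instantiate(thm) for name, thm in base_facts.items()}


class LazyInterpretation:
    """Instantiates a fact only when its qualified name is looked up,
    caching the result so repeated lookups do no extra work."""
    def __init__(self, base_facts, instantiate):
        self._base = base_facts
        self._instantiate = instantiate
        self._cache = {}

    def __getitem__(self, name):
        if name not in self._cache:
            # Work happens here, at the point of use, not at declaration.
            self._cache[name] = self._instantiate(self._base[name])
        return self._cache[name]


# Toy "theorem" and a textual "dual" instantiation (hypothetical names).
base = {"Inf_top_conv": "Inf A = top <-> (ALL x : A. x = top)"}
swap = lambda thm: thm.replace("Inf", "Sup").replace("top", "bot")

dual = LazyInterpretation(base, swap)
# Declaring `dual` did no per-fact work; this lookup triggers it:
print(dual["Inf_top_conv"])  # -> Sup A = bot <-> (ALL x : A. x = bot)
```

Under this toy model, a locale with thousands of facts would cost nothing at "lazy_interpretation" time; only the handful of rules actually referenced (here the analogue of "dual.Inf_top_conv") would ever be instantiated.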
