On Thu, 6 Sep 2012, Florian Haftmann wrote:

Hi Clemens, hi Makarius,

(sorry for being so slow to react on this thread, but it requires considerable preheating to catch the critical points)

The preheating is definitely a problem. We somehow need to try to get a critical mass of insight into this issue, to address it eventually.

In some sense it is a leftover from our "national debts" of 2006/2007. Already back then, Clemens and I were wondering, rather abstractly, how certain code generation issues could be fully integrated into the locale + interpretation concepts, without detracting from either side or merely adding workarounds.


Just a general question: Why does the code generator require a closed constant as a starting point? Couldn't it just take an arbitrary term, with some decoration how it should be abstracted into a closed thing?

Well, it can accomplish anything you tell it to do (cf. http://dilbert.com/strips/comic/2006-01-29/). But the idea has always been to rely as closely as possible on the foundation without doing more extralogical things than necessary.

The term "foundation" was actually introduced formally by yourself in one of the local theory target reforms, e.g. see http://isabelle.in.tum.de/repos/isabelle/file/38af9102ee75/src/Pure/Isar/generic_target.ML#l292

It means that a "constant" c that depends on context parameters x y z is turned into a foundational constant "loc.c" in the background theory, and then re-imported into the local context as "loc.c x y z". Later it might get re-interpreted such that its dependency on former context parameters is turned into terms (by interpretation), or the whole thing "loc.c t1 t2 t3" is replaced by something else (rewriting via mixins etc.).

So when the code generator sees an interpreted function application
"loc.c t1 t2 t3 x y z", it should somehow do the right thing: take it as "(loc.c t1 t2 t3) x y z" and consider the inner part as an "atomic entity" (an instance of the c defined earlier in the abstract context).
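To make the mechanism concrete, here is a minimal Isar sketch; the locale name "magma", the constant "dup", and the interpretation prefix "nat_magma" are invented for illustration:

  locale magma =
    fixes f :: "'a => 'a => 'a"
  begin

  (* Inside the locale, "dup" abbreviates "magma.dup f"; the
     background theory gets the foundational constant
     magma.dup :: ('a => 'a => 'a) => 'a => 'a *)
  definition dup :: "'a => 'a" where
    "dup a = f a a"

  end

  (* Interpretation instantiates the parameter with a concrete term;
     there are no assumptions here, so the proof is trivial. *)
  interpretation nat_magma: magma "op + :: nat => nat => nat"
    by unfold_locales

After the interpretation, "nat_magma.dup x" stands for "magma.dup (op +) x", i.e. an application "loc.c t1 x" in the notation above, where "magma.dup (op +)" is the inner part that should be treated as atomic.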


On our running gag list we have at least these related issues:

  * code generation as sketched above

  * behaviour of the Simplifier on seemingly atomic "constants" c (due
    to abbreviations) that are actually of the form "loc.c x y z"

  * stability and expectedness of abbreviations, as they are printed

  * the Haftmann/Wenzel educated guess here
    
http://isabelle.in.tum.de/repos/isabelle/file/38af9102ee75/src/Pure/Isar/named_target.ML#l56
    which can be traced back here:
    http://isabelle.in.tum.de/repos/isabelle/rev/9dd61cb753ae
    (Fall/Winter 2007 was the time when we desperately tried to recover
    from the 2006/2007 gap in the Isabelle release chain).


Is there any nice, sane, simple approach to all that?

Some months ago I mentally experimented with a notion of explicit "boundaries" for locale contexts that would somehow follow the structure of interpretations. This could help here, although in the first and second rounds of testing the hypothesis against the real system I got deflected.
It might also turn out to be too complicated as a concept.


        Makarius
_______________________________________________
isabelle-dev mailing list
[email protected]
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev
