Re: [isabelle-dev] New (Co)datatypes: Status Plan (FYI)

2013-08-04 Thread Andreas Schropp

On 08/04/2013 07:09 PM, Florian Haftmann wrote:

To add a further dimension to the design space, there was once the idea
of a generalized primrec in the manner:

1. recursive specification following some pattern considered »primitive«
2. foundational definition of this specification using a suitable
recursion combinator
3. maybe some proof by the user that certain conditions are satisfied
4. deriving the original specification again

A new command »primitive_recursion« would then expose (3) to the user,
whereas the existing »primrec« would assume a trivial proof instead
(similar in appearance to the pair »function« and »fun«)
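
For concreteness, steps 1, 2 and 4 amount to the familiar recursor-based
derivation; a minimal Isar sketch, assuming the list recursor (rec_list in
the new package, list_rec in the old one) and an illustrative constant name:

  definition listsum :: "nat list => nat" where
    "listsum = rec_list 0 (%x xs r. x + r)"   (* step 2: foundational definition *)

  lemma listsum_simps:                        (* step 4: re-derive the original spec *)
    "listsum [] = 0"
    "listsum (x # xs) = x + listsum xs"
    by (simp_all add: listsum_def)

A »primitive_recursion« command would additionally expose the side
conditions of step 3 as a proof obligation to the user.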


That sounds like a derivation of a primitive recursion specification over
an ad-hoc nonfree datatype that specifies the primitive pattern.

Andrei and I submitted a paper on nonfree datatypes to CPP 2013:
http://home.in.tum.de/~schropp/nonfree-data.pdf
(see appendix for recursors on finite sets)

Integration with the new datatype package is possible
by registering the nonfree datatypes as BNFs.

Cheers,
  Andy



Re: [isabelle-dev] Attributes and local theories

2011-11-14 Thread Andreas Schropp

On 11/13/2011 05:16 PM, Makarius wrote:

On Thu, 10 Nov 2011, Andreas Schropp wrote:
The implementation manual states that I have to treat all of them
uniformly. This means that the attribute transformation on a lthy
would not result in an update of the target context but of the
auxiliary context instead. Is this correct and is the target
guaranteed to be transformed in the same way beforehand?


Yes.  The update function provided by the user of local theory 
declarations should operate directly on the surface context as given, 
either Context.Theory or Context.Proof.  You cannot even count on it
being a local theory again in the application, e.g. consider
'interpretation' or 'interpret'.


Ok.
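
So, concretely, an attribute should be written against the generic context
it is handed, along these lines (a minimal sketch; My_Data is a hypothetical
Generic_Data instance, not an existing structure):

ML {*
  structure My_Data = Generic_Data
  (
    type T = thm list;
    val empty = [];
    val extend = I;
    val merge = Thm.merge_thms;
  );

  (* the same update works for Context.Theory and Context.Proof alike *)
  val my_attribute =
    Thm.declaration_attribute (fn th => My_Data.map (cons th));
*}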



The target is usually updated before the auxiliary context, but this 
should normally not matter, since the lookup should use the very same 
surface context.


Note that in official Isabelle2011-1 there is still some confusion 
here in Local_Theory.declaration, because it would omit the aux. 
context.  I have clarified that after the release, with some years 
delay as usual.


I guess you are referring to lines 61-63 of
http://isabelle.in.tum.de/repos/isabelle/file/5c0444d2abfe/src/Pure/Isar/named_target.ML#l61

  locale_declaration target syntax decl
  #> pervasive ? Generic_Target.theory_declaration decl
  #> not pervasive ? Context.proof_map (Morphism.form decl);

in Named_Target.target_declaration, which now includes the non-pervasive case.




If in the process of a local theory transformation I want to store 
information to be looked up later, but not escaping the scope of the 
target, where should I store it? In the auxiliary context or the 
target context?


Local_Theory.target in Isabelle/11d9c2768729 has explicit option 
pervasive to say if it should go into the background theory. 


Hmm, no:
   
http://isabelle.in.tum.de/repos/isabelle/file/11d9c2768729/src/Pure/Isar/local_theory.ML

I guess you mean Local_Theory.declaration.

Anyway, these are just some generalities. 


Yes and I appreciate them. ^^

Some more:
  the idea of a non-pervasive declaration (with the new semantics) is that
the resulting context modifications are registered for the target (if
necessary, i.e. in the case of locales) and they are also applied to the
auxiliary context (which is not shared between applications of different
morphisms and goes out of scope with the target)?

I guess this is what I need then.
BTW: this cannot easily be achieved with the old semantics
(which don't modify the aux ctxt), right?




What is your concrete application?


Not very concrete at all:
  I am animating a fact-accumulation scheme based on
rules of a certain format. ^^
This might be understood as: generalizing morphisms to
applications of user-specified propagation rules (instead of mostly
instantiations and discharges as of now), but outside of the scope of
locale management and under explicit user control.
A more grandiose way to look at it is this:
  user-programmable theory extensions.

The application of this, on the other hand, is to transform outcomes
of locale instantiations (particularly locales parameterized
on multi-sorted signatures) into more natural theories.
This makes use of an isomorphic transfer principle
to map certain well-behaved closed sets (which I like to call simple
soft types) to correspondingly typedef'd HOL type constructors.


Cheers,
  Andy



[isabelle-dev] Attributes and local theories

2011-11-10 Thread Andreas Schropp

Hi Isabelle people, esp. Makarius,

it has come to my attention that attributes are
called with a theory, a proof context and a local theory
in this succession.
Is this transforming the input lthy onion from inside out?

The implementation manual states that I have to treat
all of them uniformly. This means that the attribute
transformation on a lthy would not result in
an update of the target context but of the auxiliary
context instead. Is this correct and is the target
guaranteed to be transformed in the same way beforehand?

Also, does this mean we store the information resulting
from attributes 3 times, namely in theories, contexts and
local theories, potentially without any sharing?

If in the process of a local theory transformation I want
to store information to be looked up later, but not escaping
the scope of the target, where should I store it?
In the auxiliary context or the target context?


Cheers,
  Andy



Re: [isabelle-dev] Using PolyML's memory consumption profiling on Isabelle

2011-10-14 Thread Andreas Schropp

On 10/13/2011 01:24 PM, Thomas Sewell wrote:

Good day all. Just wanted to let the Isabelle developers know about the latest 
feature David Matthews has added to PolyML, and to let you all know how useful 
it is.

The feature allows profiling of objects after garbage collection. When code is 
compiled with PolyML.compiler.allocationProfiling set to 1, all objects 
allocated are also given a pointer to their allocating function. When the 
garbage collector runs with PolyML.compiler.profiling set to 4, a statistical 
trace is printed of which objects survived garbage collection.
   


Cool.

So we have:
  profiling = 1 approximates runtime via Monte-Carlo style sampling of the program counter
  profiling = 2 records the number of words allocated in each function (very accurate IIRC)
  profiling = 3 ???
  profiling = 4 counts GC survivors (very accurately?)
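
Concretely, switching this on would look roughly like (a sketch only; the
option names follow Thomas's description above, and the exact location under
PolyML.Compiler may depend on the Poly/ML version):

ML {*
  PolyML.Compiler.allocationProfiling := 1;  (* tag objects with their allocating function *)
  PolyML.Compiler.profiling := 4;            (* print statistics of objects surviving GC *)
*}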



This means that for Isabelle we get our first real look at what is taking up 
space at runtime and in images.

I include the last 20 lines of traces produced at four interesting points
   1) After including the HOL theories in HOL's ROOT.ML
   2) After performing a PolyML.shareCommonData after (1)
   3) After adding a 400-element record to a test theory built on the HOL image.
   4) After performing a shareCommonData after (3)
   


These are traces for profiling=4?


Isabelle is generating a *lot* of copies of types & terms, particularly via
Term.map_atyps. Since shareCommonData eliminates them, many are
duplicates. It's possible that further use of the same variants from
Term_Subst might help. It's also possible that the repeated reconstruction is
necessary (e.g. repeated generalization/abstraction of type variables) and
further use of the new Term_Sharing mechanism might be the answer.
   


The way I learned Isabelle programming, one views term traversal as cheap,
which seems to be true most of the time, especially when terms are freshly
allocated with nice locality properties. Sharing lots of subterms might
interfere with this.

Isn't this what GC was made for? Why introduce artificial sharing?

BTW: the Coq kernel does huge amounts of sharing IIRC.
Should we be concerned or is this just a thing because of proof terms?

Makarius, please comment on this, because now I feel like a wasteful
programmer. ;D



A large quantity of the persistent objects are Table and Net objects, as
expected.

There are surprisingly many dummy tasks.
   


What is a dummy task?


A surprisingly large quantity of the persistent objects are associated with
proof terms and name derivations. This is presumably not Thm.name_derivation
but inlined code from its subcalls Proofterm.thm_proof, proof_combP,
proof_combt' and Library.foldl, none of which are listed. If indeed these foldl
loops are producing this many objects then perhaps the work done unconditionally
here should be rethought.
   


For proofs=0?
Taking a guess these might be the PBody thms pointers.

Cheers,
  Andy




Re: [isabelle-dev] Using PolyML's memory consumption profiling on Isabelle

2011-10-14 Thread Andreas Schropp

On 10/13/2011 01:24 PM, Thomas Sewell wrote:

There are surprisingly many dummy tasks.
[...]
 918632 Task_Queue.dummy_task(1)
   


val dummy_task = Task(NONE, ~1)

Values are not shared?! What the hell?



Re: [isabelle-dev] [isabelle] (Re-)introducing set as a type constructor rather than as mere abbreviation[SEC=UNCLASSIFIED]

2011-09-05 Thread Andreas Schropp

On 08/24/2011 08:30 PM, Florian Haftmann wrote:

I don't think it's possible to have it both ways. We need to either say we are 
making this change (and then pushing it through) or not making this change.
 

Having fully compatible versions indeed is illusory.


Why exactly?

From a naive POV sets and predicates are isomorphic and
results about them could be transferred at any time.
Not that I advocate this on a large scale, but where does
this break down? Tools?
Is this intrinsic or rather an artifact of Isabelle or HOL?

I am asking because I still believe that one day
(somewhat distant, even given my levels of optimism ^^)
theorem proving will be modulo isomorphisms.
I.e. Voevodsky et al.'s vision should become true not only
in the fanciful calculi they are designing.

Cheers,
  Andy



Re: [isabelle-dev] typedef

2011-09-01 Thread Andreas Schropp

On 08/31/2011 04:21 PM, Andreas Schropp wrote:

On 08/30/2011 08:12 PM, Brian Huffman wrote:

However, the nonemptiness proof does rely on type class assumptions:
Indeed, {A. ~ finite A} is actually empty for any finite type 'a. 


I think I get your point now: assumptions for type class constraints
on variables are indeed local to nonemptiness-proofs if we eliminate
typeclass-reasoning and don't regard it as primitive.

Generalizing the type well-formedness arguments to depend on arbitrary
local assumptions (instead of only type class assumptions) is the treatment
of local typedefs I talked about earlier.
But I think your suggestion via a different semantic interpretation of
typedefs would result in easier code.

Cheers,
  Andy



Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation

2011-08-26 Thread Andreas Schropp

On 08/26/2011 01:08 PM, Makarius wrote:

On Thu, 25 Aug 2011, Andreas Schropp wrote:
Of course I am probably the only one that will ever care about these 
things, so whatever.


I have always cared about that, starting about 1994 in my first 
investigations what HOL actually is -- there are still various 
misunderstandings in printed texts.  And of course there is still no 
fully formal theory of all the aspects that have been emphasized in 
the variety of HOL implementations.


No offense: the last time we discussed these things
you were introducing local typedefs directly instead of expressing them
via global typedefs, which I told you is not exactly hard.
It looks like you are more concerned with an easy implementation,
leaving any matters of conservativity to me. I am not saying this is
bad, but every time my transformation gets harder, I am not
exactly feeling that Makarius always cares about conservativity. ;D



Maybe you want to start a realistic formalization yourself?



This conservativity result that I am animating is a global theory
transformation featuring replacement of certain typedefs by
their representing sets, regarding them as soft types.
If someone is interested I would indeed write this up, but
given the complexity (I guess 3-4 pages of rule systems, without text)
I don't think we can hope for a realistic formalization.

BTW: Steven should get credit for seriously trying
conservativity of overloading at least. This helped to expose
the misconception, though he did not notice the problem.

Also my write-up of the Isabelle kernel is 8 pages (of rule systems,
without text!) and still incomplete, so I don't think we will ever manage
to prove something important about Isabelle. I am even being generous here,
excluding e.g. unification, which is part of the trusted code (for speed,
I guess) but generates proof terms. In my eyes, the only reason we can
claim to satisfy the de Bruijn criterion is that proof-checking is much
easier in ZF. From an LCF POV we are not really in the game, because
unification et al. are trusted code.



Makarius




Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation

2011-08-26 Thread Andreas Schropp

On 08/26/2011 06:33 PM, Brian Huffman wrote:

This approach would let us avoid having to axiomatize the 'a set type.
   


Thanks for trying to help me, but as I said:
  axiomatized set is just a slight inconvenience for me, so if you go for
sets as a separate type don't bother introducing a new primitive.


Also, for those of us who like predicates, pred_typedef might be
rather useful as a user-level command.
   


Sets and predicates are isomorphic, so if my isomorphic transfer tool works
out I don't see why this is needed.
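
For reference, the isomorphism in question amounts to no more than this
(a sketch; the constant names pred_of_set/set_of_pred are made up here):

  definition pred_of_set :: "'a set => ('a => bool)" where
    "pred_of_set S = (%x. x : S)"

  definition set_of_pred :: "('a => bool) => 'a set" where
    "set_of_pred P = {x. P x}"

  lemma "pred_of_set (set_of_pred P) = P"
    and "set_of_pred (pred_of_set S) = S"
    by (simp_all add: pred_of_set_def set_of_pred_def)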


- Brian
   




Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation

2011-08-26 Thread Andreas Schropp

On 08/26/2011 06:50 PM, Tobias Nipkow wrote:

I agree. Since predicates are primitive, starting from them is appropriate.

Tobias


Did I get this right:
  the idea is to implement our advanced typedef via a HOL4-style 
predicate typedef?

That doesn't work because HOL4-style typedefs don't feature our extensions
to typedef and they are only conservative via a global theory 
transformation.


Or do you rather suggest having a predicate-based typedef with all our 
extensions?


Cheers,
  Andy


Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation

2011-08-26 Thread Andreas Schropp

On 08/26/2011 07:08 PM, Brian Huffman wrote:

What exactly are our extensions to typedef?
   


I enumerated them in the other thread:
  local typedefs, sort constraints on type parameters,
overloaded constants in representation sets.
Hopefully I am not missing others.


My understanding of the localized typedef is that it works by
declaring a global typedef behind the scenes (much like a local
definition of a polymorphic constant works by declaring a global
polymorphic definition behind the scenes).
   


Last time I looked local typedefs were primitive. Makarius?
Actually I suggested exactly this treatment of local typedefs
back in the day, so thanks for bringing this up.


What am I missing here?
   


Most importantly you are missing (indirectly) overloaded constants in
representing sets of typedefs. E.g.

  typedef 'a matrix = "{f. finite (nonzero_positions f)}"

depends on nonzero_positions, which depends on the overloaded constant
zero['a]. If we try to transform this to a HOL4-style typedef without
(indirectly) overloaded constants, we have to abstract out a dictionary
for zero['a], which would give rise to dependent typedefs. A solution is
to eliminate such typedefs completely, replacing them by their
representing sets, regarded as soft types. This is an extremely
non-trivial global theory transformation.
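
Spelled out, the situation looks roughly like this (a sketch along the lines
of HOL/Matrix; details there may differ slightly):

  definition nonzero_positions :: "(nat => nat => 'a::zero) => (nat * nat) set" where
    "nonzero_positions f = {pos. f (fst pos) (snd pos) ~= 0}"

  typedef 'a matrix = "{f :: nat => nat => 'a::zero. finite (nonzero_positions f)}"
    (* the representing set mentions nonzero_positions and hence the
       overloaded constant 0::'a; nonemptiness is witnessed by the
       everywhere-zero function *)
    by (rule exI [of _ "%j i. 0"]) (simp add: nonzero_positions_def)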


- Brian
   




Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation

2011-08-26 Thread Andreas Schropp

On 08/26/2011 07:08 PM, Brian Huffman wrote:

As far as I understand, the typedef package merely issues global
axioms of the form type_definition Rep Abs S, as long as it is
provided with a proof of EX x. x : S.
   


The global axiom is

  EX x. x : A ==> type_definition Rep Abs A

allowing local typedefs whenever non-emptiness of A is
derivable, even using assumptions in the context.

This is not really a problem, but complicates my conservativity
argument. Before local typedefs the proof of (EX x. x : A) was
global. Actually Makarius' attitude on this was that those non-emptiness
proofs should be global most of the time, but he didn't want to
introduce a check.

Cheers,
  Andy



Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation

2011-08-26 Thread Andreas Schropp

On 08/26/2011 10:38 PM, Makarius wrote:

What is gained from that apart from having two versions of typedef?


I am with you here.
We don't need two primitive versions of typedefs.

The only thing that might be sensible from my POV is using predicates
instead of sets in the primitive version (featuring all your extensions)
and simulating the one using axiomatized sets by composing Abs/Rep with
the set-predicate bijection. But as I said: I can deal with an axiomatized
set type with only minor complications, so no strong preference here.

Cheers,
  Andy




Re: [isabelle-dev] typedef

2011-08-26 Thread Andreas Schropp

On 08/26/2011 11:08 PM, Brian Huffman wrote:

On Fri, Aug 26, 2011 at 1:34 PM, Andreas Schropp schr...@in.tum.de wrote:
   

locale bla =
  assumes False
  typedef empty = "{}"

Now I have to put up with the fact that the semantic interpretation of empty is
the empty set. Formerly only non-empty sets were semantic interpretations of
type constructors. The way around this is to localize derivations related to
type constructor well-formedness, using False to forge all those crazy
things.
 

As I understand it, the typedef in the above example will cause a
conditional axiom to be generated, something like this:

axiom type_definition_empty:
   False ==> type_definition Rep_empty Abs_empty {}

As far as I can tell, this axiom doesn't force you to use the empty
set to denote type empty in your set-theoretic model. In fact, you
can use any set you want to denote type empty, because the axiom
type_definition_empty doesn't assert anything about it at all!
   


So you suggest using e.g.
  if EX x. x : A then A
  else {0}
instead of A as the semantic interpretation?
Interesting!


- Brian
   




Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation

2011-08-25 Thread Andreas Schropp

On 08/25/2011 10:45 PM, Florian Haftmann wrote:

Hi all,

thanks to many efforts (thanks particularly to Alex, Brian, Cezary,
Larry) we already have gained some insights whether and how it would be
technically possible to introduce a set type constructor.
   


Let me ask something stupid:
  why exactly do you guys axiomatize 'a set?
Sure it's admissible and all, but why not do this definitionally
via the obvious typedef?

BTW: I am pretty ambivalent about this endeavour,
but  'a set  might be beneficial when postprocessing
results of the HOL-ZF translation. So good luck!

Cheers,
  Andy



Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation

2011-08-25 Thread Andreas Schropp

On 08/25/2011 11:26 PM, Florian Haftmann wrote:

Hi Andreas,

   

Let me ask something stupid:
   why exactly do you guys axiomatize 'a set?
Sure it's admissable and all, but why not do this definitionally
via the obvious typedef?
 

the answer is rather technical: »typedef« in its current implementation
needs set syntax / set type as prerequisite.


If 'a set is introduced definitionally (as it were)
this is fine for me, but if you axiomatize 'a set my setup in HOL-ZF
gets slightly more complicated. So now I am slightly against this. ;D


Of course you could change
»typedef« to be based on predicates, but there is some kind of unspoken
agreement not to set one's hand on that ancient and time-honoured Gordon
HOL primitive.
   


Haha, very funny!
We are a long distance from the ancient HOL primitive already:
  local typedefs, sort constraints on type parameters,
overloaded constants in representation sets.
Perhaps you remember there was quite a surprise regarding those
meta-safety proofs that were once regarded as justification
of Isabelle/HOL's extensions to HOL. These days it seems my
translation is the only account of the generalized conservativity
result that still holds.

Of course I am probably the only one that will ever care about
these things, so whatever. ;D


Rgds.,
Florian

   




Re: [isabelle-dev] Fwd: [isabelle] Problem with frule_tac substitution

2011-02-09 Thread Andreas Schropp

On 02/09/2011 03:39 PM, Tobias Nipkow wrote:

If your analogy with lambda calculus is correct, then there are
situations when one must rename something exported from a proof block.
That I do not understand. Take this example:

lemma True
proof-
   { fix n::nat have n=n by auto }

produces ?n2 = ?n2. I do not see why it could not produce ?n = ?n.
   


AFAIK this question is mostly equivalent to the question
  Why does Variable.export use maxidx instead of reusing names and
  relying on an invariant like 'names are always fixed'?
and the answer I was given (by Makarius in May 2010 in private email
conversation) to that question is something like
  tools aren't this well behaved, such an invariant does not hold in general.

IIRC I even had the pleasure to witness this behaviour in actual code.
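
A minimal ML sketch of the effect (assuming the Isabelle2011-era Variable/Thm
API; the concrete index of the exported schematic variable depends on the
context):

ML {*
  let
    val ctxt0 = @{context};
    val ([x], ctxt1) = Variable.add_fixes ["x"] ctxt0;
    val th = Thm.reflexive (Thm.cterm_of @{theory} (Free (x, @{typ nat})));
  in
    (* exporting generalizes the fixed x to a schematic variable whose
       index is chosen via maxidx rather than by reusing the plain name *)
    Variable.export ctxt1 ctxt0 [th]
  end
*}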


Regards,
  Andy



Re: [isabelle-dev] Safe approach to hypothesis substitution

2010-08-24 Thread Andreas Schropp

On 08/23/2010 12:30 PM, Thomas Sewell wrote:

Hello again isabelle-dev.

I should clear up some confusion that may have been caused by my mail. I was 
trying to avoid repeating all that had been said on this issue, and it seems I 
left out some that hadn't been said as well.

This approach avoids ever deleting an equality on a Free variable during a 
'safe' reasoning step. It will substitute with that equality but keep it in 
place. This is a significant change to the intermediate state of some tactic 
scripts, thus the incompatibilities, especially on induction. It is somewhat 
surprising there aren't more such incompatibilities. There's no truly safe way 
to delete an equality on a Free variable without inspecting the whole context.
   


Naive questions:
  * why is inspecting the whole context infeasible?
  * why can't we just collect (and cache?) the Frees occurring in assumptions
    managed by assumption.ML and never delete equalities involving them?


Since equalities are not deleted, care must be taken to avoid an infinite loop 
in which a substitution is repeatedly attempted. This is the reason for the 
uniqueness check - if x appears only in 'x = expr' then there is no more 
substitution to be done. Some subtleties were discovered experimentally:
   - 'x = y' should not be substituted if either x or y is unique, or else we enter a
loop substituting x -> y -> x
   - 'expr = x' needs to be reoriented when it is substituted, or else hypsubst 
and simp will loop doing opposite substitutions.
   - 'x = y' is OK to substitute if y is a unique free and x is a non-unique 
Bound.

This is what is meant by a 'useful' substitution in the comment. Other 
equalities, which would cause a no-effect or cyclic substitution, are referred 
to as trivial (abbreviated triv) in this code. The language should probably be 
cleaned up to use a consistent set of names.
   


Sounds complicated, but cool.

You adapted blast_hyp_subst_tac in that way too, right?
The subtlety here is that blast tries to approximate the effect of that
tactic in the search and has accuracy problems even now. The more complicated
the substitution criterion, the more weirdness is induced here.

With the current situation before the patch, the accuracy problems can be
fully repaired, it seems; what works best for blast is a simple criterion of
substitutability which is decidable from the hyp (and possibly information
that stays constant in the whole search) only.



Let me address Alex's detailed comments:

1. In the HOL sources the algebra method is used to solve two problems about 
parity essentially by accident. No Groebner bases were involved, but the 
surrounding logic, which appeals to clarify_tac and a custom simpset, happened 
to solve the goal. With this change the simplified goal had an extra quantifier 
in it. Note that with this change terminal tactics operating in 'unsafe' mode - 
fast, best, etc - will call thin_triv_tac via Context_Rules and behave 
similarly to before. It is terminal tactics which take safe steps - like 
algebra - that are most likely to break.

2. Yes, count vars is a bit strange, as it will consider (%f. f) (%x. x) to 
contain two copies of Bound -1. The point is to count loose Bounds in the hypotheses as 
returned by Logic.strip_assums_hyp, which it gets right.

Yours,
 Thomas.
   




Re: [isabelle-dev] Safe approach to hypothesis substitution

2010-08-24 Thread Andreas Schropp

On 08/24/2010 06:35 PM, Makarius wrote:

On Tue, 24 Aug 2010, Andreas Schropp wrote:


Naive questions:
 * why is inspecting the whole context infeasible?
 * why can't we just collect (and cache?) the Frees occuring in 
assumptions managed

   by assumption.ML and never delete equalities involving them?


Assumptions belong to the foundation layer, i.e. they are 
conceptually somehow accidental, even hidden.


Why is the safeness of this equality-elimination not a property only 
dependent on the foundation layer?




Just like global types/consts/defs, local fixes/assumes merely 
generate an infinite collection of consequences.  The latter is what 
you are working with conceptually, but it is not practical.  So the 
system provides further slots to declare certain consequences of a 
context to certain tools: simp, intro, elim etc.


I don't get this response:
  if a Free x is not mentioned in any assumes, the elimination of a hyp x=t
cannot affect provability of the goal, because any deductions involving x
can be done with an arbitrary term of the same type (esp. t) instead, right?




Anyway, I would prefer if the non-local behaviour of hyp_subst could 
be repaired without too many additional complications, i.e. by using 
the visible goal that it is offered in a sensible way.


There are some further problems of hyp_subst that maybe Stefan 
Berghofer still recalls.  We have been standing there many times 
before, but never managed to improve it without too much effort, or 
breaking too many existing proof scripts.  The real trouble only 
starts when trying the main portion of existing applications, and also 
doing some performance measurements ...



Makarius


Re: [isabelle-dev] Safe approach to hypothesis substitution

2010-08-24 Thread Andreas Schropp

On 08/24/2010 07:51 PM, Alexander Krauss wrote:

Andreas Schropp wrote:

On 08/24/2010 06:35 PM, Makarius wrote:
Just like global types/consts/defs, local fixes/assumes merely 
generate an infinite collection of consequences.  The latter is what 
you are working with conceptually, but it is not practical.  So the 
system provides further slots to declare certain consequences of a 
context to certain tools: simp, intro, elim etc.


I don't get this response:
  if a Free x is not mentioned in any assumes, the elimination of a
hyp x=t cannot affect provability of the goal, because any deductions
involving x can be done with an arbitrary term of the same type (esp. t)
instead, right?


The safety concerns discussed here are not the only relevant design 
principle. If a method would peek at the assumptions in a context, you 
would get a different behaviour of


I am just talking about tracking Frees occurring in any assumptions, no
general peeking at assumptions. But I get your point.

And actually any tactic can peek at the assumptions, via examination
of the hyps of the goal-state, so this is at best an unenforceable design
consideration and nothing really changes by making this functionality
available from assumption.ML.

The more practical justification (as opposed to the theoretical
justification, which I alluded to above) of doing this in the context of
equality-elimination might be that most of our tools are based on
unification. So they should be closed under substitutions, meaning that if
they can prove P, they can prove any well-typed substitution of Frees and
Vars in P?




lemma fixes x shows A x ==> B x
apply method

and

lemma fixes x
  assumes a: A x
  shows B x
apply (insert a)
apply method

Safe or not, this is obviously undesirable.


Well yes, but users have to be aware of the difference
between an assume (extending the context)
and an explicit goal assumption (not extending
the context) anyway. Otherwise things like

lemma "x=t ==> t=x" "x=x"
proof -
  assume "x=t"
  thus "t=x" by (rule sym)
  show "x=x" by (rule refl)
qed

should work too without bracing or reordering.



Alex




Re: [isabelle-dev] Safe approach to hypothesis substitution

2010-08-24 Thread Andreas Schropp

On 08/24/2010 09:13 PM, Andreas Schropp wrote:

On 08/24/2010 07:51 PM, Alexander Krauss wrote:


lemma fixes x shows A x ==> B x
apply method

and

lemma fixes x
  assumes a: A x
  shows B x
apply (insert a)
apply method

Safe or not, this is obviously undesirable.


BTW: if you want to be even more conservative you
take both fixes and assumes into account
(are the Frees in assumes always guaranteed to be fixed?)
when approximating safeness of equality-elimination.
Then you would have to compare e.g.

lemma shows !! x. A x ==> B x
apply method

and

lemma fixes x
  assumes a: A x
  shows B x
apply (insert a)
apply method

to observe the difference. Is that better or worse? ^^



Re: [isabelle-dev] Safe approach to hypothesis substitution

2010-08-24 Thread Andreas Schropp

On 08/24/2010 10:51 PM, Makarius wrote:

On Tue, 24 Aug 2010, Andreas Schropp wrote:

And actually any tactic can peek at the assumptions, via examination 
of the hyps of the goal-state


A tactic must never peek at the hyps field of the goal state, and it 
must never peek at the main goal being proved.  See also section 3.2 
in the Isabelle/Isar implementation manual for some further 
well-formedness conditions that cannot be enforced by the ML type 
discipline.


Section 3.2 only lists the following:

  The main well-formedness conditions for proper tactics are summarized
  as follows.
  * General tactic failure is indicated by an empty result, only serious
    faults may produce an exception.
  * The main conclusion must not be changed, apart from instantiating
    schematic variables.
  * A tactic operates either uniformly on all subgoals, or specifically
    on a selected subgoal (without bumping into unrelated subgoals).
  * Range errors in subgoal addressing produce an empty result.

Can someone provide a full list?
Is everybody respecting all of them? I don't even know them ...

BTW: this email asks a lot of questions about conformity to seemingly
undocumented design principles.
Please don't mistake that for aggressiveness or something. ^^



The LCF type discipline of the kernel prevents proving wrong results, 
but it does not prevent breaking tools, or violating higher 
structuring principles. 


What are those higher structuring principles?
What does violation of them imply?

One very important one is independence on logical details of the 
derivation of previous results.


This forbids any tools using proof terms, e.g.
Extraction, AWE, unoverloading, HOL-ZF.

What is the motivation behind our proof terms if it is
considered a well-formedness violation to use them?

E.g. tools need to work uniformly for assumptions or derived facts, 
fixed parameters or locally defined entities.


This means I should not do a case distinction based on whether something is
an assumption etc.?

That sounds reasonable, but why is it important?
Logically an assumption is quite different from a derived fact:
  introducing an assumption weakens your results, whereas introducing a
derived fact never does. Why do we try to blur the line here?

What does this say about our equality-elimination criterion, which wants to
check if there are any assumptions involving a Free?


For historical reasons one can also have Frees that are not fixed in 
the context, or hyps that are not assumed.


So is that a higher structuring principle?
Can you list some more with motivations behind them?




A good sanity check of some idea that involves the proof context is to 
see how it interacts either with 'have' or 'obtain' in Isar.  I.e. the 
following should be conservative additions to the proof situation:


  have P x sorry


This doesn't change the assumptions, so no interaction with
that equality-elimination safeness approximation idea?



or

  obtain P x sorry


This adds the assumption P x, so hyps x=t are not
eliminated, which may be more conservative than
we need it to be, if there is no further assumption on x
(but how realistic is that?).

What is obtain without a where clause good for? It looks
like a complicated way to write a plain have P x.
The behaviour wrt
  obtain x where P x
is right:
  this adds fix x assume P x, so hyps x=t are not
eliminated, which is correct because we don't know P t,
so the elimination would lose information.



These are just necessariy conditions on structural integrity wrt. the 
context, not sufficient ones.


And are there sufficient ones? Is structural integrity of contexts
important for that equality-elimination safeness approximation idea?

Where is structural integrity of contexts used and established?

You mentioned above that

  For historical reasons one can also have Frees that are not
  fixed in the context, or hyps that are not assumed.

so this is not part of structural integrity for contexts?





[isabelle-dev] profiling

2010-03-01 Thread Andreas Schropp
On 03/01/2010 09:28 PM, Brian Huffman wrote:
 Hello all,

 I am trying to use profiling to measure the performance of the HOLCF
 domain package. I recently discovered that I can enable profiling in
 Isabelle with the following command:

 ML {* Toplevel.profiling := 1; *}

 Here's the output from a new_domain command that I tried:

 ... (lots of small numbers) ...
  10 Defs.reduction(4)reduct(3)
  11 Table().fold_table(1)fold(2)
  12 Defs.match_args(2)
  12 Sorts.mg_domain(3)
  13 Type.raw_match(3)
  15 List.map(2)
  16 GARBAGE COLLECTION (update phase)
  21 GARBAGE COLLECTION (mark phase)
  39 Table().lookup(2)look(1)
  39 GARBAGE COLLECTION (total)
  40 Table().modify(3)modfy(1)

 Total: 293; Counted: 293; Uncounted: 0
 ### 1.420s elapsed time, 1.284s cpu time, 0.156s GC time

 Apparently the domain package spends a lot of time doing lookups and
 modifications of tables. But I have no idea which functions are
 calling the table library, so this information is basically useless.


I'm just guessing from the call dependencies I know:
   it looks like you are doing some overloaded definition which results 
in (quite?) some normalization of the constant dependency tracking in 
defs.ML. But that doesn't explain Sorts.mg_domain - are type-class-heavy 
unifications involved?

 What other profiling tools are available for me to use?

To my knowledge: none.
But you can at least leverage the existing ones in funny ways, like
profiling a single function call with profile 1 f x.
And you can profile allocations by replacing 1 by 2, which is much more
accurate than the Monte-Carlo-based time profiling.
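
For example (a sketch; any function and argument will do):

ML {*
  (* time-profile a single evaluation; replace 1 by 2 to profile allocations *)
  profile 1 (map (fn i => i * i)) (1 upto 100000)
*}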

 It would be
 nice to have something similar to the profiler in GHC, which outputs a
 tree-like table that shows the time spent in each function, as well as
 the total inherited time from all other functions called by it.


Yeah, I would love to have that too.
Performance debugging in Isabelle seems to be mostly guess-and-verify.
At least that's how I am approaching it.

 - Brian




[isabelle-dev] Typing problem in autogenerated axiom

2009-11-30 Thread Andreas Schropp
Florian Haftmann wrote:
 Perhaps this is debatable, as one can always prove nonemptyness of
 arbitrary types via hidden variables:

 definition
  metaex :: ('a::{} => prop) => prop
 where
  metaex P == (!! Q. (!! x. PROP (P x) ==> PROP Q) ==> PROP Q)

 lemma meta_nonempty : PROP metaex (% x::'a::{}. x == x)
 unfolding metaex_def
 proof -
  fix Q
  assume H: (!!x::'a. x == x ==> PROP Q)
  have A: y::'a::{} == y ==> PROP Q by(rule H)
  have B: y::'a::{} == y by(rule Pure.reflexive)
  show PROP Q by(rule A[OF B])
  qed
 

 Indeed it is the other way round: the meta theory of HOL (also the
 minimal HOL of Pure) demands non-emptiness of types.  Postulating an
 unspecified constant of an arbitrary type is always admissible.
   

Yes, but why is undefined only declared for HOL types then?
This is of course only one half of the truth: sort constraints don't 
matter in the kernel and thus undefined can be used on any type, but not 
officially by users.
And more generally: why does axiomatization not do the strip_sorts 
normalization?

All this leads to strangeness in the HOL-ZF translation, where I have 
to introduce a fake constant in ZF to be the image of undefined on 
non-HOL-types.

   Florian

   



[isabelle-dev] Typing problem in autogenerated axiom

2009-11-30 Thread Andreas Schropp
Dominique Unruh wrote:
 Dear all,
   

It looks like you only replied to me. ^^

 thanks for the answers.

 Let me summarize how I understood things, just to make clear I did not
 misunderstand anything:

 * Any constant is treated by the kernel as having no sort annotations.
 Although this leads to constants defined on more types than intended
 by the user, this does not lead to inconsistencies because we could
 always imagine that the constant has some arbitrary value on the extra
 types. (And arbitrary values exist anyway because even Pure already
 ensures non-emptiness of all types.)

 * Some axioms that are autogenerated by definitions have no sort
 annotations. E.g., less_eq has an axiom giving its definition for all
 types of the form 'a::{} => 'a::{} => bool instead of only
 'a::ord => 'a::ord => Prop.

You mean 'a::ord => 'a::ord => bool, right?

 Although such an axiom asserts things about
 less_eq for types outside the typeclass ord (for which the definition
 then usually also does not make much sense), this again does not do
 any harm to consistency because we this simply means that we fix a
 certain meaningless value for less_eq on types for which we would not
 have fixed any value at all otherwise.
   

Yeah, that's the way I understand it. As I mentioned, I even invent such 
meaningless values in ZF, just to make my code more regular.

 This answers the question why there is no harm in removing sorts (at
 least from constants and definitions, not of course from class
 axioms). I am still wondering what the reasons for doing so are. At a
 first glance, at least, it would seem more natural to me if the
 constants and definitional axioms would have sort annotation. Also,
 the argument given by Florian (definitions not only *can* ignore sort
 constraints, but *must* ignore sort constraints, otherwise this
 unfolding would not be possible in all situations.) does not convince
 me, because since in a normal proof, we would never even have an
 occurrence of a constant with the wrong sorts, so there is no need
 for a definition that allows unfolding in such situations. Also, as
 mentioned by Andreas, stripping sort constraints may make it much
 harder to interpret Isabelle/HOL in other logics (be it ZF as in
 Andreas case, or my experiments of translating Isabelle/HOL to Coq).
   

Let me put this into perspective, in the light of a forthcoming kernel
extension/modification:
  in this kernel patch, all sort constraints and type class reasoning
steps are eliminated in proofterms on theorem boundaries, and
strip_sorted variants of the axioms are asserted on the proofterm side.
So what remains is a proof with sort and typeclass reasoning
internalized into the logic.
 From my POV this is needed to translate HOL into logics not involving
Isabelle-style type classes.

BTW: I would like to hear about your experiences translating HOL -> Coq.
How are you handling type class reasoning and overloading (I prefix an
unoverloading step) in your HOL -> Coq translation? How do you handle the
interactions between Pure and HOL, or is this even a problem when
translating into another type theory?

I would actually like it if even axiomatized constants would get 
topsorted declared types in consts.ML.
The strangeness in the HOL-ZF translation regarding fake constants is 
just a minor annoyance, where a Pure.undefined would help.

 Does anyone know what the reason for these design decisions are?

 Best,
 Dominique.
   



[isabelle-dev] Typing problem in autogenerated axiom

2009-11-30 Thread Andreas Schropp
Andreas Schropp wrote:
 Let me put this into perspective, in the light of a forthcoming kernel 
 extension/modification:
  in this kernel patch, all sort constaints and type class reasoning 
 steps are eliminated in proofterms on theorem boundaries, and strip_sorted

Sorry, I mean unconstrained variants of axioms, e.g.
  OFCLASS(?'a::{}, type) ==> (?t::?'a::{}) = ?t
for HOL reflexivity.

 variants of the axioms are asserted on the proofterm side. So what 
 remains is a proof which sort and typeclass reasoning internalized 
 into the logic.
 From my POV this is needed to translate HOL into logics not involving 
 Isabelle-style type classes.



[isabelle-dev] Typing problem in autogenerated axiom

2009-11-29 Thread Andreas Schropp
Florian Haftmann wrote:
 Sort constraints on constants are just a peculiarity of the parser (and
 can be changed internally, c.f. Sign.add_const_constraint), *not* a part
 of the logical foundation.  To understand, imagine you write down a
 nonsense term like (x, y) = 0 ignoring the sort constraint ::zero on
 0.  But this does not lead to problems since the only way to give
 meaning to this term is to instantiate it into theorems, where all
 relevant theorems involving 0 have a ::zero constraint.

 So, sort constraints are not relevant to definitions or constant
 signatures, but to theorems where they carry logical content.
   

I don't think this is (already) true for axiomatized constants.
If the HOL constant undefined :: 'a::{HOL.type} (aka arbitrary) should
witness only nonemptiness of HOL types, then its sort constraint is
important.


Perhaps this is debatable, as one can always prove nonemptiness of 
arbitrary types via hidden variables:

definition
  metaex :: ('a::{} => prop) => prop
where
  metaex P == (!! Q. (!! x. PROP (P x) ==> PROP Q) ==> PROP Q)

lemma meta_nonempty : PROP metaex (% x::'a::{}. x == x)
 unfolding metaex_def
 proof -
  fix Q
  assume H: (!!x::'a. x == x ==> PROP Q)
  have A: y::'a::{} == y ==> PROP Q by(rule H)
  have B: y::'a::{} == y by(rule Pure.reflexive)
  show PROP Q by(rule A[OF B])
  qed


   
 

   



[isabelle-dev] Typing problem in autogenerated axiom

2009-11-28 Thread Andreas Schropp
Dominique Unruh wrote:
 Dear all,

 while trying to export proof terms from Isabelle, I stumbled upon the
 following typing issue which might be a bug.

 In HOL.thy, an axiom class ord is defined which fixes a constant
 less_eq (syntactic sugar is <=):

   
 term HOL.ord_class.less_eq;
 
 op <=
   :: 'a::ord => 'a::ord => bool

 In Ordering.thy, an instantiation for the type constructor fun of this
 axiom class is given. Besides other things, this instantiation defines
 the axiom le_fun_def_raw:

   
 ML {* Thm.axiom (the_context ()) "Orderings.le_fun_def_raw" *};
 
 val it =
   "ord_fun_inst.less_eq_fun ==
     %(f::?'a::{} => ?'b::{}) g::?'a::{} => ?'b::{}.
        ALL x::?'a::{}. f x <= g x" : Thm.thm

 Notice the f x <= g x part which represents an application of
 HOL.ord_class.less_eq (you can verify this by adding |> Thm.prop_of
 before *} in the above ML command). f x and g x have type ?'b::{}
 which does not have class ord. So less_eq needs to have type ?'b::{}
 => ?'b::{} => bool, which does not match 'a::ord => 'a::ord =>
 bool. So, as far as I understand the type system of Isabelle, the
 axiom le_fun_def_raw is not well-typed.

 (NB: You cannot get the axiom le_fun_def_raw by writing thm
 Orderings.le_fun_def_raw in the Isabelle toplevel, this only
 retrieves the theorem with the same name. This theorem has different
 types.)

 My question is the following:

 * Is this a bug?
   
No, just idiosyncrasies, see below.
BTW: approx. one year ago I also stumbled upon this, perhaps we need an
FAQ for such things.

 * If not, why is le_fun_def_raw well-typed?

The typing rules for terms in the kernel don't check for well-sortedness.
But AFAIK the parser demands well-sortedness, thus resulting in confusion.

 Where can I find a
 description of Isabelle's type system?
   

If you want a formal account of the current implementation, I only know 
about my own sketches of the Isa09 kernel from 2 months ago. These are 7 
pages pencil-on-paper with German comments, are not reviewed and assume 
some familiarity with the implementation.

The constant-typing rule might be of interest:

  thy |- tau :: ctyp      tau is a raw instance of thy_consttyp(c)
  ================================================================
  thy |- c_tau :: tau

meaning that if the type tau can be certified (which means that sort
constraints on type variables are normalized and type constructors have
the right arities) in theory thy, and tau is a possibly non-wellsorted
instance of the declared type thy_consttyp(c) of constant c in theory
thy (which just means that there exists a type substitution sigma, not
necessarily respecting sort constraints of type variables, with
sigma(thy_consttyp(c)) = tau), then the constant c (as always, annotated
with type tau) can be given the type tau in theory thy.
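
This can be observed directly, along the lines of the PS quoted below (a
sketch using the Isa09-era interfaces; the point is that the ill-sorted
instance passes certification):

ML {*
  (* less_eq at a type instance whose variable does not have sort ord *)
  Thm.cterm_of (the_context ())
    (Const ("HOL.ord_class.less_eq",
      TFree ("'a", []) --> TFree ("'a", []) --> HOLogic.boolT))
*}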


 Best wishes,
 Dominique.


 PS: I noticed the following possibly related fact:

   
 ML {* the_context () |> Sign.consts_of |> (fn c =>
   Consts.read_const c "HOL.ord_class.less_eq") *};
 
 val it = Const ("HOL.ord_class.less_eq", "?'a::{} => ?'a::{} => bool")
  : Term.term

 Now suddenly the type of less_eq does not contain type classes, in
 contrast to what term HOL.ord_class.less_eq; outputs. I do not
 understand why.
   

If the definition-mechanism used is high-level enough (which is almost 
always the case) then constants get topsorted types and definitions 
(so they have no type class constraints) before they enter the kernel.
Parsing and pretty-printing seem to recover the constraints (which are 
registered by type class and overloading definitions, I guess).

Regards,
  Andy



[isabelle-dev] Isabelle/HOL axiom ext is redundant

2009-11-17 Thread Andreas Schropp
Brian Huffman wrote:
 I would consider this as an implementation detail, and not worry about
 it any further.

 Tools that operate on the level of proof terms can then eliminate
 eq_reflection, as was pointed out first by Stefan Berghofer some years ago,
 IIRC.  (I also learned from him that it is better to speak about Pure as
 a logical framework, and not use the term meta logic anymore.)

   
 Yeah, Stefan does this in HOL/Tools/rewrite_hol_proof.ML (which was tested
 quite thoroughly since 5e20f9c20086, so also in Isa09).
 

 This is very interesting... I'd like to learn more about the status of
 eq_reflection as a meta-theorem. Is there a paper about this?

   

I don't know a paper, but Stefan's thesis has a section on the
elimination of eq_reflection and meta_eq_to_obj_eq:
  the idea for the elimination is to propagate (by using the respective HOL
theorems iffD1, iffD2, trans, refl, sym, cong) the invocation of the
meta_eq_to_obj_eq theorem as far down as possible, so that it should (if
this is a well-behaved HOL proof) be applied to an invocation of the
eq_reflection axiom and both can be eliminated.
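
For orientation, the two conversions that get cancelled against each other,
stated as plain lemmas:

  lemma "a == b ==> a = b"
    by (rule meta_eq_to_obj_eq)

  lemma "a = b ==> a == b"
    by (rule eq_reflection)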

AFAIK rewrite_hol_proof also tries to normalize the proof (to make it 
more constructive, less extensional?), especially by using the real 
intro/elim/cong rules for logical connectives instead of the 
artificial rewriting proofs with iffD1,iffD2. This is also in Stefan's 
thesis I think.


Problems arise when HOL proofs aren't exactly well-behaved, for example
directly using == instead of = in the proposition, or if there are
fragments of locale or type class reasoning in them, which often involve
meta-conjunction (&&) and sometimes can't be easily internalized. Those are
some of the reasons why I preferred to be robust against any
interactions of Pure and HOL in the translation to ZF.

Regards,
  Andy