Re: [isabelle-dev] Let and tuple case expressions

2014-10-03 Thread Brian Huffman
Perhaps this should be done uniformly for all single-constructor
datatypes, not just pairs. Any case expression with a single branch
could be printed as a let.

- Brian

On Thu, Oct 2, 2014 at 9:13 AM, Florian Haftmann
florian.haftm...@informatik.tu-muenchen.de wrote:
 In a private discussion, there was a question about what can be done against

   let (a, b) = p in t

 being simplified by default to

   case p of (a, b) => t

 I made one attempt (see attached patch) to suppress this.  However, after
 realizing that proofs now tend to become more complicated, I gave this a
 second thought.

 In short, I have come to the conclusion that

   let (a, b) = p in t

   case p of (a, b) => t

 at the moment being logically distinct, should be one and the same.  In
 other words, I suggest that any case expression on tuples should be
 printed using »let« rather than »case«.  The constant »Let« would turn
 into a degenerate case combinator with no actual pattern match.

 This is very much the same way as the code generator translates let- and
 case expression to target languages.
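
As an illustration (a sketch in Isabelle notation, not from the original
mail, and untested): the two forms are already provably equal, which is
what would make printing both as »let« harmless.

  lemma "(let (a, b) = p in f a b) = (case p of (a, b) => f a b)"
    by (cases p) simp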

 Opinions?
 Florian

 --

 PGP available:
 http://home.informatik.tu-muenchen.de/haftmann/pgp/florian_haftmann_at_informatik_tu_muenchen_de

 ___
 isabelle-dev mailing list
 isabelle-...@in.tum.de
 https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev



Re: [isabelle-dev] show A == B

2014-07-11 Thread Brian Huffman
On Fri, Jul 11, 2014 at 5:54 AM, Askar Safin safinas...@mail.ru wrote:
 Hi. I think I found a bug in the Isabelle 2013-2 (OS: Debian GNU/Linux 7, 
 Interface: jEdit). For example, I type the following in the jEdit Isabelle 
 interface:

 ==begin==
 notepad begin
 have "A == B" and C proof -
   show "A == B" sorry
 ==end==

 Then, Isabelle will accept this show and the Output section of jEdit will 
 show me:
[...]
 goal (2 subgoals):
  1. A ⟹ A
  2. C
 ==end==

 So, we see strange A == A goal. Then I can continue:

Hi Askar,

This exact issue has been discussed previously (multiple times!) on
the isabelle-users list.

https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2010-November/msg00024.html

https://lists.cam.ac.uk/mailman/htdig/cl-isabelle-users/2009-April/msg00052.html
https://lists.cam.ac.uk/mailman/htdig/cl-isabelle-users/2009-September/msg00044.html

I still agree with what I said back then: This behavior of show with
meta-implication is surprising and confusing to ordinary users, and we
really should change it.

 ==begin==
   show C sorry
 qed
 ==end==

 And Isabelle will accept my proof. So, proof checking is OK, but the Output 
 shows confusing output.

When you write qed, the default behavior is to try to solve any
remaining goals by assumption, which is why your proof script still
works. (In case you didn't know, you can also use e.g. qed auto to
try to solve remaining goals with auto. This is useful for proofs
with lots of uninteresting trivial cases.)
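
A minimal sketch of this (illustrative, not from the original mail;
untested): the goal that is never shown is discharged by the method given
to qed.

  notepad
  begin
    have "x = x" and "y = y"
    proof -
      show "x = x" by (rule refl)
    qed simp  (* the remaining goal "y = y" is solved by simp at qed time *)
  end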

You are fortunate that your proof script still works; as discussed in
the linked posts from 2009, proving an if-and-only-if proposition in
this style will fail due to quirks of this show behavior.

- Brian


[isabelle-dev] Isabelle_makeall not finishing on testboard

2014-03-20 Thread Brian Huffman
Hi everyone,

I've noticed that recent changesets (up to 00112abe9b25) on testboard
have completed Pure and HOL tests, but the HOL_makeall results never
show up. (The most recent HOL_makeall report is 3253aaf73a01, dated
March 18.)

Does anyone know why this is? Is one of the HOL theories going into an
infinite loop? (I currently have insufficient computing resources to
test everything myself.)

- Brian


Re: [isabelle-dev] HOL-Decision_Procs FAILED

2014-03-18 Thread Brian Huffman
On Tue, Mar 18, 2014 at 12:55 PM, Makarius makar...@sketis.net wrote:
 Are you actually working together with Johannes Hölzl on these parts, or is
 your later change 32b7eafc5a52 merely a coincidence?

We are not working directly together, but I suppose that the same
email conversation with Larry Paulson has motivated both of us
separately to clean up the Multivariate_Analysis theories.

 Did you notice practically relevant performance issues with the
 HOL-Multivariate-Analysis sessions?  It is one of the (relatively few)
 sessions where the fact name space change (for semantic completion) has
 significant impact.  I am still reading the tea leaves in the charts at
 http://isabelle.in.tum.de/devel/.  It is not yet clear to me, why this
 session suffers more than others.

I haven't noticed anything; but I suppose I have not been compiling
the theories regularly enough to notice any performance patterns at
all. Do you see the slowdown in batch mode, interactively, or both?

- Brian


Re: [isabelle-dev] Problem with transfer method

2013-09-30 Thread Brian Huffman
On Fri, Sep 27, 2013 at 12:16 PM, Florian Haftmann
florian.haftm...@informatik.tu-muenchen.de wrote:
 context field_char_0
 begin

 …

 lemma of_rat_minus: "of_rat (- a) = - of_rat a"
   apply transfer

 the »- _« (name uminus) gets replaced by a bound variable »uminusa«.
 There seems to be a problem with the context here.

The problem here is actually the heuristic that transfer uses to
decide whether or not to generalize over a free variable. Currently
the rule is that all frees are generalized, except for those that
appear in a transfer rule registered in the local context.

In context field_char_0, uminus is a free variable of type 'a => 'a
which is not mentioned in any transfer rules. The transfer tactic
therefore treats it like any other free variable.

I can see two possible directions for improvement in the heuristic:

1. Always avoid generalizing free variables that are locale
parameters. The problem is that I don't know how to query the context
to find out what they are. (Suggestions, anyone?)

2. Consider the type of the free variable when deciding whether to
generalize. Generalization is not necessary if transfer will not
change the type of the variable. This solution would require careful
design, and would be more work to implement.

For now, you can just use apply (transfer fixing: uminus) as a workaround.
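
Spelled out against the original example (a sketch, untested):

  context field_char_0
  begin

  lemma of_rat_minus: "of_rat (- a) = - of_rat a"
    apply (transfer fixing: uminus)  (* the locale parameter stays fixed *)
    sorry

  end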

- Brian


Re: [isabelle-dev] Problem with transfer method

2013-09-30 Thread Brian Huffman
On Mon, Sep 30, 2013 at 2:34 PM, Makarius makar...@sketis.net wrote:
 According to the normal context discipline, free variables are always fixed,
 like a local const.  We have a few tools that violate that principle and
 consequently cause confusion, e.g. the Simplifier. There are sometimes
 historical reasons for that, and little hope for reforms.

 Is this also the case for transfer?

If you're asking whether transfer distinguishes the term constructors
Free and Const, then the answer is yes. Transfer may generalize a Free
(according to the heuristic) but will never generalize over a Const.

With more and more localized tools using Frees as constants, perhaps
it would be more robust for transfer to treat Free and Const exactly
the same. I will have to think more about this.

- Brian


Re: [isabelle-dev] Obsolete numeral experiments?

2013-05-28 Thread Brian Huffman
On Tue, May 28, 2013 at 6:53 AM, Makarius makar...@sketis.net wrote:
 What is the status of the following experiments wrt. the regular numerals in
 main HOL?

 http://isabelle.in.tum.de/repos/isabelle/file/6324f30e23b6/src/HOL/ex/Binary.thy
 http://isabelle.in.tum.de/repos/isabelle/file/6324f30e23b6/src/HOL/ex/Numeral_Representation.thy

 Can we delete that, and keep the history inside the history? Or are there
 remaining aspects that are not in the official numeral implementation (and
 reform) by Brian Huffman?

Numeral_Representation.thy defines a couple of type classes for
subtraction that were never added to the main libraries:
semiring_minus and semiring_1_minus. (I believe these were Florian's
work.) They would let us generalize some rules that are currently
specific to nat. We should discuss whether these (or some variation)
would be appropriate to add to Groups.thy before we delete
Numeral_Representation.thy.

As for Binary.thy, I believe that all the main ideas there have
already been incorporated into the HOL numerals library, so there's no
reason not to delete it.

- Brian


Re: [isabelle-dev] NEWS: Case translations as a separate check phase independent of the datatype package

2013-04-25 Thread Brian Huffman
I discovered a problem with the pretty-printing of some terms (I am
using revision 4392eb046a97).

term "(λ(a, b) (c, d). e) x y"

"case_guard True x
   (case_cons (case_abs (λa. case_abs (λb. case_elem (a, b) (λ(c, d). e))))
    case_nil)
   y"
  :: "'e"

I assume this is a result of the recent changes to the handling of
case expressions?

- Brian


On Wed, Apr 10, 2013 at 9:16 AM, Dmitriy Traytel tray...@in.tum.de wrote:
 * Nested case expressions are now translated in a separate check
   phase rather than during parsing. The data for case combinators
   is separated from the datatype package. The declaration attribute
   case_tr can be used to register new case combinators:

   declare [[case_translation case_combinator constructor1 ... constructorN]]

 This refers to Isabelle/bdaa1582dc8b

 Dmitriy


Re: [isabelle-dev] NEWS: Case translations as a separate check phase independent of the datatype package

2013-04-24 Thread Brian Huffman
On Wed, Apr 24, 2013 at 2:52 AM, Dmitriy Traytel tray...@in.tum.de wrote:
 Hi Brian

 thanks for the report. Isabelle/cf039b3c42a7 resolves this in the sense that
 internal constants (like case_guard) are not visible anymore.

Thanks for taking care of this so quickly!

 However, your example is still not printed as expected (assuming that the
 output should be equal to the input in this case):

 (case x of (a, b) => λ(c, d). e) y

Usually prod_case is printed as a tuple-lambda when partially
applied, and as a case expression when fully applied. So considering
that the underlying term in my example is prod_case (λa b. prod_case
(λc d. e)) x y, the above output makes perfect sense.

 I think, the proper resolution to this is to have an uncheck phase for
 turning prod_case e intro λ(x, y). e x y before the case translation
 uncheck phase. Maybe Makarius or Stefan could comment on this. Then, I could
 have a look.

 Dmitriy

Although previous Isabelle versions had output equal to the input in
this case, it seems a bit strange for (λ(a, b) (c, d). e) to be
printed as a lambda if applied to 0 or 2 arguments, and as a case if
applied to 1. I'm not sure it's worth complicating the pretty printer
just to get this behavior back.

- Brian

 On 24.04.2013 02:10, Brian Huffman wrote:

 I discovered a problem with the pretty-printing of some terms (I am
 using revision 4392eb046a97).

  term "(λ(a, b) (c, d). e) x y"

  "case_guard True x
     (case_cons (case_abs (λa. case_abs (λb. case_elem (a, b) (λ(c, d). e))))
      case_nil)
     y"
    :: "'e"

 I assume this is a result of the recent changes to the handling of
 case expressions?

 - Brian


 On Wed, Apr 10, 2013 at 9:16 AM, Dmitriy Traytel tray...@in.tum.de
 wrote:

 * Nested case expressions are now translated in a separate check
phase rather than during parsing. The data for case combinators
is separated from the datatype package. The declaration attribute
case_tr can be used to register new case combinators:

declare [[case_translation case_combinator constructor1 ...
 constructorN]]

 This refers to Isabelle/bdaa1582dc8b

 Dmitriy


Re: [isabelle-dev] NEWS: Case translations as a separate check phase independent of the datatype package

2013-04-15 Thread Brian Huffman
It would be nice to get HOLCF case combinators working with this new
system, as the parsing for HOLCF case expressions has been a bit
broken for some time.

Can case combinators and constructor functions with
continuous-function types be made to work with the case_tr (or
case_translation?) attribute? Even if we need a separate
HOLCF-specific attribute, might it be possible to get HOLCF case
expressions to work with the new translation check phase?

- Brian

On Wed, Apr 10, 2013 at 9:16 AM, Dmitriy Traytel tray...@in.tum.de wrote:
 * Nested case expressions are now translated in a separate check
   phase rather than during parsing. The data for case combinators
   is separated from the datatype package. The declaration attribute
   case_tr can be used to register new case combinators:

   declare [[case_translation case_combinator constructor1 ... constructorN]]

 This refers to Isabelle/bdaa1582dc8b

 Dmitriy


Re: [isabelle-dev] Code export to Haskell and lower-case theory names

2013-03-29 Thread Brian Huffman
On Fri, Mar 29, 2013 at 10:40 AM, Joachim Breitner breit...@kit.edu wrote:
 Hi,

  On Friday, 29.03.2013, at 13:24 +0100, Makarius wrote:
  This has got nothing to do with Isabelle's informal conventions but is
  all about the target language's formal rules.

 The conventions about theory names and locale/class names are not that
 informal.  If you violate them systematically, certain name space policies
 will break down.  That is then a general user error.

 Are there really things that break if I deviate from the convention?

The main problem I know about is that qualified names can become
ambiguous: e.g. if foo.bar.baz and bar.boo.baz are both in scope,
then bar.baz could refer to either of them. The problem is avoided
if theory names and locale/type/constant names are kept disjoint. See
also this old thread:

https://mailmanbroy.informatik.tu-muenchen.de/pipermail/isabelle-dev/2011-July/001674.html
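
A small constructed instance of such a clash (all names hypothetical;
sketch only):

  theory bar
  imports Main
  begin
    definition baz :: nat where "baz = 0"   (* full name: bar.baz *)
  end

  (* If some other accessible entity also ends in bar.baz — say a constant
     baz inside a locale bar of a theory foo, giving foo.bar.baz — then the
     partially qualified name bar.baz no longer determines a unique target. *)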

 Then it should be a hard rule enforced by the system. If that is not the
 case, then it should be fully supported and up to the user to choose his
 naming, even if it deviates from what others use (she might have reasons
 for that).

I completely agree.

- Brian


Re: [isabelle-dev] Problem with simproc enat_eq_cancel

2013-03-06 Thread Brian Huffman
On Wed, Mar 6, 2013 at 7:18 AM, Makarius makar...@sketis.net wrote:
 On Fri, 1 Mar 2013, Andreas Lochbihler wrote:
 *** Proof failed.
 *** (a + b - 1 = a - 1 + b) = (a + (b + - 1) = a + (- 1 + b))
 ***  1. (a + b - 1 = b + (a - 1)) = (a + (b + - 1) = a + (b + - 1))
 *** The error(s) above occurred for the goal statement:
 *** (a + b - 1 = a - 1 + b) = (a + (b + - 1) = a + (- 1 + b))
 *** At command apply

 The simp trace with simp_debug enabled ends as follows:

 [1]Trying procedure Extended_Nat.enat_eq_cancel on:
 a + b - 1 = a - 1 + b

 simp_trace_depth_limit exceeded!


 These simprocs were introduced by Brian Huffman here:

 changeset:   45775:6c340de26a0d
 user:huffman
 date:Wed Dec 07 10:50:30 2011 +0100
 files:   src/HOL/Library/Extended_Nat.thy
 description:
 add cancellation simprocs for type enat


 We should give Brian time to comment on it (there is no need for real-time
 reactivity on isabelle-dev).

I just pushed a changeset to testboard that should fix the problem
(abdcf1a7cabf). The issue was that the enat simprocs used the library
function Arith_Data.dest_sum : term -> term list to split sums; it
treats subtraction x - y as equivalent to x + - y, which is not
valid for enat.

 He did all these renovations of the old
 simprocs, and this one seems to be derived from the same tradition.

Actually, the enat cancellation simprocs are not done in my new
conversion-based style; these are just new instances of Larry's old
extract-common-term simproc functor
(Provers/Arith/extract_common_term.ML).

- Brian


Re: [isabelle-dev] typedef (open) legacy

2012-10-08 Thread Brian Huffman
On Fri, Oct 5, 2012 at 10:37 AM, Makarius makar...@sketis.net wrote:
 On Thu, 4 Oct 2012, Brian Huffman wrote:

 I was reluctant to support closed type definitions at all in HOLCF's
 cpodef and friends, preferring to make (open) the default behavior; but in
 the end I decided it was more important to make the input syntax consistent
 with typedef. I would be more than happy to remove this feature from the
 HOLCF packages if typedef will be changed to match.

 Do you want to make the first move to remove the (open) option from the
 HOLCF cpodef packages?  I will later follow trimming down HOL typedef --
 there are a few more fine points on my list to iron out there once the set
 defs are gone. Alternatively, I can do it for 'typedef' packages
 simultaneously.

Hi Makarius,

I have a changeset that removes the set-definition features from the
{cpo,pcpo,domain}def commands in HOLCF, and it checks successfully on
testboard.

http://isabelle.in.tum.de/testboard/Isabelle/rev/a093798fa71b

Should I go ahead and push this changeset to the current tip?

- Brian


 Before doing anything on the packages, we should make another round through
 the visible universe of theories to eliminate the last uses of the legacy
 mode.


 Makarius


Re: [isabelle-dev] typedef (open) legacy

2012-10-04 Thread Brian Huffman
On Thu, Oct 4, 2012 at 2:17 PM, Makarius makar...@sketis.net wrote:
 On Thu, 4 Oct 2012, Christian Urban wrote:

 Closed type definitions with

  typedef new_type = some set

 are considered legacy. The warning message suggests
 to use

  typedef (open) new_type = some set

 but as far as I can see, this does not introduce a definition for the set
 underlying the type. For example the theorem

  new_type_def

 is not generated with open.

Defining a set constant does not make sense for all typedefs. For
example, type 'a word in HOL-Word is defined something like this:

typedef (open) 'a word = "{(0::int) ..< 2^card (UNIV :: 'a set)}"

It is not possible to define a single constant word :: int set that
is equal to the given right-hand side for any 'a.

Similar typedefs are also found in the HOLCF libraries.

Here are the options we have for the typedef implementation:
1) typedef fails with an internal error on such definitions in its
default (closed) mode
2) typedef includes special code to generate set definitions with
extra type parameters, e.g. word :: 'a itself => int set
3) typedef uses (open) as the default mode, with closed definitions as an option
4) typedef supports only open set definitions

Option 1 is how typedef worked until a few years ago; obviously the
error message was undesirable. Option 2 is how it works now, but the
special code complicates the package and the special behavior is a bit
surprising when it happens.

I think either option 3 or 4 would make sense, although I'd say 4 is
preferable for a couple of reasons: First, it makes the implementation
of typedef simpler. Second, it actually gives users more flexibility
because if they want a set constant, they can use any definition
package, not just definition. The extra verbosity in some cases is a
small price to pay.

[...]
 So today, the form with the extra definition is mostly irrelevant, but we
 were a bit slow to remove the last remains of it from the typedef packages
 (and some add-ons of it in HOLCF).  It might be time now to do the purge
 before the next release ...

I was reluctant to support closed type definitions at all in HOLCF's
cpodef and friends, preferring to make (open) the default behavior;
but in the end I decided it was more important to make the input
syntax consistent with typedef. I would be more than happy to remove
this feature from the HOLCF packages if typedef will be changed to
match.

- Brian


Re: [isabelle-dev] NEWS: Isabelle sessions and build management

2012-07-29 Thread Brian Huffman
On Sun, Jul 29, 2012 at 7:11 PM, Makarius makar...@sketis.net wrote:
 The question is if HOL-Plain and HOL-Main still have any purpose.  Full HOL
 now takes  2min on a reasonably fast machine with 4 cores.

Images like HOL-Plain or HOL-Main are often useful when I am doing
development on libraries within the HOL image. Building these images
saves me a lot of time, because otherwise I would have to load the
same set of files repeatedly in Proof General, which can take several
minutes on my (old, not-reasonably-fast, 2-core) machine.

Of course, if we can make it easy enough to build custom images, then
there is no practical reason to have HOL-Plain or HOL-Main set up by
default in the distribution.

- Brian


Re: [isabelle-dev] HOLCF lists

2012-07-19 Thread Brian Huffman
On Thu, Jul 19, 2012 at 8:32 AM, Christian Sternagel
c-ste...@jaist.ac.jp wrote:
 I created a repository at

 http://sourceforge.net/p/holcf-prelude/

Thanks for setting this up!

 PPS: As I mentioned in an earlier mail, I would like to add a constant set
 :: 'a Data_List.list => 'a set for specification purposes. I don't see how
 this is possible as inductive_set. Any hints?

I defined set in terms of a binary list membership predicate that I
defined with inductive. It seems that inductive_set is unable to
directly define a constant of type 'a => 'b set unless the parameter
of type 'a is fixed.

 The reason why I think I need this function is that I want to prove

   filter$P$(filter$Q$xs) = filter$Q$(filter$P$xs)

 which is not true unconditionally due to strictness issues. I thought that
 using set I could formulate appropriate assumptions, such that the above
 equation holds, e.g., ALL x : set xs. P x andalso Q x = Q x andalso P x.

I was able to prove a congruence lemma for filter in terms of set,
using fixed-point induction:

lemma filter_cong: "ALL x : set xs. p$x = q$x ==> filter$p$xs = filter$q$xs"

This should be sufficient to prove plenty of other lemmas including
the one you mention.
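
The construction described above might look roughly like this (constant
names are hypothetical and the definition is a sketch, not Brian's actual
code):

  inductive member :: "'a => 'a list => bool" where
    "member x (Cons$x$xs)"
  | "member x xs ==> member x (Cons$y$xs)"

  definition set :: "'a list => 'a set" where
    "set xs = {x. member x xs}"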

- Brian


Re: [isabelle-dev] HOLCF lists

2012-07-18 Thread Brian Huffman
On Wed, Jul 18, 2012 at 8:52 AM, Christian Sternagel
c-ste...@jaist.ac.jp wrote:
 On 07/17/2012 07:53 PM, Brian Huffman wrote:
 HOLCF/Library/{Nat,Int}_Discrete.thy define cpo instances for types
 nat and int, so you can write types like nat - 'a list or
 nat\^sub\bottom - 'a list. Decisions about which types to use
 should probably follow Haskell: Integer should be modeled as
 int\^sub\bottom or int lift, and for unboxed Haskell types
 (e.g. Int#) you can use int.

 Sorry for asking stupid questions (since I am to lazy to look it up myself):
 what's the difference between 'a lift and 'a u exactly? Is either of
 them preferable to model Haskell-Integers? I tried to instantiate 'a u
 for my eq-class, but failed due to sort problems, since I'm not that
 familiar with the type class hierarchy of HOLCF, maybe you could save me
 some time ;).

Type 'a u or 'a\<^sub>\<bottom> requires its argument to have
class cpo. The ordering on 'a u is the same as 'a, but with a new
bottom element.

Type 'a lift only requires its argument to have class type; the
ordering is always a flat pcpo. It is intended for use with HOL types
without cpo instances.

If 'a is a cpo with a discrete ordering, then 'a u and 'a lift are
isomorphic. However, I consider it a bad practice to use lift with
any type that actually has a cpo instance.

For modeling Haskell integers, I would suggest using nat lift, int
lift, etc. if you are not using the {Nat,Int}_Discrete libraries, but
if you are, I would prefer using nat u or int u.
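
A side-by-side sketch of the two liftings (assuming HOLCF with
Int_Discrete loaded; untested):

  typ "int lift"   (* flat pcpo over a plain HOL type of class type *)
  typ "int u"      (* a new bottom added below the discrete cpo on int *)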

 Btw: any more thoughts on actually putting code into some publicly available
 destination (bitbucket, sourceforge, ...)?

If you want to set up a bitbucket or sourceforge repo for this
purpose, that would be great. I'd be happy to contribute to something
like that.

- Brian


Re: [isabelle-dev] HOLCF lists

2012-07-17 Thread Brian Huffman
On Tue, Jul 17, 2012 at 11:04 AM, Christian Sternagel
c-ste...@jaist.ac.jp wrote:
 Dear all,

 I put together some functions on lists (that I use in some small proof) in
 Data_List.thy. While doing so and thinking about functions and proofs I
 would need in the future, I stumbled across the following points:

 1) how to name HOLCF functions for which similar functions already exist in
 HOL. As you can see from my example thy-file, I tried to be as close to
 Haskell as possible and therefore hid existing HOL-names. My thinking was
 that when having a HOLCF type of lazy lists one would never want to use the
 HOL datatype list again. Maybe this was oversimplified. What do you think?

I think it is a good idea to use the Haskell names for the HOLCF list functions.

As for name-hiding, ideally this is something that users, not library
writers, should decide. Eventually I hope to see a feature like
import qualified for Isabelle. I wouldn't spend too much time
worrying about this for now.

 2) I think at some point something like set :: 'a list => 'a set for HOLCF
 lists would be useful...

What for? Is it meant to be an abstraction of some kind of executable
function, or is it only useful for writing logical specifications?

 however, I did not manage to define it, since I was
 not able to prove continuity for the following specification

 set $ Nil = {} |
 set $ (Cons$x$xs) = insert$x$(set$xs)

 Maybe such a function is not a good idea at all in the HOLCF setting and we
 should find something different.

For set to have a continuous function type, you must be using a cpo
instance for 'a set, but which one?

If you want set to correspond to an executable type of thing, then
probably some kind of powerdomain would serve better as the result
type. But if you only want to use set for writing specifications,
then it probably shouldn't be defined as a continuous function; an
inductive definition would make more sense.

 3) also if properties only hold for defined instances of lists (there are
 some differences in how defined they have to be... only xs not bottom, or
 additional no element of xs is bottom, ...), I am currently missing a
 convenient way of expressing this... (first I was thinking about set from
 above... but then again, if set is a continuous function also set$xs can
 be undefined... so maybe there is a nicer way?)

My suggestion: If a property is executable, then define it as a
continuous function; if it is not executable, then define it as an
ordinary HOL predicate.

 4) How to nicely integrate natural numbers? I don't really want to mix =>
 and ->, e.g., the list-function take should have type nat lift -> 'a
 list -> 'a list (or something similar), rather than nat => 'a list -> 'a
 list I guess.

HOLCF/Library/{Nat,Int}_Discrete.thy define cpo instances for types
nat and int, so you can write types like nat -> 'a list or
nat\<^sub>\<bottom> -> 'a list. Decisions about which types to use
should probably follow Haskell: Integer should be modeled as
int\<^sub>\<bottom> or int lift, and for unboxed Haskell types
(e.g. Int#) you can use int.

- Brian



 I am sure some of you have already considered the above points ;), so please
 let me know your conclusions.

 cheers

 chris



Re: [isabelle-dev] Nominal and FinFuns from the AFP

2012-05-31 Thread Brian Huffman
On Thu, May 31, 2012 at 1:36 PM, Makarius makar...@sketis.net wrote:
 On Thu, 31 May 2012, Andreas Lochbihler wrote:

 At some point, when I have bundles ready to work with the existing
 notation commands, we can fine-tune this scheme further, to allow users
 to
 'include' syntax in local situations.

 I tried to implement the new syntax for FinFuns with bundles and Brian's
 notation attribute
 (https://mailmanbroy.informatik.tu-muenchen.de/pipermail/isabelle-dev/2012-April/002585.html),
 but it was not satisfactory.


 I did not have time yet to look more closely at that.

 Note that notation is based on syntax declarations of the local theory
 infrastructure, which is slightly different from what you have in
 attributes.  So notation as attributes is a bad idea.

Makarius:

If you want to call one of my ideas a bad idea, then I hope you have
a better justification for this statement.

- Brian


Re: [isabelle-dev] jedit

2012-05-15 Thread Brian Huffman
On Tue, May 15, 2012 at 8:18 AM, Tobias Nipkow nip...@in.tum.de wrote:
 How can I see the possible cases in an induction, i.e. "Show me cases" in PG?

You can type the command print_cases into your theory file (this
also works in PG).

But then the real question is, how do we expect new users to discover
this feature?

- Brian


Re: [isabelle-dev] print modes

2012-05-01 Thread Brian Huffman
On Wed, May 2, 2012 at 6:55 AM, Christian Sternagel
c-ste...@jaist.ac.jp wrote:
 is it really the case that currently the only way to obtain ASCII output
 using print modes is by specifying the empty string, like

 thm ("") conjE

 or did I miss anything? Since this print mode is occasionally useful, I
 suggest to provide a named variant, like 'plain', 'ASCII', or whatever.

+1

I would actually go a bit further, and get rid of xsymbol as a
special syntax mode.

It bothers me that if I want to define a constant with both ascii and
symbol notation, I have to use the ascii variant in the actual
definition, and then add the (far more commonly-used) symbol notation
later.

definition foo :: "'a => 'a => bool" (infix "~~" 50) where ...
notation (xsymbol) foo (infix "\<approx>" 50)

I would rather write:

definition foo :: "'a => 'a => bool" (infix "\<approx>" 50) where ...
notation (ascii) foo (infix "~~" 50)

- Brian


Re: [isabelle-dev] AFP/JinjaThreads failure

2012-04-19 Thread Brian Huffman
On Thu, Apr 19, 2012 at 4:02 PM, Makarius makar...@sketis.net wrote:
 On Tue, 10 Apr 2012, Brian Huffman wrote:

 On Tue, Apr 10, 2012 at 3:06 PM, Makarius makar...@sketis.net wrote:

 Isabelle/a380515ed7e4 and AFP/53124641c94b produce the following error:

 *** No code equations for one_word_inst.one_word
 *** At command by (line 174 of
 afp-devel/thys/JinjaThreads/Common/BinOp.thy)

 What needs to be done here?


 This is probably related to my changeset Isabelle/9475d524bafb, where
 I redefined a bunch of word operations with lift_definition instead of
 definition.

Hopefully changeset e3c699a1fae6 will take care of the problem.

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] NEWS: numeral representation

2012-03-29 Thread Brian Huffman
On Thu, Mar 29, 2012 at 7:50 AM, Florian Haftmann
florian.haftm...@informatik.tu-muenchen.de wrote:
 As it is now, theory »Code_Nat« is
 not announced that prominently, but this is not that critical.

We should at least put an announcement in NEWS about Code_Nat.

However, you have talked about making the binary representation for
nat the default in HOL-Main, i.e. merging Code_Nat into the Nat/Num
theories. Are you still interested in doing this?

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] NEWS: numeral representation

2012-03-28 Thread Brian Huffman
On Wed, Mar 28, 2012 at 9:15 PM, Florian Haftmann
florian.haftm...@informatik.tu-muenchen.de wrote:
 Btw. for the moment you have not transferred my additional changes
 concerning Efficient_Nat etc.  What are your plans for these?  To wait
 until the transition proper has fortified a little?  Or shall I take
 care for these?

Hi Florian,

You had replaced Efficient_Nat.thy with a similar theory file named
Code_Numeral_Nat.thy. I chose to rename this back to Efficient_Nat
before doing the big merge, because it meant touching about a dozen
fewer files, and avoiding breaking lots of AFP entries. I also
reverted the updates that you put in NEWS and the other documentation
related to the rename:

http://www4.in.tum.de/~huffman/cgi-bin/repos.cgi/numerals/rev/350dd336be43
http://www4.in.tum.de/~huffman/cgi-bin/repos.cgi/numerals/rev/df49a9297ced
http://www4.in.tum.de/~huffman/cgi-bin/repos.cgi/numerals/rev/afe641c9b5d8

If you still want to use the name Code_Numeral_Nat, go ahead and put
these changes back in.

Besides the Efficient_Nat/Code_Numeral_Nat naming, is there anything
else that you are still missing?

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] NEWS: numeral representation

2012-03-26 Thread Brian Huffman
On Mon, Mar 26, 2012 at 7:23 PM, Makarius makar...@sketis.net wrote:
 On Mon, 26 Mar 2012, Lukas Bulwahn wrote:

 This problem is reproducible on our testboard servers. At the moment, all
 tests of changesets after 2a1953f0d20d have to be manually aborted because
 of that reason. I hope you find a solution quickly, otherwise we should
 deactivate the Proofs-Lambda theory to keep a stable testing environment.


 OK, I have disabled HOL-Proofs-Lambda for the moment (cf. 500a5d97511a).

This is now fixed in a3a64240cd98, and I have re-enabled HOL-Proofs-Lambda.

The problem was the code equation for function nat configured in
Library/Code_Integer.thy which, in conjunction with Int.nat_numeral
[code_abbrev], produced code that looped indefinitely.

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


[isabelle-dev] NEWS: numeral representation

2012-03-25 Thread Brian Huffman
As of 2a1953f0d20d, Isabelle now has a new representation for binary numerals:

*** HOL ***

* The representation of numerals has changed. We now have a datatype
num representing strictly positive binary numerals, along with
functions numeral :: num => 'a and neg_numeral :: num => 'a to
represent positive and negated numeric literals, respectively. (See
definitions in Num.thy.) Potential INCOMPATIBILITY; some user theories
may require adaptations:

  - Theorems with number_ring or number_semiring constraints: These
classes are gone; use comm_ring_1 or comm_semiring_1 instead.

  - Theories defining numeric types: Remove number, number_semiring,
and number_ring instances. Defer all theorems about numerals until
after classes one and semigroup_add have been instantiated.

  - Numeral-only simp rules: Replace each rule having a number_of v
pattern with two copies, one for numeral and one for neg_numeral.

  - Theorems about subclasses of semiring_1 or ring_1: These classes
automatically support numerals now, so more simp rules and
simprocs may now apply within the proof.

  - Definitions and theorems using old constructors Pls/Min/Bit0/Bit1:
Redefine using other integer operations.


There are still a couple of loose ends to take care of, which are
currently marked with BROKEN in the sources:

* Nitpick_Examples/Integer_Nits.thy: A call to nitpick on a goal with
negative numerals doesn't work. I expect the problem originates in
Tools/Nitpick/nitpick_hol.ML, which I have adapted to handle positive
numerals only; it needs a neg_numeral case.

* SMT_Examples/SMT_{Examples,Tests}.thy: Some smt proofs don't work,
because the .certs files need to be updated again.

* Library/Float.thy: There are some simp rules for a bit-length
operation expressed in terms of the old binary constructors.

I also tested most of the AFP entries, and RSAPSS/Word.thy seems to be
the only theory that breaks: It contains definitions that use the old
binary constructors.

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


[isabelle-dev] canonical left-fold combinator for lists

2012-01-12 Thread Brian Huffman
Hello all,

I am writing about the following recent changeset, which adds another
left-fold operator to HOL/List.thy.

author  haftmann
Fri Jan 06 10:19:49 2012 +0100 (6 days ago)
changeset 46133 d9fe85d3d2cd
parent 461325a29dbf4c155
child 46134 4b9d4659228a
incorporated canonical fold combinator on lists into body of List
   theory; refactored passages on List.fold(l/r)

http://isabelle.in.tum.de/repos/isabelle/rev/d9fe85d3d2cd

I would like to take issue with a couple of things here. First, I
don't understand why it should be called canonical. What mainstream
programming language has a left-fold operation with this particular
type signature? The only place I have ever seen this variety of fold
is in src/Pure/General/basics.ML in the Isabelle distribution.

Second, why is the right-fold now defined in terms of a left-fold? I
think it is generally recognized that inductive proofs about
right-folds are easier than proofs about left-folds. If the goal of
defining folds in terms of each other is to simplify proofs, then I'm
sure that it would work better to define everything in terms of foldr.

Finally, do we really need *two* left-fold combinators in our standard
list library? To maintain a complete set of rewrite rules for the list
library, we need approximately one lemma for every combination of two
library functions - i.e., the number of required theorems is
approximately quadratic in the number of constants. Adding a new fold
constant means we must either add a large number of new theorems about
it, or else we must deal with having some missing theorems. Neither of
these choices sounds good to me.

Here is what I suggest: If there is a consensus that the argument
order of List.fold is more useful than the existing foldl, then we
should simply *replace* foldl with the new fold. Otherwise we should
get rid of the new List.fold.

For users who want to use programming idioms that require different
argument orders, I suggest providing a flip function (perhaps in
Fun.thy) as Haskell does:

definition flip :: "('a => 'b => 'c) => 'b => 'a => 'c"
  where "flip f x y = f y x"
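
The argument-order question can be made concrete in Haskell, the language the
paragraph above points to. The sketch below is illustrative only (the names
flip' and foldIsa are invented here): foldl takes the accumulator first,
whereas Isabelle's List.fold takes the list element first, and a flip
combinator converts one operator order into the other.

```haskell
-- Standard Haskell left fold has the accumulator first:
--   foldl :: (b -> a -> b) -> b -> [a] -> b
-- Isabelle's List.fold takes the list element first:
--   fold :: ('a => 'b => 'b) => 'a list => 'b => 'b
flip' :: (a -> b -> c) -> b -> a -> c
flip' f x y = f y x

-- Isabelle-style fold transcribed into Haskell:
foldIsa :: (a -> b -> b) -> [a] -> b -> b
foldIsa _ []     y = y
foldIsa f (x:xs) y = foldIsa f xs (f x y)

main :: IO ()
main = do
  print (foldl (-) 0 [1,2,3 :: Int])           -- ((0-1)-2)-3 = -6
  print (foldIsa (flip' (-)) [1,2,3 :: Int] 0) -- same traversal via flip', also -6
```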

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


[isabelle-dev] proposal for new numeral representation

2011-11-24 Thread Brian Huffman
Hi all,

I have been working on a new numeral representation for Isabelle
recently, and I would like to share it with everyone. An overview of
the design is now available on the Isanotes wiki [1]; a patched
version of the Isabelle hg repo is also available [2].

[1] https://isabelle.in.tum.de/isanotes/index.php/Numerals
[2] http://www4.in.tum.de/~huffman/cgi-bin/repos.cgi/numerals

The design is based on the one in HOL/ex/Numeral.thy by Florian
Haftmann. The main definitions are as follows:

datatype bin = One | Bit0 bin | Bit1 bin

class numeral = semigroup_add + one

primrec (in numeral) numeral :: "bin => 'a" where
  "numeral One = 1" |
  "numeral (Bit0 k) = numeral k + numeral k" |
  "numeral (Bit1 k) = numeral k + numeral k + 1"

class neg_numeral = group_add + one

definition (in neg_numeral) neg_numeral :: "bin => 'a" where
  "neg_numeral k = - numeral k"
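
To see how the representation evaluates, here is a Haskell transcription of
the bin datatype and the numeral function (illustrative only; in Isabelle,
numeral is polymorphic over any semigroup_add + one, while numeralInt below
is fixed to Int):

```haskell
data Bin = One | Bit0 Bin | Bit1 Bin

-- Mirror of the primrec above: Bit0/Bit1 append a binary digit,
-- so Bit1 (Bit0 One) reads as binary 101, i.e. 5.
numeralInt :: Bin -> Int
numeralInt One      = 1
numeralInt (Bit0 k) = numeralInt k + numeralInt k
numeralInt (Bit1 k) = numeralInt k + numeralInt k + 1

main :: IO ()
main = print (map numeralInt
  [One, Bit0 One, Bit1 One, Bit0 (Bit0 One), Bit1 (Bit0 One)])
-- prints [1,2,3,4,5]
```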

Some of the advantages of the design:
* No more number_ring class; numerals are available on any ring type.
* Nonsense like 10^-3 is not allowed; using negative numerals on
type nat causes a type error.
* Arithmetic simp rules on type nat are *much* simpler.
* Arithmetic rules are easier to configure for other semirings, like
extended nat.
* It is no longer ever necessary to use 1+1 in place of 2.

Adapting theories to work with the new numerals is usually not a
problem: Of all the AFP entries I tested, all but three worked without
modification. Two required small one-line changes; only RSAPSS (which
has several theorems that explicitly depend on the numeral
representation) will need deeper changes.

I would welcome any suggestions, criticism, or other feedback.

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] NEWS: attributes

2011-11-22 Thread Brian Huffman
On Sun, Nov 20, 2011 at 9:52 PM, Makarius makar...@sketis.net wrote:
 * Commands 'lemmas' and 'theorems' allow local variables using 'for'
 declaration, and results are standardized before being stored.  Thus
 old-style standard after instantiation or composition of facts
 becomes obsolete.  Minor INCOMPATIBILITY, due to potential change of
 indices of schematic variables.

 This refers to Isabelle/13b101cee425.

The standard attribute will also generalize free type variables, for example:

lemmas bar = foo [where 'a='b list, standard]

How can the same be accomplished with for? It seems not to support
type variables.

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] exception TERM raised (line 141 of Syntax/syntax_trans.ML): abs_tr

2011-11-04 Thread Brian Huffman
On Fri, Nov 4, 2011 at 8:32 PM, Makarius makar...@sketis.net wrote:
 This syntax is only defined for single abstractions, but the categories got
 mixed up (the nonterminals via syntax types):

 syntax
  _lebesgue_integral :: "'a => ('a => real) => ('a, 'b)
 measure_space_scheme => real"
    ("\<integral> _. _ \<partial>_" [60,61] 110)
[..]
 The following should work better (cf. 5c760e1692b3):

 syntax
  _lebesgue_integral :: "pttrn => real => ('a, 'b) measure_space_scheme =>
 real"
    ("\<integral> _. _ \<partial>_" [60,61] 110)

 I.e. the slots for the concrete syntax are specified with the grammer in
 mind, not the resulting term after translation.  This can be also checked
 via 'print_syntax'.

Better yet, the syntax command probably ought to be used only with
types containing nothing but nonterminals:

syntax _lebesgue_integral :: "pttrn => logic => logic => logic"
  ("\<integral> _. _ \<partial>_" [60,61] 110)

Using any other kind of type expression with a syntax command is
essentially a comment; such types are only used for generating
nonterminals (type variables go to any, type prop goes to prop,
other logical types go to logic) and are not used for typechecking.
Even worse, this style makes it *look* like the types are checked,
which can be deceptive and confusing.

I would actually be in favor of changing the syntax command to
disallow logical types, requiring only nonterminal types to be used.

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


[isabelle-dev] Why do cancellation simprocs call Lin_Arith.simproc

2011-10-29 Thread Brian Huffman
Hello all,

Lately I have been trying to clean up the code of the cancellation
simprocs, and I have come across something I don't understand.

One set of simprocs will cancel factors from inequalities, rewriting
terms like x * z < y * z to either x < y or y < x, depending on
whether it can prove 0 < z or z < 0. What I don't understand is
the method they use to try to prove 0 < z or z < 0: Instead of
recursively calling the full simplifier (as the simplifier would do
when applying a conditional rewrite rule) it just calls the linear
arithmetic simproc directly. (The code for this is in the function
sign_conv in HOL/Tools/numeral_simprocs.ML, introduced in rev.
57753e0ec1d4.)

This necessitates some awkward/redundant setup of simp rules: For
example, at type real, we have the cancellation simprocs, which match
goals like x * z < y * z or z * x < z * y. We also have some
cancellation simp rules like 0 < z ==> (z * x < z * y) = (x < y). It
seems like the simprocs should make the simp rule redundant, but they
don't: Terms like real (fact n) * x < real (fact n) * y can be
rewritten by the simp rule, but the simproc fails because it cannot
solve the side-condition 0 < real (fact n).
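
The behaviour being described can be modelled by a toy Haskell function
(hypothetical names; the real implementation is ML code in
HOL/Tools/numeral_simprocs.ML): the rewrite of x * z < y * z depends on
which sign condition for z the prover can discharge, and when neither can
be proved, no rewrite happens at all.

```haskell
-- What can be proved about the sign of the common factor z.
data Sign = Pos | Neg | Unknown deriving (Eq, Show)

-- Cancel z from "x*z < y*z", represented here just by the pair (x, y).
-- Nothing models the simproc failing because the side condition is unprovable.
cancelLess :: Sign -> (String, String) -> Maybe (String, String)
cancelLess Pos (x, y) = Just (x, y)  -- 0 < z: result is x < y
cancelLess Neg (x, y) = Just (y, x)  -- z < 0: orientation flips to y < x
cancelLess Unknown _  = Nothing      -- no rewrite

main :: IO ()
main = do
  print (cancelLess Pos ("x", "y"))      -- Just ("x","y")
  print (cancelLess Neg ("x", "y"))      -- Just ("y","x")
  print (cancelLess Unknown ("x", "y"))  -- Nothing
```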

Is there any reason why the cancellation simprocs *shouldn't* call the
full simplifier recursively?

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


[isabelle-dev] Reviving an old thread: [isabelle] structured induction: mutual definitions and arbitrary in inductions?

2011-10-12 Thread Brian Huffman
Hello all,

This is a follow-up to the conversation on the isabelle-users list
from a few months ago, about confusion that arises when using mutual
induction with the arbitrary option.

https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2011-June/msg1.html

At the time I suggested that there ought to be a warning message when
you specify an arbitrary variable, if that variable does not
actually appear in the goal. Today I finally tried implementing this
warning message; it turns out that it requires only a simple
modification to function fix_tac in src/Tools/induct.ML.

After implementing the warning, I dug through the revision history and
was surprised to find that my new feature actually used to exist! It
was removed in January 2006:

http://isabelle.in.tum.de/repos/isabelle/rev/ca56111fe69c

I don't understand why the warning message was ever removed. The
commit message fix_tac: no warning is unhelpful.

Is there any good reason why we shouldn't put the warning back in?

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] Reviving an old thread: [isabelle] structured induction: mutual definitions and arbitrary in inductions?

2011-10-12 Thread Brian Huffman
On Wed, Oct 12, 2011 at 4:01 PM, Makarius makar...@sketis.net wrote:
 On Wed, 12 Oct 2011, Brian Huffman wrote:

 After implementing the warning, I dug through the revision history and
 was surprised to find that my new feature actually used to exist! It
 was removed in January 2006:

 http://isabelle.in.tum.de/repos/isabelle/rev/ca56111fe69c

 I don't understand why the warning message was ever removed. The
 commit message fix_tac: no warning is unhelpful.

 This needs some further investigation.  My log message from 2006 is an
 example of how not to do it -- just parroting the change without any
 explanation.  (2006 was a bad year in Isabelle development.)

I can think of two potential reasons for dropping the warning:

First, since fix_tac is an exported function, it might be used by
other tactics besides induction. If the warning is triggered in other
contexts, the wording of the message might be inappropriate or
misleading.

Second, it is possible that induction with arbitrary variables might
be used by other ML packages for internal proofs. Warnings are
obviously not helpful for users if they were not caused by those
users' proof scripts.

 Does your own change do anything different from the old version?

The warning message is worded differently, but otherwise my changeset
is almost exactly the reverse of ca56111fe69c.

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] NEWS - Redundant lemmas

2011-09-22 Thread Brian Huffman
On Thu, Sep 22, 2011 at 6:10 AM, Makarius makar...@sketis.net wrote:
 On Thu, 22 Sep 2011, Lukas Bulwahn wrote:

 On 09/22/2011 11:36 AM, Peter Lammich wrote:

 Perhaps we should start using a standardized process for phasing out
 legacy theorems, like moving them into a separate theory file
 Legacy.thy that would not be imported by default, and would be
 cleared out after each release. When upgrading Isabelle, users could
 import Legacy.thy as needed to get their theories working, and then
 work gradually to eliminate the dependencies on it. What do you think?

 If one tries to follow Brian's idea, even with the Legacy.thy, there is
 still no guarantees that by importing the theory, everything works as with
 the release before.
 For some incompatible changes, retaining compatibility within another
 theory is larger than the change itself.

 But for the special case of phasing out legacy theorems, it might work,
 but then I would only restrict this Legacy theory to selected theories
 (maybe everything within HOL-Plain or even less).

 This is an old problem, and we have some partial solutions to it that are
 reasonably established in Isabelle/ML at the least.

 It is generally hard to assemble all legacy stuff in a single central
 Legacy module (or theory), because legacy stuff also needs to observe
 certain dependencies.  In ML structure Misc_Legacy is close to that idea of
 central collection point, but it is only used maybe for 30% of the hard
 legacy stuff from Isabelle/Pure.  Apart from that, naming conventions like
 legacy_foo and the well-known legacy warnings are used to make clear what
 is the situation.

Yes, there are definitely some limitations with the Legacy.thy idea,
particularly the issue of dependencies: We obviously can't put the
legacy theorems from HOL-Plain, HOL, HOLCF, HOL-Multivariate_Analysis,
etc. all in the same place. I thought it might be useful just because
it would be trivial to implement.


 In Java, there is a deprecated flag for such issues. Isabelle could
 then issue
 a warning whenever using a deprecated lemma --- like the
 legacy-warnings
 it already issues when using some deprecated features (recdef, etc.)

 You are assuming that you could flag theorems, and all tools know what
 these flags should actually mean, and if they use them.
 That's technically quite difficult to achieve.

I actually like Peter's idea of a deprecated flag better than my
Legacy.thy idea. We might implement it by adding a new deprecated
flag to each entry in the naming type implemented in
Pure/General/name_space.ML. Deprecated names would be treated
identically to non-deprecated names, except that looking up a
deprecated name would cause a legacy warning message to be printed. I
don't think it would be necessary to modify any other tools or
packages.

Of course, we'd also need to invent some interface for marking names
as deprecated in the first place. It might also be nice to associate a
text string to each deprecated name (perhaps saying what other
name/feature to use instead) instead of just using a boolean flag.


 In principle we have similar flags for name space entries already, e.g.
 concealed.  Having an official legacy status (also in the Proper IDE) is
 probably easy to implement, but the main work is maintaining the actual
 annotations in the libraries.

Several theories in Isabelle/HOL have legacy theorem sections, so in
some sense we are already maintaining some annotations. I guess the
motivation for marking theorems as legacy (instead of deleting them
immediately) is to make it easier for users to adapt to their removal:
Deleting a theorem that has already been marked as legacy for a
release or two should be less disruptive than just deleting a theorem
suddenly. The problem is that this is currently not true! Users have
no indication (unless they read the sources) of whether they are using
a legacy theorem name. So right now, it seems like the legacy
theorem annotations are only useful as TODO comments to the other
developers, saying someone ought to delete these, eventually.

 My impression is that the traditional
 approach is to clear out old material quickly, so that the issue only arises
 in a weak sense.  (In contrast, the Java standard library is famous for
 being strictly monotonic, where deprecated stuff is never disposed.)

I don't think that anyone intends theorems marked as legacy to stay
around forever. Perhaps we should start a tradition of keeping legacy
theorems for one release only, completely purging all legacy
theorems from the libraries shortly after each release. If we
implemented a legacy-theorem warning message, then one release would
still be enough time to make the transition easier for users. (And
without a legacy-theorem warning, keeping legacy theorems around
longer before eventually removing them doesn't make things any easier,
so there should be no reason to wait longer than one release.)

- Brian
___

Re: [isabelle-dev] Fwd: status (AFP)

2011-09-18 Thread Brian Huffman
On Sun, Sep 18, 2011 at 1:42 AM, Gerwin Klein gerwin.kl...@nicta.com.au wrote:
 After not running in the test for quite some time, JinjaThreads now comes 
 back failing:

 *** Undeclared constant: semilattice_sup_class.sup
 *** At command definition (line 20 of 
 /home/kleing/afp/devel/thys/JinjaThreads/Execute/Cset_without_equal.thy)
 val it = (): unit
 Exception- TOPLEVEL_ERROR raised
 *** ML error

 It looks like something in the class setup changed slightly. Could somebody 
 who is more up-to-date in this area have a look, please?

This must be due to the recent changeset:

added syntactic classes for inf and sup
http://isabelle.in.tum.de/repos/isabelle/rev/5e51075cbd97

The fix is probably to replace semilattice_sup_class.sup with sup_class.sup.

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] (Not) Using foundational theorems in proofs

2011-09-17 Thread Brian Huffman
On Fri, Sep 16, 2011 at 2:01 PM, Florian Haftmann
florian.haftm...@informatik.tu-muenchen.de wrote:
 Hi Brian,

 I am not totally happy with changes like:

 http://isabelle.in.tum.de/reports/Isabelle/rev/4657b4c11138#l1.7

 The proof text itself is shorter than before, but in less trivial
 examples it produces head ache to instantiate foundational theorems of
 locale with OF.  Indeed, huge parts of Finite_Set.thy once were written
 in that style.  The disadvantage there was that when you had to
 instantiate over a hierarchy of locales, you would find yourselves
 writing ... OF [... OF [...]] cascades.  From this I have drawn the
 conclusion that it is better to leave module system (locale +
 interpretation) and propositional system (_ == _ + _ [OF ...]) on their
 own, even if both are embedded into the same calculus.

Indeed, the proof script is a bit shorter, but that was not my primary
motivation for this change. For quite some time, my favored metric for
proof scripts has been not length, but overall execution time. By this
measure, most interpret commands within proof scripts are
extraordinarily expensive. Interpretations for some locales (like
semigroup or monoid in Groups.thy) are fast, because they are
intentionally kept small, few lemmas are declared with attributes,
etc. But for locales that come from type classes like
complete_lattice, the interpret command does a huge amount of
wasteful work: It has to generate copies of *every* lemma proved
within the class context, reprocess *all* the attribute declarations,
etc. The sheer quantity of duplicate-simp-rule warnings that come from
these interpret commands is an indicator of all the wasted work
going on.

On the other hand, I understand your comments about how the
locale+interpretation proof style is good because it maintains a layer
of abstraction: To understand the original proof scripts that use
interpret, users don't have to know that locales are implemented
with predicates and conditional theorems. My updated proof scripts
break this abstraction.

To get the best of both worlds (abstraction + decent, scalable
performance) it seems like it would be useful to have a more
lightweight alternative to the full-on locale interpretation
mechanism. Perhaps we could have a sort of lazy interpretation that
would let you do something like this:

context complete_lattice
begin

lazy_interpretation dual: complete_lattice Sup Inf sup (op \<ge>) (op >)
  inf \<top> \<bottom>
  by (fact dual_complete_lattice)

lemma Sup_bot_conv [simp, no_atp]:
  "\<Squnion>A = \<bottom> \<longleftrightarrow> (\<forall>x\<in>A. x = \<bottom>)"
  "\<bottom> = \<Squnion>A \<longleftrightarrow> (\<forall>x\<in>A. x = \<bottom>)"
  by (rule dual.Inf_top_conv)+

end

The idea is that you wouldn't actually generate any theorems or
process any attributes when the interpretation is declared; the rule
dual.Inf_top_conv would instead be generated on-the-fly at the point
the name is used. (To be honest, when I first learned about locale
interpretations, this is actually how I assumed that they worked,
until I learned otherwise.)

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] inductive_set: Bad monotonicity theorem

2011-08-26 Thread Brian Huffman
On Fri, Aug 26, 2011 at 7:06 AM, Lawrence Paulson l...@cam.ac.uk wrote:
 I am trying to process the following declaration in Probability/Sigma_Algebra:

 inductive_set
  smallest_ccdi_sets :: "('a, 'b) algebra_scheme \<Rightarrow> 'a set set"
  .
  .
  .
  monos Pow_mono

  I get the following error message (for the version with set types):

 *** Bad monotonicity theorem:
 *** {x. ?A x} \<subseteq> {x. ?B x} \<Longrightarrow> {x. Powp ?A x}
 \<subseteq> {x. Powp ?B x}
 *** At command inductive_set (line 1125 of 
 /Users/lp15/isabelle_set/src/HOL/Probability/Sigma_Algebra.thy)

I came across this same problem when adapting Lazy-Lists-II to work
with a set type. Here is my changeset:

http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/3c3a4115e273

I got the idea for the workaround from the following line in
Predicates.thy from the distribution:

lemmas Powp_mono [mono] = Pow_mono [to_pred pred_subset_eq]

Hope this helps,

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation

2011-08-26 Thread Brian Huffman
On Fri, Aug 26, 2011 at 9:36 AM, Andreas Schropp schr...@in.tum.de wrote:
 On 08/26/2011 06:50 PM, Tobias Nipkow wrote:

 I agree. Since predicates are primitive, starting from them is
 appropriate.

 Tobias

 Did I get this right:
  the idea is to implement our advanced typedef via a HOL4-style predicate
 typedef?
 That doesn't work because HOL4-style typedefs don't feature our extensions
 to typedef and they are only conservative via a global theory
 transformation.

What exactly are our extensions to typedef?

As far as I understand, the typedef package merely issues global
axioms of the form type_definition Rep Abs S, as long as it is
provided with a proof of EX x. x : S.

My understanding of the localized typedef is that it works by
declaring a global typedef behind the scenes (much like a local
definition of a polymorphic constant works by declaring a global
polymorphic definition behind the scenes).

What am I missing here?

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation

2011-08-26 Thread Brian Huffman
On Thu, Aug 25, 2011 at 1:45 PM, Florian Haftmann
florian.haftm...@informatik.tu-muenchen.de wrote:
 HOL-Hoare_Parallel FAILED

 Also Hoare_Parallel is non-terminating.

I have a patch for this one:

http://isabelle.in.tum.de/repos/isabelle/rev/5e681762d538

Once this changeset gets merged into the isabelle_set repo,
Hoare_Parallel should start working.

Here's what happened: After a clarify step in a proof script,
Isabelle yields a variable called xb (renamed from x to avoid
clashes with pre-existing x and xa), while Isabelle_set calls the
same variable i. It turns out that the Isabelle_set is actually
doing the right thing here. There is a goal of the form x : (INT i:A.
B i), and clarify applies the rule INT_I. In Isabelle_set, the new
variable name is inherited from the bound variable i in the goal.
But in the current version, the subgoal ends up in the eta-expanded
form x : (%a. (INT i:A. B i) a) and the name i doesn't get used.

To summarize: eta-expansion is a problem. Eta-expansion seems to
happen with induction (either from induct or rule), although the
eta-contract pretty-printing option tries to prevent people from
noticing (why does this option exist?!?) Switching to a separate set
type greatly reduces the number of opportunities for unwanted
eta-expansion.

- Brian
___
isabelle-dev mailing list
isabelle-...@in.tum.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation

2011-08-26 Thread Brian Huffman
On Fri, Aug 26, 2011 at 1:38 PM, Makarius makar...@sketis.net wrote:
 On Fri, 26 Aug 2011, Brian Huffman wrote:

 Here's one possible approach to the set-axiomatization/typedef issue:

 As a new primitive, we could have something like a pred_typedef
 operation that uses predicates. This would work just like the
 new_type_definition facility of HOL4, etc.

 The type 'a set could be introduced definitionally using pred_typedef.

 After type 'a set has been defined, we can implement the typedef
 command, preserving its current behavior, as a thin wrapper on top of
 pred_typedef.

 This approach would let us avoid having to axiomatize the 'a set type.
 Also, for those of us who like predicates, pred_typedef might be rather
 useful as a user-level command.

 What is gained from that apart from having two versions of typedef?

In the current version of Isabelle, not much (although I personally
think that a predicate-based variant of typedef would have value). But
assuming we go ahead and reintroduce 'a set as a separate type
again, this plan for typedef would reduce the number of primitive
concepts and axioms required to bootstrap HOL.

Of course, axioms like mem_Collect_eq and Collect_mem_eq are rather
uncontroversial, as axioms go. But if there is an easy way to avoid
declaring these as axioms, then keeping them as axioms seems a bit
gratuitous, in my opinion.

- Brian


Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation

2011-08-26 Thread Brian Huffman
On Fri, Aug 26, 2011 at 1:23 PM, Andreas Schropp schr...@in.tum.de wrote:
 On 08/26/2011 10:38 PM, Makarius wrote:

 What is gained from that apart from having two versions of typedef?

 I am with you here.
 We don't need two primitive versions of typedefs.

This is incorrect: Only the predicate-based typedef need be primitive.
The set-based typedef can be implemented definitionally as a thin
layer on top of the predicate-based one.

- Brian


Re: [isabelle-dev] typedef

2011-08-26 Thread Brian Huffman
On Fri, Aug 26, 2011 at 1:34 PM, Andreas Schropp schr...@in.tum.de wrote:
 locale bla =
  assume False
  typedef empty = {}

 Now I have to put up with the fact that the semantic interpretation of empty is
 the empty set. Formerly only non-empty sets were semantic interpretations of
 type constructors. The way around this is to localize derivations related to
 type constructor well-formedness, using False to forge all those crazy
 things.

As I understand it, the typedef in the above example will cause a
conditional axiom to be generated, something like this:

axiom type_definition_empty:
  False ==> type_definition Rep_empty Abs_empty {}

As far as I can tell, this axiom doesn't force you to use the empty
set to denote type empty in your set-theoretic model. In fact, you
can use any set you want to denote type empty, because the axiom
type_definition_empty doesn't assert anything about it at all!

- Brian


Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation

2011-08-22 Thread Brian Huffman
On Mon, Aug 22, 2011 at 8:01 AM, Lawrence Paulson l...@cam.ac.uk wrote:
 I've come across something strange in the file 
 isabelle/afp/devel/thys/DataRefinementIBP/Diagram.thy and was wondering if 
 anybody could think of an explanation.

 A proof works only if I insert before it the following:

 instance set :: (type) complete_boolean_algebra
 proof
 qed (auto simp add: INF_def SUP_def Inf_set_def Sup_set_def Inf_bool_def 
 Sup_bool_def)

 this is strange because this exact text already appears in the file 
 Complete_Lattice.thy (I refer to Florian's version of HOL), which is imported 
 by Main, which is indirectly imported by Diagram. And just in case I was 
 mistaken on this last point, I modified Diagram to import Main explicitly, 
 just to be sure. This instance declaration is still necessary.

 I just don't get this. I thought that an instance declaration lasted for ever.

DataRefinementIBP/Preliminaries.thy contains the following line:

class complete_boolean_algebra = distributive_complete_lattice + boolean_algebra

So your instance proof above is for class
Preliminaries.complete_boolean_algebra, while the instance in
Complete_Lattice.thy is for the separate
Complete_Lattice.complete_boolean_algebra class.

- Brian


[isabelle-dev] Feature request: print warning message when a definition shadows a previously-used name

2011-08-22 Thread Brian Huffman
Hello everyone,

Isabelle prints out warning messages whenever anyone declares a
duplicate simp rule, intro rule, etc. It would be nice if Isabelle
would print a similar warning whenever a definition reuses a name that
is already in scope in the current context. For example, if a theory
defines a class like this...

class complete_boolean_algebra = distributive_complete_lattice + boolean_algebra

...and a class called complete_boolean_algebra is already in scope,
then a warning message ought to be printed.

Such a warning message would be useful for constant names, lemma
names, etc. as well.

- Brian


Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation

2011-08-12 Thread Brian Huffman
On Fri, Aug 12, 2011 at 2:27 AM, Alexander Krauss kra...@in.tum.de wrote:
 What are, actually, the losses when going back? This has not yet been
 touched by this thread at all (except the compatibility/import issue
 mentioned by Brian), and at least myself I wouldn't feel comfortable
 answering this question just now...

I think the simplest solution to the import issue is just to use
predicates for everything, and don't try to map anything onto the
Isabelle set type. For example, set union and intersection in
HOL-Light can translate to sup and inf on predicates in Isabelle.
Since Isabelle has pretty good support for predicates now, I think
this would work well enough.
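The proposed mapping — set operations onto `sup`/`inf` of predicates — can be sketched as a toy model in Python (the names `member`, `sup`, `inf` are illustrative stand-ins, not Isabelle's actual constants):

```python
# Sets-as-predicates in miniature: a "set" over 'a is a function
# 'a -> bool, and the lattice operations sup/inf play the role of
# union and intersection.

def member(x, p):
    return p(x)

def sup(p, q):   # plays the role of set union
    return lambda x: p(x) or q(x)

def inf(p, q):   # plays the role of set intersection
    return lambda x: p(x) and q(x)

evens = lambda n: n % 2 == 0
small = lambda n: n < 4

assert member(2, inf(evens, small))       # 2 is even and < 4
assert not member(6, inf(evens, small))   # 6 is even but not < 4
assert member(3, sup(evens, small))       # 3 is odd but < 4
```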

Overall, I must say that I am completely in favor of making set a
separate type again. But I also believe that the sets=predicates
experiment was not a waste of time, since it forced us to improve
Isabelle's infrastructure for defining and reasoning about predicates:
Consider the new inductive package, complete_lattice type classes,
etc.

Perhaps this is mostly just a personal preference, but I am also in
favor of having the standard Isabelle libraries emphasize (curried)
predicates more than sets. We have been moving in this direction for
the past few years, and I think that is a good thing. (For example, I
think a transitive closure operator on type 'a => 'a => bool is more
valuable than on ('a * 'a) set.) I think that library support for
predicates is the main thing that would allow importing from other
HOLs to work well, regardless of whether 'a set is a separate type or
not.

- Brian


Re: [isabelle-dev] performance regression for simp_all

2011-08-12 Thread Brian Huffman
On Fri, Aug 12, 2011 at 4:07 PM, Makarius makar...@sketis.net wrote:
 http://isabelle.in.tum.de/repos/isabelle/rev/13e297dc improves startup
 time of the worker thread farm significantly, and I've got real times in the
 range of 0.003s -- 0.005s on my old machine from 2 years ago with Proof
 General.

Thanks for the quick response; with this new patch everything looks
much better. Proving True and True with simp_all in Proof General
now takes only 0.002s on my machine.

- Brian


Re: [isabelle-dev] (Re-)introducing set as a type constructor rather than as mere abbreviation

2011-08-11 Thread Brian Huffman
On Thu, Aug 11, 2011 at 5:43 AM, Florian Haftmann
florian.haftm...@informatik.tu-muenchen.de wrote:
 Hello together,

 recently in some personal discussions the matter has arisen whether
 there would be good reason to re-introduce the ancient set type
 constructor.  To open the discussion I have carried together some
 arguments.  I'm pretty sure there are further ones either pro or contra,
 for which this mailing list seems a good place to collect them.

One of the arguments in favor of identifying 'a set and 'a = bool
was for compatibility with other HOLs.

https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2008-December/msg00043.html

If we make them into separate types again, how will tools that import
proofs from other HOLs be affected?

I am interested in this question because I have written such a tool
myself (I hacked up an importer for Joe Hurd's OpenTheory format a
while back).

Florian: Is your modified Isabelle repo available for cloning, so we
can play with it? If so, I might be able to find an answer to my own
question...

- Brian


[isabelle-dev] performance regression for simp_all

2011-08-11 Thread Brian Huffman
Hello all,

Recently I've been hacking on a bunch of proof scripts using the
development version of Isabelle, and I noticed that when processing
proof scripts, I often get a noticeable pause at uses of simp_all.
The same pause does not occur with Isabelle2011.

Below is a minimal theory file to demonstrate the problem; I ran it
with global timing turned on.

theory Scratch imports ~~/src/HOL/HOL begin

lemma shows "True" and "True"
by (simp, simp)
(* 0.001s elapsed time, 0.000s cpu time, 0.000s GC time *)

lemma shows "True" and "True"
by simp_all
(* 0.253s elapsed time, 0.004s cpu time, 0.000s GC time *)

I'm using an old, slow machine, so maybe 0.253s is a bit long compared
to what most people will see. On the other hand, in terms of the
slowdown ratio, 0.1s or even 0.05s would be pretty bad.

Here's what I learned by doing hg bisect:

The first bad revision is:
changeset:   42372:6cca8d2a79ad
user:wenzelm
date:Sat Apr 16 23:41:25 2011 +0200
summary: PARALLEL_GOALS for method simp_all;

http://isabelle.in.tum.de/repos/isabelle/rev/6cca8d2a79ad

Was this change supposed to *improve* performance? Was the performance
impact tested? Maybe the performance penalty only appears when
interactively stepping through proofs, and not in batch mode?

- Brian


Re: [isabelle-dev] simplification theorems generated for records:

2011-07-28 Thread Brian Huffman
Your confusion stems from the fact that your theory file is also
called ms. The simps rules generated by the record command are
qualified by both the theory name and the type name, so the full names
in your example are ms.ms.simps and ms.aut.simps. Referring to
either of these by their full names works just fine.

When you use an incomplete qualified name like ms.simps, Isabelle
resolves this as the most recently defined name that matches. So at
the end of your example, ms.simps is resolved to ms.aut.simps.
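A toy model of this resolution behaviour (not Isabelle's actual implementation — the access lists below are hypothetical but match the behaviour described above):

```python
# Each theorem is registered under its full name plus shorter "accesses";
# an incomplete name resolves to the most recently registered match.

table = []  # (access, full_name) pairs, in definition order

def define(full_name, accesses):
    for a in accesses + [full_name]:
        table.append((a, full_name))

def resolve(name):
    for access, full in reversed(table):   # most recent definition wins
        if access == name:
            return full
    raise KeyError(name)

# record 'ms' in theory 'ms', then record 'aut' in the same theory:
define("ms.ms.simps", ["ms.simps", "simps"])
define("ms.aut.simps", ["ms.simps", "aut.simps", "simps"])

assert resolve("ms.simps") == "ms.aut.simps"    # shadowed by the later record
assert resolve("ms.ms.simps") == "ms.ms.simps"  # full names always work
```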

- Brian

2011/7/28 Mamoun FILALI-AMINE fil...@irit.fr:

 Hello,

  The simplification theorems generated for records
 seems to be overwritten: in the following text,
 the first thm ms.simps prints the correct list
 while the second thm ms.simps prints the list for the
 preceding record.

 theory ms imports Main
 begin

 record ('st,'act) ms =
  i :: 'st


 thm ms.simps

 record ('st,'act) aut =
  m0 :: 'st

 thm ms.simps



 (I use  (Isabelle2011: January 2011))

 thanks

 Mamoun Filali







Re: [isabelle-dev] [isabelle] Countable instantiation addition

2011-07-21 Thread Brian Huffman
Hello everyone,

Attached is an implementation of a countable_typedef method that I
just wrote. (It works in a very similar manner to the
countable_datatype method.) Since records are implemented as
typedefs of product types, the same method works for record types as
well. I'd be happy to add this to Library/Countable.thy if anyone
wants me to.

I haven't had time to study Mathieu's ML library yet, but if it has
any features or capabilities that my system lacks, we should
definitely try to incorporate those into whatever ends up in the next
Isabelle release. I also need to remember to add an item to the NEWS
file letting people know that these new proof methods exist.

- Brian

On Thu, Jul 21, 2011 at 11:15 AM, Alexander Krauss kra...@in.tum.de wrote:
 Hi Matthieu,

 I have written a little ML library allowing to automatically prove, in
 most
 cases, instantiations of types (typedefs, records and datatypes) as
 countable
 (see ~~/src/HOL/Library/Countable).

 It seems that for datatypes we now have such functionality, implemented four
 weeks ago by Brian Huffman:

 http://isabelle.in.tum.de/repos/isabelle/rev/15df7bc8e93f

 It comes in the form of a method, so it has to be invoked explicitly, but
 like this it doesn't produce any penalty when it is not used. If you want to
 contribute an extension to records/typedefs, you are welcome, but you
 probably want to study Brian's implementation first. I assume that he is
 also the most appropriate person to review patches for this functionality.

 Alex



Countable_Typedef.thy
Description: application/isabelle-theory


Re: [isabelle-dev] Evaluation of floor and ceiling

2011-07-07 Thread Brian Huffman
I wrote to Florian about this exact issue back in February 2010.
Florian's recommended solution at the time was to add a new subclass
of archimedean_field that fixes the floor function and assumes a
correctness property about it. This solution is really easy to
implement (see attached diff). If people think this is a reasonable
design, then I'll let someone else go ahead and test and commit the
patch.

The drawback to this design is that it requires yet another type
class, of which we have plenty already. The other alternative would be
to make floor into a parameter of class archimedean_field, replacing
the current axiom ex_le_of_int:

OLD:
class archimedean_field = linordered_field + number_ring +
  assumes ex_le_of_int: "\<exists>z. x \<le> of_int z"

NEW:
class archimedean_field = ordered_field + number_ring +
  fixes floor :: "'a \<Rightarrow> int"
  assumes floor_correct: "of_int (floor x) \<le> x \<and> x < of_int (floor x + 1)"
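The floor_correct characterisation can be sanity-checked numerically; here is a small Python sketch using exact rationals as a stand-in for an archimedean field (illustration only, not Isabelle):

```python
# Check of_int (floor x) <= x < of_int (floor x + 1) on exact rationals,
# and that this property determines floor uniquely.

from fractions import Fraction
import math

def check_floor_correct(x):
    z = math.floor(x)   # floor on Fraction is exact in Python
    return Fraction(z) <= x < Fraction(z + 1)

samples = [Fraction(5, 6), Fraction(-7, 3), Fraction(4), Fraction(-1, 2)]
assert all(check_floor_correct(x) for x in samples)

# Uniqueness: no other integer in a sample window satisfies the property.
x = Fraction(5, 6)
assert [z for z in range(-5, 6) if Fraction(z) <= x < Fraction(z + 1)] == [0]
```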

One problem with this class definition is that floor_correct is a
stronger property than ex_le_of_int, and more difficult to prove
(especially for type real). To keep class instance proofs as easy as
they are now, Archimedean_Field.thy could provide a lemma like the
following:

lemma archimedean_field_class:
  assumes ex_le_of_int:
    "\<And>(x::'a::{ordered_field, number_ring}). \<exists>z. x \<le> of_int z"
  assumes floor_def:
    "\<And>(x::'a::{ordered_field, number_ring}).
      floor x = (THE z. of_int z \<le> x \<and> x < of_int (z + 1))"
  shows "OFCLASS('a::{ordered_field, number_ring}, archimedean_field_class)"

This second approach would be less trivial to implement, but it might
be worth it to keep everything in a single type class.

- Brian

On Thu, Jul 7, 2011 at 2:26 PM, Lukas Bulwahn bulw...@in.tum.de wrote:
 Hi all,


 Jasmin has pointed out that the evaluation of floor and ceiling showed
 surprising behaviour and I looked into the topic:

 If we are interested to enable evaluation of terms such as floor (5 / 6 ::
 rat) or ceiling (1 / 2 :: real), this will require different code
 equations for the different types -- hence the definition of floor and
 ceiling would have to be moved into the archimedian type class, and then one
 could provide actually different instantiations for the evaluation.
 This seems like a non-trivial refactoring.


 Is anyone interested to use evaluation for such terms which might motivate
 to do this refactoring?


 Lukas


diff -r 47b0be18ccbe src/HOL/Archimedean_Field.thy
--- a/src/HOL/Archimedean_Field.thy	Thu Jul 07 23:33:14 2011 +0200
+++ b/src/HOL/Archimedean_Field.thy	Thu Jul 07 17:45:45 2011 -0700
@@ -141,9 +141,9 @@
 
 subsection {* Floor function *}
 
-definition
-  floor :: "'a::archimedean_field \<Rightarrow> int" where
-  [code del]: "floor x = (THE z. of_int z \<le> x \<and> x < of_int (z + 1))"
+class floor_ceiling = archimedean_field +
+  fixes floor :: "'a \<Rightarrow> int"
+  assumes floor_correct: "of_int (floor x) \<le> x \<and> x < of_int (floor x + 1)"
 
 notation (xsymbols)
   floor  ("\<lfloor>_\<rfloor>")
@@ -151,9 +151,6 @@
 notation (HTML output)
   floor  ("\<lfloor>_\<rfloor>")
 
-lemma floor_correct: "of_int (floor x) \<le> x \<and> x < of_int (floor x + 1)"
-  unfolding floor_def using floor_exists1 by (rule theI')
-
 lemma floor_unique: "\<lbrakk>of_int z \<le> x; x < of_int z + 1\<rbrakk> \<Longrightarrow> floor x = z"
   using floor_correct [of x] floor_exists1 [of x] by auto
 
@@ -273,7 +270,7 @@
 subsection {* Ceiling function *}
 
 definition
-  ceiling :: "'a::archimedean_field \<Rightarrow> int" where
+  ceiling :: "'a::floor_ceiling \<Rightarrow> int" where
   [code del]: "ceiling x = - floor (- x)"
 
 notation (xsymbols)
diff -r 47b0be18ccbe src/HOL/Rat.thy
--- a/src/HOL/Rat.thy	Thu Jul 07 23:33:14 2011 +0200
+++ b/src/HOL/Rat.thy	Thu Jul 07 17:45:45 2011 -0700
@@ -739,6 +739,20 @@
   qed
 qed
 
+instantiation rat :: floor_ceiling
+begin
+
+definition [code del]:
+  "floor (x::rat) = (THE z. of_int z \<le> x \<and> x < of_int (z + 1))"
+
+instance proof
+  fix x :: rat
+  show "of_int (floor x) \<le> x \<and> x < of_int (floor x + 1)"
+    unfolding floor_rat_def using floor_exists1 by (rule theI')
+qed
+
+end
+
 lemma floor_Fract: "floor (Fract a b) = a div b"
   using rat_floor_lemma [of a b]
   by (simp add: floor_unique)
diff -r 47b0be18ccbe src/HOL/RealDef.thy
--- a/src/HOL/RealDef.thy	Thu Jul 07 23:33:14 2011 +0200
+++ b/src/HOL/RealDef.thy	Thu Jul 07 17:45:45 2011 -0700
@@ -793,6 +793,20 @@
 done
 qed
 
+instantiation real :: floor_ceiling
+begin
+
+definition [code del]:
+  "floor (x::real) = (THE z. of_int z \<le> x \<and> x < of_int (z + 1))"
+
+instance proof
+  fix x :: real
+  show "of_int (floor x) \<le> x \<and> x < of_int (floor x + 1)"
+    unfolding floor_real_def using floor_exists1 by (rule theI')
+qed
+
+end
+
 subsection {* Completeness *}
 
 lemma not_positive_Real:

Re: [isabelle-dev] [isabelle] power2_sum outside of rings

2011-06-22 Thread Brian Huffman
Obviously, the naturals assign a non-standard meaning to negative numerals.

But are there any types that assign a non-standard meaning to
*positive* numerals? (By standard I mean 2=1+1, 3=1+1+1, 4=1+1+1+1,
etc.) Is there any reason why anyone would ever want to define or use
such a type?

If not - and if at some point in the future we switch over to unsigned
numerals - then I think it might be useful to do to class number
what we did with class power a while back: Replace the overloaded
constant with a single, standard definition.

In any case, I think such a major change would require a bit of
planning, and probably won't happen for a while. But I think that
adding a number_semiring class would be a step in the right direction,
and it would be easy enough to make that change right now.

- Brian

On Wed, Jun 22, 2011 at 12:21 PM, Lawrence Paulson l...@cam.ac.uk wrote:
 As I recall, the number class is for all types where numerals have a meaning. 
 Of course, it is a constituent of number_ring, to which many numeric types 
 belong, but not the naturals.
 Larry

 On 22 Jun 2011, at 19:55, Florian Haftmann wrote:

 A more drastic solution would be to just get rid of the number class
 altogether (its sole purpose seems to be so that you can have types
 where numerals have a non-standard meaning) and have a single
 definition of number_of that works uniformly for all types.

 This would indeed be helpful, but I guess the natural numbers are the
 show stopper.

 Of course we could also attempt to get rid of signed numerals ;-)

       Florian

 --

 Home:
 http://www.in.tum.de/~haftmann

 PGP available:
 http://home.informatik.tu-muenchen.de/haftmann/pgp/florian_haftmann_at_informatik_tu_muenchen_de





[isabelle-dev] unhelpful low-level error message from primrec

2011-05-21 Thread Brian Huffman
Hello all,

I just noticed this error message from primrec if you write a
specification that is not primitive recursive. Here is a simplified
example:

primrec foo :: "nat => nat" where
  "foo 0 = 1" | "foo (Suc n) = foo 0"

*** Extra variables on rhs: foo
*** The error(s) above occurred in definition foo_def:
***   foo \<equiv> \<lambda>a. nat_rec 1 (\<lambda>n na. foo 0) a
*** At command primrec

Apparently, the primrec package gets as far as trying to register the
non-recursive definition; then the definition command fails, and the
error is not caught by primrec.

It might be nice to generate a more helpful error message in this
case, instead of one that originates from a lower-level tool.

- Brian


Re: [isabelle-dev] Fwd: [isabelle] Problem with frule_tac substitution

2011-02-08 Thread Brian Huffman
The *_tac-style instantiation might be out of fashion, but the same
parsing rules for indexnames are also used with the where attribute,
which is still quite useful.

- Brian

On Tue, Feb 8, 2011 at 7:59 AM, Lawrence Paulson l...@cam.ac.uk wrote:
 Obviously this proposal would involve a significant incompatibility. It may 
 not even be very relevant any more, as this sort of instantiation is rather 
 out of fashion. But it is worth a discussion.
 Larry

 Begin forwarded message:

 I would propose to simplify the parsing rules to work like this: Any
 variable name with index 0 can be referred to without a question mark
 or dot, and a dot is always required for indices other than 0.

 "x", "?x" and "?x.0" parse as ("x", 0)
 "x0", "?x0" and "?x0.0" parse as ("x0", 0)
 "x2", "?x2" and "?x2.0" parse as ("x2", 0)
 "?x.2" parses as ("x", 2)
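The proposed rules are simple enough to prototype. A Python sketch of just the proposal (hypothetical code; it ignores Isabelle's subscript symbols and the full lexical syntax):

```python
# Proposed simplified indexname parsing: an optional "?", then the
# variable name, then an optional "." followed by the index.  Trailing
# digits in the name are never chopped off into an index.

def parse_indexname(s):
    """Parse a (schematic) variable name into a (name, index) pair."""
    if s.startswith("?"):
        s = s[1:]
    if "." in s:
        name, idx = s.rsplit(".", 1)
        return (name, int(idx))
    return (s, 0)

assert parse_indexname("x") == ("x", 0)
assert parse_indexname("?x") == ("x", 0)
assert parse_indexname("?x.0") == ("x", 0)
assert parse_indexname("x2") == ("x2", 0)      # no digit-chopping
assert parse_indexname("?x2.0") == ("x2", 0)
assert parse_indexname("?x.2") == ("x", 2)     # dot required for index 2
```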




Re: [isabelle-dev] Fwd: [isabelle] Problem with frule_tac substitution

2011-02-08 Thread Brian Huffman
On Tue, Feb 8, 2011 at 9:01 AM, Lawrence Paulson l...@cam.ac.uk wrote:
 Historically, the point is that index numbers were regarded as very important 
 in variable names, while identifiers ending with digits were not seen as 
 important. And there are other ways of making multiple identifiers. Nowadays, 
 there aren't that many situations where a user would need to refer to a 
 nonzero index number.
 Larry

I tried changing the parsing/printing of indexnames, to see how
serious of an incompatibility it would be. The diffs of
Syntax/lexicon.ML (parsing) and term.ML (printing) are pasted at the
bottom of this email.

I found that nothing in HOL-Main is affected by this change. A few
proof scripts below Complex_Main are affected, though, where the proof
script actually does need to refer to a nonzero index number. There
are two kinds of situations where this happens:

* Explicitly instantiating a rule made with THEN or OF, such as
apply (rule_tac ?a.1 = "log a x" in add_left_cancel [THEN iffD1])

* Instantiating a rule proved within an open-brace-close-brace block
in a structured proof. I was surprised by this one. For example:

lemma True
proof -
  { fix n :: nat
have n = n by simp
  }

this:
  ?n2 = ?n2

I expected this to have the form ?n = ?n, with index 0. But for
some reason, the actual rule uses index 2. Some proof scripts in
SEQ.thy use something like note foo = this, followed later with an
instantiation using the where attribute that refers to the nonzero
index.

- Brian


diff -r ab3f6d76fb23 src/Pure/Syntax/lexicon.ML
--- a/src/Pure/Syntax/lexicon.ML	Tue Feb 08 16:10:10 2011 +0100
+++ b/src/Pure/Syntax/lexicon.ML	Tue Feb 08 09:03:50 2011 -0800
@@ -297,21 +297,9 @@
   let
 fun nat n [] = n
   | nat n (c :: cs) = nat (n * 10 + (ord c - ord "0")) cs;
-
-fun idxname cs ds = (implode (rev cs), nat 0 ds);
-fun chop_idx [] ds = idxname [] ds
-  | chop_idx (cs as (_ :: "\\<^isub>" :: _)) ds = idxname cs ds
-  | chop_idx (cs as (_ :: "\\<^isup>" :: _)) ds = idxname cs ds
-  | chop_idx (c :: cs) ds =
-  if Symbol.is_digit c then chop_idx cs (c :: ds)
-  else idxname (c :: cs) ds;
-
-val scan = (scan_id >> map Symbol_Pos.symbol) --
-  Scan.optional ($$$ "." |-- scan_nat >> (nat 0 o map Symbol_Pos.symbol)) ~1;
   in
-scan >>
-  (fn (cs, ~1) => chop_idx (rev cs) []
-    | (cs, i) => (implode cs, i))
+(scan_id >> (implode o map Symbol_Pos.symbol)) --
+  Scan.optional ($$$ "." |-- scan_nat >> (nat 0 o map Symbol_Pos.symbol)) 0
   end;

 in
diff -r ab3f6d76fb23 src/Pure/term.ML
--- a/src/Pure/term.ML  Tue Feb 08 16:10:10 2011 +0100
+++ b/src/Pure/term.ML  Tue Feb 08 09:03:50 2011 -0800
@@ -990,15 +990,8 @@
 fun string_of_vname (x, i) =
   let
 val idx = string_of_int i;
-val dot =
-  (case rev (Symbol.explode x) of
-    _ :: "\\<^isub>" :: _ => false
-  | _ :: "\\<^isup>" :: _ => false
-  | c :: _ => Symbol.is_digit c
-  | _ => true);
   in
-if dot then "?" ^ x ^ "." ^ idx
-else if i <> 0 then "?" ^ x ^ idx
+if i <> 0 then "?" ^ x ^ "." ^ idx
 else "?" ^ x
   end;


Re: [isabelle-dev] Sort Annotation vs. Class Context for Lemmas

2011-01-21 Thread Brian Huffman
On Fri, Jan 21, 2011 at 7:03 AM, Tjark Weber webe...@in.tum.de wrote:
 On Fri, 2011-01-21 at 15:27 +0100, Tobias Nipkow wrote:
 One issue can be that if some sort involves multiple type classes, the
 corresponding class may not have been defined. That is the one advantage
 of sorts: you can create conjunctions/intersection of type classes on
 the fly.

 If there is just one intersection of type classes, one should perhaps
 consider defining the corresponding class (before stating the lemma in
 its context).  The alternative is that the poor user who later feels
 that this class would be really useful has to re-prove every lemma.

 Of course, this doesn't work for lemmas that involve separate sorts.

I'm not sure if this is what you have in mind, but in HOLCF there is a
continuity predicate

cont :: ('a::cpo => 'b::cpo) => bool

which of course is impossible to define or reason about within the
context of any single class (unless you do some weird mixing of
locale- and class-based reasoning). The same goes for the mono
predicate in the main HOL libraries. So it is clear that locale
contexts are quite restrictive compared to stating lemmas with type
class constraints.

But anyway, let me step back a bit and ask a higher-level question:
Why do you need to prove lemmas *in* the context of a class at all?

Unless you are planning on doing locale interpretations of
class-locales (which are often a poor fit for where they are used;
search for %m n::nat. m dvd n & ~ n dvd m in GCD.thy for some
examples) I don't see any advantage to proving class-specific lemmas
inside the locale context.

- Brian


[isabelle-dev] erroneous Legacy feature warnings about secondary search path

2011-01-13 Thread Brian Huffman
Hello all,

I recently discovered that warnings about the secondary search path
are generated more often than they ought to be. I am using repository
version 1fa4725c4656.

Create two files A.thy and B.thy with the following contents:

theory A
imports ~~/src/HOL/Library/Cardinality
begin
end

theory B
imports A
begin
end

Theory A loads an arbitrary file from HOL/Library, which is included
in the legacy default search path, although the full path is given
here to avoid the Legacy feature warning.

If I load theory A by itself in ProofGeneral, it imports
Cardinality.thy without warning. Similarly, if I load theory B by
itself in ProofGeneral, there is no warning. But if I have both A.thy
and B.thy open in ProofGeneral, step through A.thy first, and *then*
start stepping through B.thy, I get the following warning when I
import A from B:

### Legacy feature! File "Cardinality.thy" located via secondary
search path: "$ISABELLE_HOME/src/HOL/Library"

Can anyone explain this behavior?

- Brian


Re: [isabelle-dev] Isabelle release

2011-01-06 Thread Brian Huffman
On Thu, Jan 6, 2011 at 8:59 AM, Makarius makar...@sketis.net wrote:
 If everybody else manages to wrap up until the beginning of next week, we
 have a good chance to release before the end of the month.

This sounds good.

 I think a release date of January 2011 still justifies to call the release
 Isabelle2010.

I guess naming it Isabelle2010 rather than Isabelle2009-3
signifies that this is a major, rather than a minor release.

What exactly makes it major? Judging by the NEWS file, it looks like
2009-2 introduced about as many new features as the upcoming release
will. Is there any new feature in particular that is considered a
major change?

- Brian


Re: [isabelle-dev] Additional type variable(s) in specification

2010-12-02 Thread Brian Huffman
On Thu, Dec 2, 2010 at 8:42 AM, Makarius makar...@sketis.net wrote:
 Right now there is already a clear warning -- which Proof General happens to
 show in a hardly visible way, but that is just one of many problems of Proof
 General.  In Isabelle/jEdit the warning is hard to miss. I will check again
 about the other related messages to increase the pontential further.

Making the warning message more visible is only a partial solution.
The wording of the warning itself is also a problem:

Additional type variable(s) in specification

Users who encounter this warning because of a mistake in a definition
are unlikely to know what it means (unless they have read about it on
the mailing list). More importantly, the warning message doesn't give
any hints about how to fix the mistake.

- Brian


Re: [isabelle-dev] theorem induct

2010-11-30 Thread Brian Huffman
On Tue, Nov 30, 2010 at 8:35 AM, Florian Haftmann
florian.haftm...@informatik.tu-muenchen.de wrote:
 indeed, both theorems are the same, just with different accesses;
 recently, we introduced the concept of mandatory qualifiers to avoid the
 strange base accesses (e.g. »induct«, »simps«, »intros«), but this has
 never been systematically introduced into existing packages.

 Is there any reason why we shouldn't update the datatype package right
 now, to make the qualifier mandatory on foo.induct?

 Maybe this is a good opportunity.  Maybe this should be done to every
 package which has no specific maintainer.

Attached is a mercurial changeset for adding mandatory qualifiers to
theorems generated by (rep_)datatype. Here are the affected theorem
names (that I know of):

induct
inject
distinct
exhaust
cases
split
split_asm
weak_case_cong
nchotomy
case_cong

I've only tested this with HOL-Main; I'll let someone else test it
more thoroughly and decide whether or not to commit it.

- Brian
# HG changeset patch
# User huffman
# Date 1291143768 28800
# Node ID 987a4e7a7a206d2bc52c05e6d323ffdc3282cccd
# Parent  a3af470a55d29db2952d136352af50a9add50c05
theorem names generated by the (rep_)datatype command now have mandatory qualifiers

diff -r a3af470a55d2 -r 987a4e7a7a20 src/HOL/Product_Type.thy
--- a/src/HOL/Product_Type.thy	Tue Nov 30 18:40:23 2010 +0100
+++ b/src/HOL/Product_Type.thy	Tue Nov 30 11:02:48 2010 -0800
@@ -160,7 +160,7 @@
 
 declare prod.simps(2) [nitpick_simp del]
 
-declare weak_case_cong [cong del]
+declare prod.weak_case_cong [cong del]
 
 
 subsubsection {* Tuple syntax *}
@@ -440,7 +440,7 @@
 
 lemma split_weak_cong: "p = q \<Longrightarrow> split c p = split c q"
   -- {* Prevents simplification of @{term c}: much faster *}
-  by (fact weak_case_cong)
+  by (fact prod.weak_case_cong)
 
 lemma cond_split_eta: "(!!x y. f x y = g (x, y)) ==> (%(x, y). f x y) = g"
   by (simp add: split_eta)
@@ -578,7 +578,7 @@
   \medskip @{term split} used as a logical connective or set former.
 
   \medskip These rules are for use with @{text blast}; could instead
-  call @{text simp} using @{thm [source] split} as rewrite. *}
+  call @{text simp} using @{thm [source] prod.split} as rewrite. *}
 
 lemma splitI2: "!!p. [| !!a b. p = (a, b) ==> c a b |] ==> split c p"
   apply (simp only: split_tupled_all)
diff -r a3af470a55d2 -r 987a4e7a7a20 src/HOL/Tools/Datatype/datatype.ML
--- a/src/HOL/Tools/Datatype/datatype.ML	Tue Nov 30 18:40:23 2010 +0100
+++ b/src/HOL/Tools/Datatype/datatype.ML	Tue Nov 30 11:02:48 2010 -0800
@@ -628,9 +628,9 @@
 
 val ([dt_induct'], thy7) =
   thy6
-  |> Sign.add_path big_name
-  |> Global_Theory.add_thms [((Binding.name "induct", dt_induct), [case_names_induct])]
-  ||> Sign.parent_path
+  |> Global_Theory.add_thms
+    [((Binding.qualify true big_name (Binding.name "induct"), dt_induct),
+      [case_names_induct])]
   ||> Theory.checkpoint;
 
   in
diff -r a3af470a55d2 -r 987a4e7a7a20 src/HOL/Tools/Datatype/datatype_aux.ML
--- a/src/HOL/Tools/Datatype/datatype_aux.ML	Tue Nov 30 18:40:23 2010 +0100
+++ b/src/HOL/Tools/Datatype/datatype_aux.ML	Tue Nov 30 11:02:48 2010 -0800
@@ -98,18 +98,18 @@
 
 fun store_thmss_atts label tnames attss thmss =
   fold_map (fn ((tname, atts), thms) =>
-    Sign.add_path tname
-    #> Global_Theory.add_thmss [((Binding.name label, thms), atts)]
-    #-> (fn thm::_ => Sign.parent_path #> pair thm)) (tnames ~~ attss ~~ thmss)
+    Global_Theory.add_thmss
+      [((Binding.qualify true tname (Binding.name label), thms), atts)]
+    #-> (fn thm::_ => pair thm)) (tnames ~~ attss ~~ thmss)
   ##> Theory.checkpoint;
 
 fun store_thmss label tnames = store_thmss_atts label tnames (replicate (length tnames) []);
 
 fun store_thms_atts label tnames attss thmss =
   fold_map (fn ((tname, atts), thms) =>
-    Sign.add_path tname
-    #> Global_Theory.add_thms [((Binding.name label, thms), atts)]
-    #-> (fn thm::_ => Sign.parent_path #> pair thm)) (tnames ~~ attss ~~ thmss)
+    Global_Theory.add_thms
+      [((Binding.qualify true tname (Binding.name label), thms), atts)]
+    #-> (fn thm::_ => pair thm)) (tnames ~~ attss ~~ thmss)
   ##> Theory.checkpoint;
 
 fun store_thms label tnames = store_thms_atts label tnames (replicate (length tnames) []);
diff -r a3af470a55d2 -r 987a4e7a7a20 src/HOL/Tools/Datatype/datatype_data.ML
--- a/src/HOL/Tools/Datatype/datatype_data.ML	Tue Nov 30 18:40:23 2010 +0100
+++ b/src/HOL/Tools/Datatype/datatype_data.ML	Tue Nov 30 11:02:48 2010 -0800
@@ -362,14 +362,14 @@
   let
 val raw_distinct = (map o maps) (fn thm => [thm, thm RS not_sym]) half_distinct;
 val new_type_names = map Long_Name.base_name (the_default dt_names alt_names);
+val prfx = Binding.qualify true (space_implode "_" new_type_names);
 val (((inject, distinct), [induct]), thy2) =
   thy1
   |> store_thmss "inject" new_type_names raw_inject
   ||>> store_thmss "distinct" new_type_names raw_distinct
-  ||> Sign.add_path (space_implode "_"

[isabelle-dev] AFP/Shivers-CFA latex document problem

2010-11-19 Thread Brian Huffman
On Fri, Nov 19, 2010 at 8:58 AM, Tobias Nipkow nip...@in.tum.de wrote:
 I don't understand this: why does latex work for the release version but
 not the development version? But indeed, it fails for me too.

I created an extremely simplified version of the Shivers-CFA entry
(building on Isabelle/Pure instead of HOLCF) that still reproduces the
latex error, and did some bisection. The origin of the problem is this
revision:

http://isabelle.in.tum.de/repos/isabelle/rev/b646316f8b3c
basic setup for literal replacement text in PDF, to support copy/paste
of Isabelle symbols;

The root.tex of the Shivers-CFA entry contains the following two lines:

\usepackage[normalem]{ulem}
\newcommand{\isactrlps}[1]{{\uline #1}}

Here is the stripped-down version of the entry I used for testing:

theory Computability
imports Pure
begin

definition foo :: "'a => 'a" ("\<^ps>") where
  "\<^ps>f == f"

end

... and here is the diff of the generated Computability.tex files,
comparing revision b646316f8b3c (+) to its parent revision (-).

 \isanewline
 \isanewline
 \isacommand{definition}\isamarkupfalse%
-\ foo\ {\isacharcolon}{\isacharcolon}\
{\isachardoublequoteopen}{\isacharprime}a\ {\isasymRightarrow}\
{\isacharprime}a{\isachardoublequoteclose}\
{\isacharparenleft}{\isachardoublequoteopen}\isactrlps
{\isachardoublequoteclose}{\isacharparenright}\
\isakeyword{where}\isanewline
-\ \ {\isachardoublequoteopen}\isactrlps f\
{\isacharequal}{\isacharequal}\ f{\isachardoublequoteclose}\isanewline
+\ foo\ {\isaliteral{3A}{\isacharcolon}}{\isaliteral{3A}{\isacharcolon}}\
{\isaliteral{22}{\isachardoublequoteopen}}{\isaliteral{27}{\isacharprime}}a\
{\isaliteral{5C3C52696768746172726F773E}{\isasymRightarrow}}\
{\isaliteral{27}{\isacharprime}}a{\isaliteral{22}{\isachardoublequoteclose}}\
{\isaliteral{28}{\isacharparenleft}}{\isaliteral{22}{\isachardoublequoteopen}}\isaliteral{5C3C5E70733E}{}\isactrlps
{\isaliteral{22}{\isachardoublequoteclose}}{\isaliteral{29}{\isacharparenright}}\
\isakeyword{where}\isanewline
+\ \ 
{\isaliteral{22}{\isachardoublequoteopen}}\isaliteral{5C3C5E70733E}{}\isactrlps
f\ {\isaliteral{3D}{\isacharequal}}{\isaliteral{3D}{\isacharequal}}\
f{\isaliteral{22}{\isachardoublequoteclose}}\isanewline
 %
 \isadelimtheory
 \isanewline

Finally, here are the relevant LaTeX error messages:

*** (./session.tex (./Computability.tex
*** ! Undefined control sequence.
*** argument \...@spfactor
***
*** l.27 ...\isaliteral{22}{\isachardoublequoteclose}}
***
{\isaliteral{29}{\isacharp...
***
*** ! Undefined control sequence.
*** \...@word ...kip \unskip \spacefactor \...@spfactor
***   \let \...@word
\egroup \els...
*** l.27 ...\isaliteral{22}{\isachardoublequoteclose}}
***
{\isaliteral{29}{\isacharp...
***
*** ! Missing number, treated as zero.
*** to be read again
***\let
*** l.27 ...\isaliteral{22}{\isachardoublequoteclose}}
***
{\isaliteral{29}{\isacharp...
***
*** ! Bad space factor (0).
*** to be read again
***\let
*** l.27 ...\isaliteral{22}{\isachardoublequoteclose}}
***
{\isaliteral{29}{\isacharp...
***
*** ))

- Brian

 Brian Huffman schrieb:
 On Fri, Nov 19, 2010 at 7:28 AM, Brian Huffman bri...@cs.pdx.edu wrote:
 The latex document generation still doesn't work, though (at least on
 my machine), and I don't understand the error messages that it
 produces. Maybe it has something to do with the non-ascii unicode
 characters in some of the files?

 I determined that the latex errors were due to the \uline command,
 which is defined in the ulem latex package. If I replace {\uline
 foo} with \underline{foo} throughout the sources, then the document
 generation works. However, I'm not sure whether the typesetting in the
 output is the same for the two commands.

 I assume Joachim tested the latex output before submitting the entry,
 so I wonder if my latex installation has a different version of the
 ulem package? If the document generation is sensitive to package
 versions, it might be a good idea to put a copy of ulem.sty in the
 document directory of the AFP entry.

 Also, it appears that the unicode apostrophes in the source files
 don't cause any latex errors, although they are silently omitted from
 the output. It would probably be a good idea to change them all to
 ordinary ascii apostrophes.

 - Brian

___
Isabelle-dev mailing list
Isabelle-dev@mailbroy.informatik.tu-muenchen.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] AFP/Shivers-CFA latex document problem

2010-11-19 Thread Brian Huffman
On Fri, Nov 19, 2010 at 2:40 PM, Joachim Breitner
m...@joachim-breitner.de wrote:
 Hi everyone,
[...]
 I’m sorry for the trouble my submission causes. When writing the
 theories I had planned to actually use the proof document as my project
 report, therefore the trouble I went through to have the syntax as
 similar to Shivers’s dissertation as possible. In the end I was asked to
 not use the proof document as the report, so this syntax is not really
 required any more.

Hi Joachim,

Don't worry about any of this document-generation trouble; none of it
was your fault. Fancy report-quality LaTeX syntax is supposed to be a
supported feature of Isabelle document generation. The incompatibility
between the ulem package and the development version of Isabelle is
unfortunate, and hopefully workarounds can be found so that users can
continue to use ulem and other LaTeX packages in the future.

 If you want I can completely get rid of them in the developer repository
 of the AFP. Same with unicode apostrophes.

 I used \ulem because I recently worked on a project that required
 well-typeset text, where it has advantages over \underline, such that it
 stills allows hyphenation. In this case, these extra features are
 unnecessary and \underline can be used as well.

I went ahead and changed the \isactrlps command (defined in your
root.tex) to use \underline instead of \uline. (Other uses of \uline
apparently don't cause problems.)
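Presumably the root.tex change amounts to something like this (a sketch, not the exact committed diff):

```latex
% before (ulem package; chokes on \isaliteral in the dev version):
%   \newcommand{\isactrlps}[1]{{\uline #1}}
% after (plain LaTeX underlining):
\newcommand{\isactrlps}[1]{\underline{#1}}
```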

http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/2423a888e2e7

I also took the liberty of replacing all the unicode quotes with ascii
ones (otherwise the quotes don't appear in the pdf at all) and fixing
various typos and misspellings that I found along the way. Hopefully I
haven't introduced any errors; you might want to inspect the changeset
to make sure:

http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/531cf578b0d7

 And while I am at it: Although I knew that technically, HOLCF is but an
 conservative extension to HOL, I concluded from its listing on the
 homepage parallel to HOL that it should be “considered” an alternative,
 and not just a library. Maybe this was also the reasons for my advisors
 to initially advice against the use of it. If that is not the intention
 I support Brian’s suggestions to move it next to other HOL libraries.

 Greetings,
 Joachim

Thanks for deciding to stick with HOLCF, even despite the initial
advice of your advisors. Having more users makes all my work on
improving HOLCF seem worthwhile! And if the way HOLCF is presented in
the Isabelle documentation and website was the reason for your
advisors' reluctance, that is more motivation for me to move HOLCF
into a more logical place. I will make sure this is done before the
next release.

- Brian
___
Isabelle-dev mailing list
Isabelle-dev@mailbroy.informatik.tu-muenchen.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


[isabelle-dev] find_theorems raises UnequalLengths exception

2010-11-18 Thread Brian Huffman
Hello everyone,

Recently I noticed that in HOLCF, whenever I do a theorem search for
theorems containing the constant UU (i.e. bottom), the search fails
with an UnequalLengths exception. I tracked the problem down to this
specific theorem from Fun_Cpo.thy: Before this point, find_theorems
UU succeeds, and afterward it causes an error.

lemma app_strict: "UU x = UU"

I found that I can also reproduce the problem directly in HOL:

theory Scratch
imports Orderings
begin

find_theorems bot

lemma bot_apply: "bot x = bot"
by (simp only: bot_fun_eq)

find_theorems bot

*** exception UnequalLengths raised
*** At command find_theorems

After doing hg bisect, I traced the origin of the problem:

http://isabelle.in.tum.de/repos/isabelle/rev/b654fa27fbc4

Can anyone figure this out?

- Brian
___
Isabelle-dev mailing list
Isabelle-dev@mailbroy.informatik.tu-muenchen.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] find_theorems raises UnequalLengths exception

2010-11-18 Thread Brian Huffman
On Thu, Nov 18, 2010 at 8:25 AM, Lawrence Paulson l...@cam.ac.uk wrote:
 That is certainly my change, but I don't understand why preventing 
 self-referential type instantiations should affect the find_theorems 
 function. Can you get a full trace back from the exception?

Here's what I get from turning on the debugging flag in Proof General:

Exception trace for exception - UnequalLengths
Library.~~(2)
Pattern.match(3)cases(5)rigrig1(2)
Pattern.match(3)cases(5)
Pattern.match(3)mtch(4)
Pattern.match(3)
Pattern.matches_subterm(3)msub(2)
Pattern.matches_subterm(3)msub(2)
Pattern.matches_subterm(3)
Find_Theorems.filter_pattern(2)check(3)
Find_Theorems.filter_pattern(2)check(1)
o(2)(1)
Find_Theorems.app_filters(1)app(3)
List.mapPartial(2)
List.mapPartial(2)
List.mapPartial(2)
List.mapPartial(2)
List.mapPartial(2)
List.mapPartial(2)
List.mapPartial(2)
List.mapPartial(2)
Find_Theorems.sorted_filter(2)
Find_Theorems.find_theorems(5)find_all(2)
Find_Theorems.find_theorems(5)find_all(1)
Find_Theorems.print_theorems(5)
Toplevel.apply_tr(3)(1)
Runtime.debugging(2)(1)
End of trace

*** exception UnequalLengths raised
*** At command find_theorems



 Larry

 On 18 Nov 2010, at 16:03, Brian Huffman wrote:

 Hello everyone,

 Recently I noticed that in HOLCF, whenever I do a theorem search for
 theorems containing the constant UU (i.e. bottom), the search fails
 with an UnequalLengths exception. I tracked the problem down to this
 specific theorem from Fun_Cpo.thy: Before this point, find_theorems
 UU succeeds, and afterward it causes an error.

 lemma app_strict: UU x = UU

 I found that I can also reproduce the problem directly in HOL:

 theory Scratch
 imports Orderings
 begin

 find_theorems bot

 lemma bot_apply: bot x = bot
 by (simp only: bot_fun_eq)

 find_theorems bot

 *** exception UnequalLengths raised
 *** At command find_theorems

 After doing hg bisect, I traced the origin of the problem:

 http://isabelle.in.tum.de/repos/isabelle/rev/b654fa27fbc4

 Can anyone figure this out?

 - Brian
 ___
 Isabelle-dev mailing list
 Isabelle-dev@mailbroy.informatik.tu-muenchen.de
 https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


___
Isabelle-dev mailing list
Isabelle-dev@mailbroy.informatik.tu-muenchen.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] informative changelogs / typedef (open) unit

2010-11-18 Thread Brian Huffman
The effect of my change typedef (open) unit is to remove the
definition of the following constant

unit_def: unit == {True}

thus making the name unit available to users.
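For context, the difference can be sketched with the typedef syntax of the time (a sketch; the closing proof is only indicative):

```isabelle
(* A plain typedef also defines a set constant for the representing set: *)
typedef unit = "{True}" by auto
  (* yields  unit_def: unit == {True}  *)

(* With the (open) option no set constant is introduced,
   so the name "unit" stays free for users: *)
typedef (open) unit = "{True}" by auto
```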

- Brian

On Thu, Nov 18, 2010 at 1:58 PM, Makarius makar...@sketis.net wrote:
 Hi Brian,

 can you say a few words about b994d855dbd2 just for the historical record?

 Such changes deep down in HOL easily cause odd problems later on, so the one
 doing the bisection in some years might need more info via the mail archive.


 I would also like to take the opportunity to recall our most basic changelog
 conventions:

  each item on a separate line;
  items ordered roughly by importance;

 For some reason, many people have started to squeeze everything in a single
 line (frequent), or imitate the headline/body text format of other projects
 with a completely different structure (rare).  The reason might be as
 profane as the default web style of Mercurial, but at least on
 http://isabelle.in.tum.de/repos/isabelle the display follows the usual
 Isabelle format.

 I spend a good portion of my time inspecting changesets, not just the
 incoming ones, but also really old ones when sorting out problems. So the
 little extra care when composing changelogs will help a lot in the overall
 process.

 The recent crash of find_theorems due to b654fa27fbc4 is just one example
 that changes deep in the system need routine reviewing.  It's on my list for
 several weeks, but I did not find the time to look at it so far.


        Makarius
 ___
 Isabelle-dev mailing list
 Isabelle-dev@mailbroy.informatik.tu-muenchen.de
 https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev

___
Isabelle-dev mailing list
Isabelle-dev@mailbroy.informatik.tu-muenchen.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] typedef (open) unit

2010-11-18 Thread Brian Huffman
On Thu, Nov 18, 2010 at 2:59 PM, Makarius makar...@sketis.net wrote:
 On Thu, 18 Nov 2010, Brian Huffman wrote:

 The effect of my change typedef (open) unit is to remove the
 definition of the following constant

 unit_def: unit == {True}

 thus making the name unit available to users.

 So this is a name space thing only?  Maybe hide_const would do the job.

 Historically, we could not hide it because it was global, but Florian has
 purged that recently.

If you think it is useful to have Isabelle/HOL provide a constant
Product_Type.unit == {True}, then by all means put it back in.

- Brian
___
Isabelle-dev mailing list
Isabelle-dev@mailbroy.informatik.tu-muenchen.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] informative changelogs / typedef (open) unit

2010-11-18 Thread Brian Huffman
On Thu, Nov 18, 2010 at 1:58 PM, Makarius makar...@sketis.net wrote:
 For some reason, many people have started to sequeze everything in a single
 line (frequent), or imitate the headline/body text format of other projects
 with a completely different structure (rare).  The reason might be as
 profane as the default web style of Mercurial, but at least on
 http://isabelle.in.tum.de/repos/isabelle the display follows the usual
 Isabelle format.

It seems that viewing the changelog style preserves the line-breaks
in commit messages.

http://isabelle.in.tum.de/repos/isabelle/log/

But the shortlog style doesn't; it concatenates everything onto one line.

http://isabelle.in.tum.de/repos/isabelle/shortlog/

If most other Isabelle developers are like me, they probably look at
the shortlog style most of the time, and the style of commit
messages imitates what they see on the web interface.

If you want people to use linebreaks in their commit messages, it
would probably be a good idea to actually display them in the
shortlog view. Maybe you could add the addbreaks filter to the
shortlog theme in hgweb?

http://mercurial.selenic.com/wiki/Theming
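For what it's worth, the kind of tweak meant here might look roughly like this in the theme's map file (a sketch assumed from the Mercurial theming docs, not tested against the TUM installation):

```
# hgweb theme map file (sketch): in the shortlogentry template, swap the
# "firstline" filter for "addbreaks" so line breaks survive, e.g.
#   {desc|strip|firstline|escape}   -->   {desc|strip|escape|addbreaks}
```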

- Brian
___
Isabelle-dev mailing list
Isabelle-dev@mailbroy.informatik.tu-muenchen.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] Transparent/opaque module signature ascription

2010-11-12 Thread Brian Huffman
On Fri, Nov 12, 2010 at 5:48 AM, Makarius makar...@sketis.net wrote:
 On Thu, 11 Nov 2010, Brian Huffman wrote:
 Here is the reason I am reluctant to use transparent ascription:
 Programmers use modules and signatures as an abstraction mechanism. (I
 shouldn't need to explain to anyone on this list why abstraction in
 programming is a good thing.) But transparent ascription makes it easy to
 accidentally break module abstractions: If signature FOO contains an
 abstract type like type foo (with no definition in the signature), and
 structure Foo implements it with a type synonym like type foo = int, then
 the ascription Foo : FOO will make Foo.foo = int globally visible,
 violating the abstraction specified in the signature and breaking
 modularity.
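The scenario described can be made concrete in a few lines of SML (a sketch; the names are invented):

```sml
signature FOO =
sig
  type foo                  (* abstract in the signature *)
  val mk : int -> foo
end;

(* Transparent ascription: the equation Foo.foo = int leaks out. *)
structure Foo : FOO = struct type foo = int  fun mk i = i end;
val leaked : Foo.foo = 42;  (* accepted -- the abstraction is broken *)

(* Opaque ascription: the representation stays hidden. *)
structure Foo' :> FOO = struct type foo = int  fun mk i = i end;
(* val rejected : Foo'.foo = 42;   -- would be a type error *)
```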

 The way signatures and structures are used in Isabelle is more like table
 of contents vs. body text.  I.e. the signature tells about intended
 exports without necessarily abstracting the representation fully.  There are
 some modules that need to be fully abstract, and this is where abstype is
 used with plain-old : matching.

So basically, you're saying that you normally don't care about
abstraction, except in the few special cases where you use abstype.
Maybe I do need to explain why abstraction is a good thing...

 Moreover, in recent years we did
 narrow-down the signatures more systematically, to delimited the boundaries
 of modules more clearly, although some people have occasionally complained
 about that.

The way to avoid such complaining is to make all modules as abstract
as possible in the first place. Of course, retrofitting a
more-abstract interface onto an existing module will cause some pain
for users of that module, who may have written problematic code that
doesn't follow the abstract discipline. But better now than later,
because otherwise users will continue to write more undisciplined,
error-prone code.

 When SML90 was young, other ways of module abstraction were propagated by
 some authors of text books.  I vaguely remember the functorial style that
 was still present in our sources in the early 1990-ies, and greatly
 complicating things until Larry purged it in one big sweep.

The functorial style may have been cumbersome, but at least it
enforced module abstractions.

 When SML97 came out, we adopted few of its features and ignored many new
 problems introduced by the general NJ approach to ML.
[...]
 This demonstrates once more, that anything coming from the 97/NJ update of
 the SML language needs to be treated with extreme care.

I'm afraid your ad hominem attacks on the SML97/NJ group don't carry
much weight with me.

- Brian
___
Isabelle-dev mailing list
Isabelle-dev@mailbroy.informatik.tu-muenchen.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] Datatype alt_names

2010-11-03 Thread Brian Huffman
On Wed, Nov 3, 2010 at 5:56 AM, Florian Haftmann
florian.haftm...@informatik.tu-muenchen.de wrote:
 Traditionally the datatype command would accept optional alternative
 names used in names of type-related facts etc., e.g.

  datatype (foo) "/*/" = Bar

 With all HOL types being regulary named, the question arises whether we
 still have to keep that feature or shall just discontinue it?

        Florian

I brought up this issue on the mailing list last year:

https://mailmanbroy.informatik.tu-muenchen.de/pipermail/isabelle-dev/2009-May/000578.html

I have since removed a similar feature from the HOLCF domain package.

It seems to me that such a feature could only be justified if it was
needed for backward compatibility. But since this feature was broken
in more than one released version of Isabelle, and nobody has ever
complained about it, backward compatibility is not an issue for
anyone.

- Brian

P. S.

The 'typedef' command supports a similar option for alternative names;
I am sure that it was originally created for use with non-alphanumeric
type names. One could also ask whether the existence of this feature
for typedef is still justified, when all types have regular names now.
___
Isabelle-dev mailing list
Isabelle-dev@mailbroy.informatik.tu-muenchen.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] NEWS: expand_fun_eq - ext_iff, expand_set_eq - set_ext_iff

2010-09-07 Thread Brian Huffman
The log message, "Naming in line now with multisets", seems to suggest
that consistency was the main motivation for this change. However, the
change had rather the opposite effect, since the expand_*_eq pattern
is much more common in the libraries. (Full disclosure: some, but not
all, of these lemmas were named by me.)

HOL/Complex.thy: expand_complex_eq
HOL/Limits.thy: expand_net_eq
HOL/Quotient_Examples/FSet.thy: expand_fset_eq
HOL/Library/Product_plus.thy: expand_prod_eq
HOL/Library/Formal_Power_Series.thy: expand_fps_eq
HOL/Library/Polynomial.thy: expand_poly_eq
HOLCF/Cfun.thy: expand_cfun_eq

Meanwhile, multiset_ext_iff was the only example of the *_ext_iff form
I could find.

If naming consistency was the primary goal here, then I think it would
have been much better to just rename multiset_ext_iff to
expand_multiset_eq. (Alternatively, you could try to consistently
change everything to the ext_iff style, but complex_ext_iff or
prod_ext_iff would be a bit weird.)

Personally, I also like the expand_*_eq naming style because it is
descriptive: When you see "simp add: expand_fun_eq" in a proof, there
is no ambiguity about which way the rule is oriented.
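For reference, the rule under discussion reads roughly as follows (reconstructed; the exact 2010 formulation may differ in notation):

```isabelle
lemma expand_fun_eq: "(f = g) = (∀x. f x = g x)"
```

Read left to right, an equation between functions is "expanded" into a pointwise one, which is what the expand_ prefix advertises.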

On the other hand, the ext_iff names are shorter.

- Brian


On Tue, Sep 7, 2010 at 3:06 AM, Tobias Nipkow nip...@in.tum.de wrote:

 ___
 Isabelle-dev mailing list
 Isabelle-dev@mailbroy.informatik.tu-muenchen.de
 https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev

___
Isabelle-dev mailing list
Isabelle-dev@mailbroy.informatik.tu-muenchen.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] NEWS: expand_fun_eq - ext_iff, expand_set_eq - set_ext_iff

2010-09-07 Thread Brian Huffman
OK, this makes sense. It is nice to have the pair of lemmas foo_ext
and foo_ext_iff for each function-like type foo.

So do you propose that we rename all of the expand_*_eq lemmas that I
listed before? (It would also be good to make sure there is a
properly-named *_ext lemma corresponding to each one.)

We should also come up with a consistent naming scheme for the
analogous lemmas for product-like types (incl. prod and complex). Here
are the names currently in use:

fst a = fst b ==> snd a = snd b ==> a = b      prod_eqI
a = b <-> fst a = fst b /\ snd a = snd b       Pair_fst_snd_iff, expand_prod_eq

Re x = Re y ==> Im x = Im y ==> x = y          complex_equality
x = y <-> Re x = Re y /\ Im x = Im y           complex_Re_Im_cancel_iff, expand_complex_eq

The names {prod,complex}_ext and {prod,complex}_ext_iff would be a bit
strange. But how about {prod,complex}_eqI and {prod,complex}_eq_iff ?

I'm open to other suggestions, but I would like to be rid of the names
Pair_fst_snd_iff (a bit long, and the lemma doesn't even mention the
Pair constructor) and complex_Re_Im_cancel_iff (also long, and I
don't know what cancel is supposed to mean here).

- Brian


On Tue, Sep 7, 2010 at 9:45 AM, Tobias Nipkow nip...@in.tum.de wrote:
 I wanted to emphasize the mathematical concept, not the operational view.
 And indeed, it is shorter. For functional objects the ext_iff convention
 fits perfectly. For example, for polynomials we already have poly_ext in one
 direction. I have to admit, though, that for pairs it would be a bit
 unusual, although it fits a certain view of pairs.

 Tobias

 Am 07/09/2010 17:22, schrieb Brian Huffman:

 The log message, Naming in line now with multisets seems to suggest
 that consistency was the main motivation for this change. However, the
 change had rather the opposite effect, since the expand_*_eq pattern
 is much more common in the libraries. (Full disclosure: some, but not
 all, of these lemmas were named by me.)

 HOL/Complex.thy: expand_complex_eq
 HOL/Limits.thy: expand_net_eq
 HOL/Quotient_Examples/FSet.thy: expand_fset_eq
 HOL/Library/Product_plus.thy: expand_prod_eq
 HOL/Library/Formal_Power_Series.thy: expand_fps_eq
 HOL/Library/Polynomial.thy: expand_poly_eq
 HOLCF/Cfun.thy: expand_cfun_eq

 Meanwhile, multiset_ext_iff was the only example of the *_ext_iff form
 I could find.

 If naming consistency was the primary goal here, then I think it would
 have been much better to just rename multiset_ext_iff to
 expand_multiset_eq. (Alternatively, you could try to consistently
 change everything to the ext_iff style, but complex_ext_iff or
 prod_ext_iff would be a bit weird.)

 Personally, I also like the expand_*_eq naming style because it is
 descriptive: When you see simp add: expand_fun_eq in a proof, there
 is no ambiguity about which way the rule is oriented.

 On the other hand, the ext_iff names are shorter.

 - Brian


 On Tue, Sep 7, 2010 at 3:06 AM, Tobias Nipkownip...@in.tum.de  wrote:

 ___
 Isabelle-dev mailing list
 Isabelle-dev@mailbroy.informatik.tu-muenchen.de

 https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


___
Isabelle-dev mailing list
Isabelle-dev@mailbroy.informatik.tu-muenchen.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] [isabelle] safe for boolean equality

2010-06-15 Thread Brian Huffman
On Tue, Jun 15, 2010 at 2:03 AM, Lawrence Paulson l...@cam.ac.uk wrote:
 Altering the behaviour of the safe method on locale constants might be 
 feasible, because it would affect only a few proofs. Can anybody tell me 
 whether there is an easy way to identify whether a free variable is actually 
 a locale constant?

Note that testing whether a variable is a locale constant is not
sufficient. The same problem occurs with structured Isar proofs, even
without locales:

lemma
  assumes "xs = []"
  shows "xs = ys ==> ys = []"
apply safe
(* goal ys = [] is now unsolvable *)

The real test for a free variable must be, Does the proof context
contain any assumptions involving this variable? If the answer is no,
then it is safe to eliminate the equality. If the answer is yes, then
the equality must be kept in the assumptions. (It could be tried again
with the opposite orientation, though; in my example above, unfolding
xs = ys is unsafe, but unfolding ys = xs would be OK.)

I'm not sure how to implement this test on the proof context, but I
suspect that the code already exists -- doing a forall-introduction on
a theorem involves the same kind of check, doesn't it?

- Brian
___
Isabelle-dev mailing list
Isabelle-dev@mailbroy.informatik.tu-muenchen.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] ML tactic for eta-contracting a goal

2010-05-22 Thread Brian Huffman
I think I just found answer to my question:

fun eta_tac i = CONVERSION Thm.eta_conversion i

I had never used CONVERSION until now. I guess that what it does is
apply the conversion to the entire subgoal?

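In other words (a sketch against the 2010-era API):

```sml
(* CONVERSION conv i  applies the conversion to the whole of subgoal i;
   Thm.eta_conversion : conv  proves  t == t'  where t' is the
   eta-contraction of t.  Composed, they eta-contract subgoal i. *)
fun eta_tac (i : int) : tactic = CONVERSION Thm.eta_conversion i;
```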
- Brian

On Sat, May 22, 2010 at 12:07 PM, Brian Huffman bri...@cs.pdx.edu wrote:
 Does a simple tactic exist for performing eta-contraction on a goal,
 at the ML level?

 I need to use explicit eta-contraction in some internal proof scripts
 for the HOLCF fixrec package, to avoid the weird interactions with
 eta-contraction and simplifier congruence rules that I have complained
 about recently.

 I found that apply (subst refl) will eta-contract the current goal,
 and so currently I am using the following function which does the ML
 equivalent:

 fun eta_tac (ctxt : Proof.context) : int -> tactic =
  EqSubst.eqsubst_tac ctxt [0] @{thms refl};

 But this code is far from self-explanatory, and it seems like there
 must be a simpler, more direct way to do this. In particular, it
 doesn't seem like eta_tac should need to take a Proof.context as an
 argument.

 - Brian

___
Isabelle-dev mailing list
Isabelle-dev@mailbroy.informatik.tu-muenchen.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


[isabelle-dev] construction of the real numbers

2010-05-10 Thread Brian Huffman
Hello all,

I recently finished a formalization of real numbers that uses Cauchy
sequences. I thought it might be good to replace the current
formalization (which uses Dedekind cuts) with this new one in time for
the upcoming Isabelle release.

Users should not need to know or care which construction of reals we
use. The primary advantage of the new formalization is its size: It is
shorter than the current development of the reals by about a thousand
lines of code. It reduces the size of the Isabelle/HOL heap image
(without proof objects) by almost a megabyte.
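For the record, the core of a Cauchy-sequence construction looks roughly like this (a sketch, not the actual submitted theory text):

```isabelle
(* Cauchy sequences of rationals, and the equivalence relation whose
   quotient yields the reals: *)
definition cauchy :: "(nat ⇒ rat) ⇒ bool" where
  "cauchy X ⟷ (∀r>0. ∃k. ∀m≥k. ∀n≥k. ¦X m - X n¦ < r)"

definition realrel :: "(nat ⇒ rat) ⇒ (nat ⇒ rat) ⇒ bool" where
  "realrel X Y ⟷ cauchy X ∧ cauchy Y ∧ (∀r>0. ∃k. ∀n≥k. ¦X n - Y n¦ < r)"
```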

I would propose to move the current development of the reals into an
example theory, say HOL/ex/Dedekind_Real.thy.

Are there any objections or comments?

- Brian
___
Isabelle-dev mailing list
Isabelle-dev@mailbroy.informatik.tu-muenchen.de
https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev


Re: [isabelle-dev] construction of the real numbers

2010-05-10 Thread Brian Huffman
On Mon, May 10, 2010 at 12:41 AM, Walther Neuper neu...@ist.tugraz.at wrote:
 A comment you ask for: Your assumption, that users should not need to know
 or care which construction of reals we use, does not apply in education.

 There are several attempts to design educational software for (pure and
 applied) mathematics, which is _transparent_ in the sense, that the
 learner can access any knowledge underlying a particular calculation.

Hi Walther,

I completely agree with your comments. I guess I really meant to
distinguish the users of Isabelle theories from readers or
students of those theories. It is for the benefit of this second
group that the theory of Dedekind real numbers should be kept
available. I have personally learned a lot of new mathematics by
studying formalizations in Isabelle, and I hope that later students
will benefit by having two alternative constructions of the real
numbers to study.

- Brian


Re: [isabelle-dev] construction of the real numbers

2010-05-10 Thread Brian Huffman
On Mon, May 10, 2010 at 3:59 AM, Makarius makar...@sketis.net wrote:
 This means there are about 1-2 weeks left for bigger changes.  Do you manage
 to exchange the reals by then?

I can do it today. I have already exchanged the reals and tested
everything in a local clone of the Isabelle repo. I haven't created
Dedekind_Real.thy yet, but that shouldn't take long.

 How is the timing for your theories?  Is there an impact on AFP, for
 example.  The latter can be taken as an indication how it will affect other
 users out there.

The new development no longer includes separate lemmas like
real_mult_assoc, real_le_linear, etc -- these are proved directly
within the instance proofs. Having to replace these theorem names with
the generic ones like mult_assoc and linear is the only effect on
user theories that I've seen. I could add a list of legacy theorem
names (like what Int.thy has) for backward compatibility.

There are no theories in the distribution of the AFP that use type
preal. If there are any end-users that use this type, well, they can
define it themselves in terms of type real if they really want to.

- Brian


[isabelle-dev] Bug in linordered_ring_less_cancel_factor simproc

2010-05-09 Thread Brian Huffman
Hello,

With tracing turned on, one can see that the following lemma is solved
with the help of the linordered_ring_less_cancel_factor simproc (which
is defined in HOL/Tools/numeral_simprocs.ML):

lemma RR: (0::'a::linordered_idom) < z ==> (x*z < y*z) = (x < y)
by simp

Similarly, the following lemmas are solved by the same simproc:

lemma RL: (0::'a::linordered_idom) < z ==> (x*z < z*y) = (x < y)
by simp

lemma LR: (0::'a::linordered_idom) < z ==> (z*x < y*z) = (x < y)
by simp

But this last combination doesn't work:

lemma LL: (0::'a::linordered_idom) < z ==> (z*x < z*y) = (x < y)
by simp (* empty result sequence *)

The situation with linordered_ring_le_cancel_factor is similar (just
replace < with <= in the conclusion of each lemma). The same
problem probably occurs with other cancellation simprocs as well, but
I haven't tried them all.

After some investigation, I have concluded that the problem has to do
with Arith_Data.prove_conv. Here is the relevant code from
HOL/Tools/arith_data.ML:

fun prove_conv_nohyps tacs ctxt (t, u) =
  if t aconv u then NONE
  else let val eq = HOLogic.mk_Trueprop (HOLogic.mk_eq (t, u))
  in SOME (Goal.prove ctxt [] [] eq (K (EVERY tacs))) end;

fun prove_conv tacs ctxt (_: thm list) = prove_conv_nohyps tacs ctxt;

I believe the t aconv u test is to blame. Arith_Data.prove_conv is
called to move the common term to the left, so it matches the rules
mult_less_cancel_left_pos or mult_less_cancel_left_neg. But if the
common term is already on the left (as is the case with lemma LL) the
aconv test ensures that the simproc will fail.

Here is the changeset that introduced the aconv test:
http://isabelle.in.tum.de/repos/isabelle/rev/7cdcc9dd95cb

I'm not sure why the aconv test is there, so I don't want to remove
it myself. Could someone else please look into this?

- Brian

P.S.

I have a couple of other complaints about the situation with simprocs.
Testing simprocs is much more difficult than it should be.

First of all, the ML antiquotation @{simproc
linordered_ring_less_cancel_factor} doesn't work. Why? The
simproc is listed by name when I ask ProofGeneral to print out the
simpset. With no way to refer to the existing simproc from ML, I had
to cut-and-paste a copy of all the simproc ML code in order to test it
individually.

Also, we really need to have some proper regression tests for
simprocs. Currently the only tests for these simprocs are in a
commented-out (!) section of Tools/numeral_simprocs.ML.


[isabelle-dev] Use of global simpset by definition packages

2010-05-07 Thread Brian Huffman
Hello all,

I am looking for some advice on a design decision for the HOLCF fixrec package.

Fixrec uses the simplifier when doing its internal proofs of the
defining equations for a function. Currently it maintains its own
special simpset for this purpose, which users can extend by declaring
theorems with the [fixrec_simp] attribute. The alternative would be to
get rid of [fixrec_simp] and just use the main simpset from the theory
context (i.e. theorems declared with [simp]).

As I recall, the old recdef package maintained its own special simpset
of recdef_simp rules for doing termination proofs. The current design
of fixrec is modeled after this style. I guess the advantage is that
the result of a fixrec command is more predictable, and its success
or failure is not so sensitive to changes in [simp] declarations in
other theories.

On the other hand, if I understand correctly it seems that the
function package makes a different design choice, and uses the
standard simpset for some things (like termination proofs).

So now I'm not so sure which way is better. Maybe maintaining a
separate simpset (and requiring users to know about the fixrec_simp
attribute) is more trouble than it's worth. Using the standard simpset
might introduce some extra hidden dependencies between [simp]
declarations and later fixrec definitions, but changing the simpset
does not change the theorems produced by fixrec, so maybe this is not
much of a drawback.

What do you all think?

- Brian


Re: [isabelle-dev] Use of global simpset by definition packages

2010-05-07 Thread Brian Huffman
Hi Alex,

Thanks for the info about the function package. I'll give a bit more
explanation of the proofs that fixrec does, using an example:

domain 'a LList = LNil | LCons 'a (lazy 'a LList)

fixrec mapL :: ('a -> 'b) -> 'a LList -> 'b LList where
  mapL$f$LNil = LNil
| x ~= UU ==> mapL$f$(LCons$x$xs) = LCons$(f$x)$(mapL$f$xs)

(Definedness side-conditions like x ~= UU are needed with strict
constructors.)

First fixrec defines the constant mapL as a least fixed-point.

(1) mapL == fix$(LAM mapL f z. ... big case expression ... mapL ...)

Next fixrec proves that the functional from the definition is
continuous. This part uses user-supplied continuity rules (attribute
cont2cont).

(2) cont (%mapL. LAM f z. ... big case expression ... mapL ...)

The continuity lemma is used to prove the unfolding rule:

(3) mapL = (LAM f z. ... big case expression ... mapL ...)

Finally, the unfolding rule is used to prove the original equations.
This part uses the user-supplied fixrec_simp rules. The unfolding rule
is substituted on the LHS, and then the resulting goal is solved using
the simplifier.

(4) mapL$f$LNil = LNil
(5) x ~= UU ==> mapL$f$(LCons$x$xs) = LCons$(f$x)$(mapL$f$xs)

The fixrec_simp rules include rules related to the constants that make
up the big case expression. The rules for case combinators of strict
constructors have definedness side-conditions, so definedness rules
for constructors are also declared [fixrec_simp]. Finally, proving the
equations also involves continuous beta-reduction, so the [cont2cont]
rules are included too.

On Fri, May 7, 2010 at 7:24 AM, Alexander Krauss kra...@in.tum.de wrote:
 - The termination prover uses the global simpset (clasimpset actually),
 since I explicitly want it to pick up lemmas from the user. In practice, I
 have never seen a termination proof break because of a bad simp rule
 declared by the user. However, there is also the possibility to declare
 rules [termination_simp] that should be used *only* in termination proofs.
 So the termination prover uses a superset.

In practice, fixrec_simp always seems to be a *subset* of the global
simpset. Fixrec needs simp rules for continuity, case combinators, and
definedness; such rules are always useful as global simp rules too.

 - I think one should be particularly careful when the actual result of some
 package (like the format of some rules) and not just success or failure is
 influenced by declarations. [fundef_cong] is such a case. In this case, it
 should really be a separate declaration, because otherwise the dependencies
 get really strange.

This is not the case with fixrec. User-supplied rules do not affect
the formulation of the generated theorems, although they may determine
success or failure.

In conclusion, I think maybe I should get rid of [fixrec_simp] and
just use the global simpset to solve the continuity goals and reduce
case expressions. Based on your experience with the function package,
I expect that bad simp rules would be very unlikely to break the
internal proofs. I think that definitions are much more likely to fail
due to users declaring an important rule [simp] but not declaring
[fixrec_simp].

- Brian


Re: [isabelle-dev] top and bot

2010-05-03 Thread Brian Huffman
On Mon, May 3, 2010 at 4:55 AM, Florian Haftmann
florian.haftm...@informatik.tu-muenchen.de wrote:
 class top = preorder +
   fixes top :: 'a
   assumes top_greatest [simp]: x \<le> top

 class bot = preorder +
   fixes bot :: 'a
   assumes bot_least [simp]: bot \<le> x

 The fact that both class predicate and class operation have the same
 base name is indeed confusing.  Meanwhile it should be possible to
 insert a non-mandatory qualifier into the name binding of the class
 predicate, e.g. pred, or even class.  Any objections?

Let me see if I understand your proposal correctly.

Currently, the declaration for class top defines two constants:

class predicate: Orderings.top
overloaded constant: Orderings.top_class.top

In your proposed scheme, these would be changed to

class predicate: Orderings.pred.top
overloaded constant: Orderings.top_class.top

So in the new system, term top would still print out as
top_class.top. But it would be possible to write pred.top to
unambiguously refer to the class predicate (which is not currently
possible), and in turn this would enable us to use the hide_const
like this:

hide_const (open) Orderings.pred.top

which would make top just print out as top. Is this right?

It sounds like a sensible idea to me, although we should make sure we
understand how it would affect the situation with class finite. It
would be best if any change we make avoids ugly workarounds in all
situations (top/bot as well as finite).

- Brian


Re: [isabelle-dev] Function package produces duplicate rewrite rule warnings

2010-05-03 Thread Brian Huffman
The datatype package produces some extra warning messages now too:

### Ignoring duplicate rewrite rule:
### True == induct_true

### Ignoring duplicate rewrite rule:
### False == induct_false

I haven't checked, but I would be willing to bet that this behavior
was introduced by the same changeset as the function package warning
messages. I wouldn't be surprised if warnings pop up in other packages
as well.

- Brian


On Mon, May 3, 2010 at 12:48 PM, Alexander Krauss kra...@in.tum.de wrote:
 Hi Brian,

 Thanks for bisecting this, I'll have a look.

 Alex

 Brian Huffman wrote:

 Recent versions of the function package produce some unhelpful warning
 messages about duplicate rewrite rules:

 ### Ignoring duplicate rewrite rule:
 ### Sum_Type.Projl (Inl ?y) == ?y

 ### Ignoring duplicate rewrite rule:
 ### Sum_Type.Projr (Inr ?y) == ?y


[isabelle-dev] Sane Mercurial history

2010-03-03 Thread Brian Huffman
On Wed, Mar 3, 2010 at 8:54 AM, Makarius makarius at sketis.net wrote:
  * For longer projects the timespan of staying in splendid isolation
is limited to 1-3 days (3 is already quite long, and personally I
usually try to stay within the 0.5-1.5 days interval).
...
Of course, the merge nodes produced for preparing a push also need to
be fully tested (isabelle makeall all).

Note that these requirements are somewhat in conflict with each other.

Running all the tests with isabelle makeall all already takes me
nearly 0.5 days, during which time it is very likely that the
repository will have changed again. If I required myself to run
isabelle makeall all after merging and immediately before each push,
I might be able to push my changes once or twice a week, during
periods of low activity on the repository.

The alternative is to take shortcuts on testing. Here's what I usually do now:
1. pull and merge
2. make edits and local commits
3. compile and test everything on a separate machine
... hours pass ...
4. pull and merge again if necessary
5. check that there were no non-trivial file merges
6. push

I think this is a reasonable compromise between thoroughness of
testing vs. timeliness of pushing.

- Brian


[isabelle-dev] Document preparation failure

2010-02-19 Thread Brian Huffman
On Fri, Feb 19, 2010 at 8:18 AM, Rafal Kolanski rafalk at cse.unsw.edu.au 
wrote:
 Looks like it should be @{const apply_rsp} or @{text apply_rsp} or
 something. The underscore is throwing it.

That's exactly it. There's actually a second problem of the same kind
later on, in a subsection heading. I have a fix on my local
repository, and I'll push it as soon as I can rebuild the HOL-Main
image (assuming it works!)

- Brian


[isabelle-dev] added 605 changesets with 1325 changes to 175 files

2010-01-15 Thread Brian Huffman
From my most recent hg pull:

pulling from ssh://huffman at 
atbroy100.informatik.tu-muenchen.de//home/isabelle-repository/repos/isabelle
searching for changes
adding changesets
adding manifests
adding file changes
added 605 changesets with 1325 changes to 175 files

Wow! What just happened?

- Brian


[isabelle-dev] Isabelle/HOL axiom ext is redundant

2009-11-13 Thread Brian Huffman
On Thu, Nov 12, 2009 at 2:06 PM, Andreas Schropp schropp at in.tum.de wrote:
 In a sense, eq_reflection is a meta-theorem, but Pure is not a meta-logic,
 so it cannot be proven within the system.  Thus we need to add it as an
 axiom.

 But we can prove all instances of the eq_reflection propagation theorem,
 right?

 I would consider this as an implementation detail, and not worry about
 it any further.

 Tools that operate on the level of proof terms can then eliminate
 eq_reflection, as was pointed out first by Stefan Berghofer some years ago,
 IIRC.  (I also learned from him that it is better to speak about Pure as
 logical framework, and not use term meta logic anymore.)


 Yeah, Stefan does this in HOL/Tools/rewrite_hol_proof.ML (which was tested
 quite thoroughly since 5e20f9c20086, so also in Isa09).

This is very interesting... I'd like to learn more about the status of
eq_reflection as a meta-theorem. Is there a paper about this?

- Brian


[isabelle-dev] Isabelle/HOL axiom ext is redundant

2009-11-12 Thread Brian Huffman
On Thu, Nov 12, 2009 at 1:12 AM, Lawrence Paulson lp15 at cam.ac.uk wrote:
 If you do these things, you put an end to all Isabelle logics other than
 Isabelle/HOL. Remember, an object logic does not need to possess an equality
 symbol or even an implication symbol.

OK, good point. All the Isabelle logics I have really looked at (HOL,
ZF, FOL, LCF) have implication, equality, and an eq_reflection axiom.
For these, it might make sense to formalize them by adding operations
and axioms about type prop, instead of adding a new type bool or
o. But I can see that this wouldn't work so nicely with logics
without equalities or implication.

 Having just translated some lengthy, incomprehensible HOL proofs into
 Isabelle, I appreciate more than ever the distinction between the meta- and
 object- levels. HOL proofs are cluttered with extra steps to manipulate
 implications because HOL has no other way to express the dependence of a
 fact upon conditions.
 Larry

Right, having two kinds of implication (--> and ==>) is convenient
because (==>) is used to encode subgoals with premises in apply-style
Isabelle proofs. But this justification for having two implications is
completely separate from the one you mentioned above, isn't it? With
declarative Isar proofs, I don't think the distinction between --> and
==> is nearly as important, because users don't see so many subgoals
encoded with (==>). HOL's problems with manipulating implications
would probably be helped by having declarative Isar-style proofs too.

- Brian


[isabelle-dev] Isabelle/HOL axiom ext is redundant

2009-11-11 Thread Brian Huffman
Hello all,

This morning I was looking at the following comment from HOL.thy:

  ext:(!!x::'a. (f x ::'b) = g x) ==> (%x. f x) = (%x. g x)
-- {*Extensionality is built into the meta-logic, and this rule expresses
 a related property.  It is an eta-expanded version of the traditional
 rule, and similar to the ABS rule of HOL*}

Then I was wondering exactly how this axiom was related to the
extensionality built into the meta-logic. After looking into it, I
discovered that I could actually prove the ext axiom as a theorem. I
would suggest that we add lemma meta_ext to Pure.thy, and then put a
proof of rule ext in HOL.thy.

- Brian


lemma meta_ext:
  fixes f g :: 'a::{} = 'b::{}
  shows (!!x. f x == g x) ==> (%x. f x) == (%x. g x)
by (tactic {*
  let
val a = @{cterm !!x. f x == g x};
val t = @{cterm t::'a};
val thm1 = Thm.forall_elim t (Thm.assume a);
val thm2 = Thm.abstract_rule x t thm1;
val thm3 = Thm.implies_intr a thm2;
  in
rtac thm3 1
  end
*})

lemma ext: (!!x::'a. (f x ::'b) = g x) ==> (%x. f x) = (%x. g x)
apply (rule meta_eq_to_obj_eq)
apply (rule meta_ext)
apply (rule eq_reflection)
apply (erule meta_spec)
done


[isabelle-dev] Isabelle/HOL axiom iff is redundant

2009-11-11 Thread Brian Huffman
Hello again,

While I'm on the subject of redundant axioms, consider this piece of HOL.thy:

axioms
  iff:  (P-->Q) --> (Q-->P) --> (P=Q)
  True_or_False:  (P=True) | (P=False)

Has anyone else noticed that axiom True_or_False implies axiom
iff? (You can just do a proof by cases on P and Q.) I actually did
this proof within a modified HOL.thy (some lemmas need to be proved in
a different order, but the bootstrapping still works).
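
The proof by cases alluded to above can be spelled out as a short
sketch (informal notation, with --> written as implication):

```latex
% Case split on P and Q, justified by True_or_False; in each case the
% conclusion of iff follows:
\begin{array}{lll}
P = \mathit{True},  & Q = \mathit{True}:  & P = Q \text{ holds outright;} \\
P = \mathit{True},  & Q = \mathit{False}: & P \to Q \text{ refutes the case;} \\
P = \mathit{False}, & Q = \mathit{True}:  & Q \to P \text{ refutes the case;} \\
P = \mathit{False}, & Q = \mathit{False}: & P = Q \text{ holds outright.}
\end{array}
```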

I'm guessing that the origins of this redundancy are historical---I
suppose that True_or_False was probably introduced later in the
development so that intuitionistic lemmas could be kept separate from
the classical ones.

- Brian


[isabelle-dev] Bug Tracking

2009-07-09 Thread Brian Huffman
I think it's time to bring up the issue of bug tracking again. I just
replied to Peter Lammich on the Isabelle users list about a bug in the
locale package that has been known to exist for at least a year and a
half; it is unclear whether anyone has taken any responsibility for
fixing the bug, or indeed whether there is any plan to fix it at all.
A bug tracking system would have been very useful in this case.

Anyway, I just wanted to express my agreement and support for what
Tjark said earlier about the need for a bug tracking system.

- Brian

On Thu, Mar 12, 2009 at 1:05 PM, Tjark Weber webertj at in.tum.de wrote:
 The various recent bug reports reminded me that the current way of
 tracking bugs for Isabelle seems archaic to me.  The Isabelle mailing
 lists clearly have their value, but how about a proper bug tracking
 system (such as Bugzilla) for tracking bugs and feature requests?

 The benefits could be numerous: a bug tracker would provide an overview
 of development requests and their status; bugs could be assigned to
 developers; classified by priority/severity; related to repository
 versions; etc.

 Choosing the best bug tracker may not be trivial, but using none seems
 far from optimal to me.  Anyway, just my 2 cents.

 Regards,
 Tjark





[isabelle-dev] Pending sort hypotheses

2009-07-01 Thread Brian Huffman
On Wed, Jul 1, 2009 at 4:46 AM, Amine Chaieb ac638 at cam.ac.uk wrote:
 I wonder why this restriction has been introduced in the first place. Is
 it because sorts can be empty (i.e. there are no possible type
 instances)?

That's exactly right. Theorems need to have sort hypotheses to prevent
proofs like this:

class impossible =
  assumes impossible: EX x. x ~= x

lemma False: False
proof -
  obtain x :: 'a::impossible where x ~= x
using impossible ..
  then show False by simp
qed

The above script fails at qed with a pending sort hypothesis error.

 For sorts with instances it should even be logically correct
 to eliminate such sorts constraints, when they are introduced
 exclusively during the proof.

When possible, they *are* removed during the proof as far as I can tell.

Here's an example of a proof that fails due to a pending sort hypothesis:

class nontrivial = assumes nontrivial: EX x y. x ~= y

lemma True
proof -
  obtain x y :: 'a::nontrivial where x ~= y
using nontrivial by auto
  then show True by simp
qed

Isabelle can't remove the [nontrivial] sort hypothesis because it
doesn't know whether or not class nontrivial has any instances.

But if we declare the following subclass relationship...

instance zero_neq_one < nontrivial
proof
  from zero_neq_one [where 'a='a]
  show EX x y::'a. x ~= y by blast
qed

...and run the above proof script again, then it will work. Since
Isabelle already knows there are instances of class zero_neq_one, now
it knows that class nontrivial is inhabited, and it can discharge the
sort hypothesis.

- Brian


[isabelle-dev] repos integrity

2009-06-18 Thread Brian Huffman
On Thu, Jun 18, 2009 at 10:55 AM, Makarius makarius at sketis.net wrote:
  * The main Isabelle repository http://isabelle.in.tum.de/repos/isabelle
    always needs to be in a state where

      isabelle makeall all

    finishes successfully!

This last failure was due to some changes that I pushed recently.

Lately I have been using isabelle makeall all consistently before
each push. This most recent problem was due to my misunderstanding of
how isabelle makeall all works. I managed to get into a state where
I believed that all the tests had been run, when really they hadn't.
Let me explain in more detail.

I had noticed that when using isabelle makeall all, if one of the
build targets fails, the others will still be run. For example, even
after HOL-ex fails, HOLCF will still be compiled and tested. I was
very happy to see this feature, because for me, running the full test
suite is an overnight process (HOL-Nominal-Examples alone takes over 4
hours) and it would be very frustrating to have an early error prevent
several hours' worth of other later tests from being run.

In this case, after running makeall all overnight, I found that
HOL-ex was the only failure: CodegeneratorTest was failing because of
some missing [code del] declarations. I added the [code del]
declarations, and then compiled HOL-ex successfully. At this point, I
was under the impression that the entire repository had undergone the
appropriate tests. I pushed my changes.

Of course, I now know that makeall all doesn't really work how I
thought it did. If one test fails, it actually skips some subsequent
tests, and continues with others. If it actually worked the way I
expected it to, makeall all would be a much more useful tool for
testing.

- Brian


[isabelle-dev] repos integrity

2009-06-18 Thread Brian Huffman
On Thu, Jun 18, 2009 at 2:56 PM, Makarius makarius at sketis.net wrote:
 isabelle makeall TARGETS merely runs isabelle make TARGET in every
 logic directory -- this is a very ancient arrangement.  This means it
 stops after the first failure in each directory, but tries the other
 directories afterwards.  Since HOL and HOLCF are different logics in that
 sense, you get the above effect of continued HOLCF testing after some HOL
 session failed.

A quick Google search found the following bit of documentation about gnu make:

http://www.gnu.org/software/automake/manual/make/Errors.html

When an error happens that make has not been told to ignore, it
implies that the current target cannot be correctly remade, and
neither can any other that depends on it either directly or
indirectly. No further commands will be executed for these targets,
since their preconditions have not been achieved.

Normally make gives up immediately in this circumstance, returning a
nonzero status. However, if the `-k' or `--keep-going' flag is
specified, make continues to consider the other prerequisites of the
pending targets, remaking them if necessary, before it gives up and
returns nonzero status. For example, after an error in compiling one
object file, `make -k' will continue compiling other object files even
though it already knows that linking them will be impossible.

The usual behavior assumes that your purpose is to get the specified
targets up to date; once make learns that this is impossible, it might
as well report the failure immediately. The `-k' option says that the
real purpose is to test as many of the changes made in the program as
possible, perhaps to find several independent problems so that you can
correct them all before the next attempt to compile.

It looks like isabelle makeall all -k will do exactly what I want.
Maybe the makeall script should be changed to use this option by
default?

- Brian


[isabelle-dev] More Mercurial hints

2009-03-04 Thread Brian Huffman
Quoting Makarius makarius at sketis.net:

 I've found some nice illustrations on the web ...

 Mercurial:  http://tinyurl.com/cxrbjt
 CVS:http://tinyurl.com/dk3gsz

So, you're saying that with Mercurial, you can do a lot more damage?

;-)

- Brian





[isabelle-dev] Suc 0 necessary?

2009-02-23 Thread Brian Huffman
Quoting Tobias Nipkow nipkow at in.tum.de:

 This is exactly the point: recursive functions defined by pattern
 matching expect Suc. They tend to dominate the scene in CS-oriented
 applications. Hence Suc 0 is made the default. However, for math
 applications this tends to be inconvenient, esp in connection with
 abstract algebra involving 1.

But this is exactly my point: CS-oriented users, who define a lot of  
recursive functions by pattern matching on Suc, can use Suc 0.  
Math-oriented users can use 1. Users can choose which style they want  
to use.

To support this separation of Suc 0 and 1, Nat.thy would probably need  
to have two versions of some lemmas, one in each style; but this  
should not be difficult.

The real problem that I can see is that a lot of CS-oriented users  
have gotten used to writing 1 as shorthand for Suc 0. Currently  
they can get away with it, because it is expanded by the simplifier.

 The original posting by Chris Capel merely aimed at readability: Suc
 0 is less pleasant than 1. An alternative we discussed but never
 agreed on is to abbreviate Suc 0 to #1. This would merely be new
 surface syntax and not help with the algebraic 1, but it may already
 satisfy some people.

I think abbreviations like this could really help. Being able to write  
#1 for Suc 0 means that users won't have to write 1 for Suc 0  
solely for the sake of brevity.

- Brian




[isabelle-dev] Suc 0 necessary?

2009-02-23 Thread Brian Huffman
Quoting Tobias Nipkow nipkow at in.tum.de:

 IIRC, 1 used to abbreviate Suc 0. Making 1 = Suc 0 a simp rule
 mimicked that.

Aha! The real reason comes out. Now things are starting to make sense...

 It allows you to write 1 on the rhs of an equation
 because (if used as a simp rule) the 1 will be replaced by Suc 0. Of
 course this does not work on the lhs...

Yes, this is a real problem. There are even simp rules in the  
distribution that will never fire because they have 1::nat on the lhs.

 Even if we did not make 1 = Suc 0 a simp rule, we would still need to
 decide how to phrase the library theorems. This would still bias the
 user.

This remains to be seen; I think the library theorems could probably  
remain agnostic about 1 vs. Suc 0. Theorems could come in both styles,  
and each theorem would ensure that the representation is preserved. I  
might have to try this out, and see how well it works. Actually, I  
suppose it wouldn't hurt to make sure the library theorems follow this  
policy, even with 1 = Suc 0 [simp] in place.

- Brian




[isabelle-dev] Bug report: code generation for eq with indirect-recursive datatypes

2009-02-23 Thread Brian Huffman
Quoting Florian Haftmann florian.haftmann at informatik.tu-muenchen.de:

 Hi Brian,

 The following code works:

 datatype 'a bintree = Branch 'a bintree 'a bintree | Tip 'a
 definition test1 = (Tip True = Branch (Tip False) (Tip True))
 export_code test1 in Haskell file -

 but this other example doesn't:

 datatype 'a tree = Node 'a 'a tree list
 definition test2 = (Node True [] = Node True [Node False []])
 export_code test2 in Haskell file -

 Instead it fails with:

 *** exception UNDEF raised
 *** At command export_code.

 If you pull from the repository it shall be gone ;-).  Thanks for
 reporting this.

Hi Florian,

Thanks for the quick response! The Haskell code generation works now  
for both examples.

However, if I try to generate OCaml or SML code, the second one still  
fails, but with a different error this time:

*** Illegal mutual dependencies: tree :: eq, eq_class.eq [tree]
*** At command export_code.

What do you think?
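
As an aside, a hand-written Haskell analogue of the second example
(assumed to be close to, but not literally, the generator's output)
shows why the Haskell target copes with the indirect recursion: the
Eq instance for Tree uses the Eq instance for lists, which in turn
resolves back to Eq for Tree, and Haskell's instance resolution
handles this cycle without complaint.

```haskell
-- Hand-written analogue of the indirectly recursive datatype 'a tree.
data Tree a = Node a [Tree a]

-- Eq for Tree depends on Eq for [Tree a], which depends on Eq for
-- Tree again; Haskell resolves this mutual dependency fine.
instance Eq a => Eq (Tree a) where
  Node x ts == Node y us = x == y && ts == us

-- Analogue of the test2 definition from the example.
test2 :: Bool
test2 = Node True [] == Node True [Node False []]  -- False: subtree lists differ

main :: IO ()
main = print test2
```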

- Brian




[isabelle-dev] A better policy for the typerep class

2009-02-11 Thread Brian Huffman
Quoting Florian Haftmann florian.haftmann at informatik.tu-muenchen.de:

 a) Keeping theory imports minimal vs. reducing the number of merges

 Careful thinking what the logical preliminaries of a particular theory
 are is a necessary, valuable and fruitful task -- I would by no means
 encourage to develop the HOL theories as a structureless coagulation.

 That, in order to reduce the number of merges, there are some
 recommended wasp-waists in the HOL theory dependencies, does only
 superficially conflict with this: add the appropriate import (Plain,
 Main) to the theory if necessary, and the job is done.  When a
 rearranging of theories is to be done, the proper minimal imports are
 reconstructible quite easily -- in the worst case using an interactive
 session.

I'm afraid I don't understand why reducing the number of merges by  
adding wasp-waists is beneficial. From what I understand, adding  
additional theory dependencies could only reduce opportunities for  
parallel execution. Is there another technical reason why the amount  
of merging needs to be limited?

- Brian




[isabelle-dev] A better policy for the typerep class

2009-02-10 Thread Brian Huffman
Hello,

I have been thinking about all the conflicts caused by the typerep  
class, and what should be done about it.

Basically, this is an instance of a more general problem. The Isabelle  
libraries define some types, and they also define some type classes.  
The problem, stated simply, is where should the class instances go?

The Haskell community has been dealing with this issue for a long  
time. The Haskell standard libraries are extensive, with a much wider  
variety of types, classes, and instances than Isabelle's libraries  
have. Yet they have done a much better job managing all of that  
complexity than we are doing at the moment with the typerep class.

We would do well to learn some things from the Haskell community about  
organizing class instances in libraries, specifically:

1) Exactly what problems arise when standard libraries don't have  
class instances set up properly, and

2) What policies actually work to avoid such problems.

I'll start with point 1. Here's the scenario: An Isabelle user imports  
two theories, both of which are part of the standard distribution.  
Theory A defines a type A, and theory B defines a type class B, but  
there is no instance for A :: B anywhere in the distribution. The most  
immediate problem for the user is the lack of the class instance; to  
remedy this, he will have to provide it himself. For a class like  
typerep, this is actually really easy; the missing class instance is  
at most a minor inconvenience.

However, as the Haskell community long recognized, the real problem is  
still lurking ahead. If two different users (or developers of  
libraries) each provide their own class instances for A :: B, then  
their theories are now incompatible. They cannot be merged, and it is  
impossible for any third party to use their theories together at the  
same time. This is no minor inconvenience; it is actually impossible  
to overcome without modifying and restructuring the two theories  
involved.

The important point here is that *merging theories* is our main  
concern. Users can deal with missing instances, but if users do not  
have the ability to merge theories, then our efforts have failed.
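A minimal Isabelle-style sketch of the conflict described here, using hypothetical theory names (A, B, LibX, LibY, Client are placeholders, not actual library theories):

```isabelle
(* Hypothetical sketch: two libraries independently provide
   the same class instance A :: B. *)

theory LibX imports A B begin
instance A :: B by intro_classes
end

theory LibY imports A B begin
instance A :: B by intro_classes
end

(* A third party cannot use both libraries together: this merge
   is rejected, because the instance A :: B is declared twice. *)
theory Client imports LibX LibY
begin
end
```

Neither user did anything wrong in isolation; the conflict only shows up at merge time, which is why instance placement policy matters.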

Keeping this in mind, it is clear that Florian's new feature that  
automatically generates typerep class instances is seriously  
misguided. It solves the minor inconvenience of missing instances, but  
at the same time, it totally destroys all future potential for merging  
theories. I'm sure that Florian had the best intentions, but his  
automatic-instance mechanism is a complete failure with respect to the  
deeper problem caused by missing instances - namely, the inability to  
merge theories.

OK, now for point 2. What should our policy be? Clearly, if the  
standard library defines a type A and a class B, and a suitable  
instance for A :: B exists, then the instance must also be included  
somewhere in the standard library. In the case of the typerep class,  
this means every type defined in the Isabelle/HOL image must also have  
a typerep instance in the Isabelle/HOL image.

The next question is where (within the distribution) should the  
instances go? In the Haskell libraries, the policy is that every  
instance should reside in one of two places: 1) in the same module as  
the type definition, or 2) in the same module as the class definition.  
This is a reasonable policy, and it would probably work for Isabelle.  
However, it would also be reasonable in Isabelle to allow instances to  
reside in a theory separate from either the type or class definitions.  
(This situation is discouraged in Haskell for performance reasons with  
GHC.) Following the Haskell policy tends to create a linear theory  
dependency graph (with lots of wasp-waists), so it might be better  
to split things up more to create more opportunities for parallelism.  
Policy decisions like this should be discussed further.

- Brian





[isabelle-dev] A better policy for the typerep class

2009-02-10 Thread Brian Huffman
I thought it would be good to clarify my position in a shorter email:

The automatically-generated typerep instances work fine on typedef or  
datatype commands, where the instance resides in the same theory as  
the type definition.

It is also OK if instances are generated *within Typerep.thy* for all  
types defined in earlier theories.

However, if the merging of two theories causes a typerep instance to  
be generated, this is *very bad*.

As far as I can tell, Florian would not necessarily disagree with  
this. His advice to Amine, recommending that he contort his theory  
dependencies, has the effect of avoiding theory merges that would  
cause typerep instances to be generated.

My advice is to simply disable theory-merge instance generation. This  
would be much better than the current workaround, which relies on  
carefully preventing this feature from ever being exercised.

If, during a theory merge, the typerep package noticed a missing  
instance, printing an error message would actually be more helpful  
than the current behavior.

- Brian




[isabelle-dev] AC simplification or theory merge?

2009-01-30 Thread Brian Huffman
Hi Amine,

In the interest of making your proof scripts more robust, I think it  
is best to avoid relying on the term ordering whenever possible.  
Specifically, if you have a tactic like (simp add: mult_ac) that does  
not completely solve the subgoal, and the following tactic relies on  
the resulting terms being in a specific order, then this is a problem.

In this situation you should break up your proof using the  
transitivity reasoner, so that AC rewriting is only used in situations  
where it completely solves the subgoal it is given.
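As an illustration of this advice, here is a hypothetical calculational proof (goal invented; assuming the usual mult_ac rule collection) in which each AC rewriting step completely solves its own equation, so no subsequent step depends on the term ordering:

```isabelle
(* Hypothetical example: isolate AC rewriting in calculational steps,
   each of which (simp add: mult_ac) solves outright, instead of one
   simp call that leaves a goal whose shape depends on term ordering. *)
lemma fixes a b c :: nat
  shows "a * (b * c) = c * (b * a)"
proof -
  have "a * (b * c) = (c * b) * a" by (simp add: mult_ac)
  also have "... = c * (b * a)" by (simp add: mult_ac)
  finally show ?thesis .
qed
```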

Hope this helps,

- Brian

Quoting Amine Chaieb ac638 at cam.ac.uk:

 I have now managed to get all the proofs done. Apparently the term
 ordering changes if I don't merge...

 Amine.

 Amine Chaieb wrote:
 Dear all,
 I am trying to integrate some development about generalized   
 factorials and generalized binomial coefficients. The theory alone   
 is working fine.
 Then I decided to put all the lemmas in Binomial.thy since in this   
 case a new file is not really necessary.
 It roughly looks like this:

 Binomial imports Plain ~~/src/HOL/SetInterval

 and

 Pochhammer imports Fact Binomial Presburger

 Fact imports Nat

 *In this configuration all my proofs go through!!*

 So I replaced the import section by the union of these two, added   
 my proofs at the end of the old file: My proofs broke down.


 It roughly looks like:

 Binomial imports Plain ~~/src/HOL/SetInterval Fact Presburger
 Content of old Binomial
 Content of Pochhammer
 end

 *In this configuration many of my proofs are broken!!*

 I suspected the order in which theories are loaded, so I tried all
 combinations (4!), but this does not help. Investigating more
 closely, it seems that rewriting with commutativity does not work
 properly (I am sure of this). I have e.g. proved a statement
 x*y = z and then want the other version y*x = z; the proof text
 by (simp only: mult_commute my_theorem) does not work, and I had to
 use subst. This is a very simple instance, but there are proofs
 which involve many, many factors, so I cannot afford to do these by
 hand.

 Why does this happen? Is this the merge again? Some other   
 mechanism? I also replaced ring_simps/field_simps by algebra_simps   
 but this does not help. I think something is wrong with the   
 term_ordering when I don't merge???

 Best wishes,
 Amine.
 ___
 Isabelle-dev mailing list
 Isabelle-dev at mailbroy.informatik.tu-muenchen.de
 https://mailmanbroy.informatik.tu-muenchen.de/mailman/listinfo/isabelle-dev




