Dear Makarius Wenzel, dear Lawrence C. Paulson,
dear Members of the Research Community,

Catching up with the email discussion of the last weeks, I feel the need to 
comment on some statements by Makarius Wenzel and by Lawrence C. Paulson, e.g., 
Wenzel's claim:

"The assumption that any of the LCF-style provers is 100% correct is
wrong -- that was never the case in last decades. If your work depends
on that assumption, you need to rethink what you are doing."
        https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-December/msg00075.html

In other contributions, similar statements are made. For example, in the 
German-language presentation at
        https://youtu.be/R8pyFwil9KY
from minute 30:18 on, Wenzel argues that John Harrison's HOL Light has a 
provably safe kernel but is implemented in the unsafe programming 
language/environment OCaml, claiming more or less that all HOL implementations 
are subject to logical failures in the same way.

Paulson adds the "point of view of many mathematicians, [that] reliance on 
computers introduces too much uncertainty" also because of hardware failure due 
to physical influence ("cosmic rays and the like").
        https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-December/msg00080.html
This is not convincing to me, not only because the human brain is subject to 
physical influence, too, but also because such effects can be minimized by 
repeating the proof verification, which drastically reduces the likelihood of 
such errors (the failure probability decaying exponentially with the number of 
independent systems, I believe).
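The exponential decay just mentioned is elementary: if a single verification run is corrupted by such a physical event with probability p, and the runs are independent, then the probability that all k runs are corrupted is p^k. A minimal illustration in Python (the per-run probability is an invented example value, not a measured hardware error rate):

```python
# Illustration of the exponential decay of the error probability when
# a proof verification is repeated on independent systems.
# The value p = 1e-6 below is an invented example, not measured data.

def prob_all_runs_corrupted(p: float, k: int) -> float:
    """Probability that all k independent runs suffer an undetected error."""
    return p ** k

for k in (1, 2, 3):
    print(k, prob_all_runs_corrupted(1e-6, k))
```

Already three independent runs push the probability of a coincident, undetected failure far below any per-run rate, which is the point of the argument above.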


In general, the aim of these arguments seems highly problematic to me, as they 
seem to confuse violations of the LCF approach itself with minor technical 
questions.

Of course, depending on the choice of the programming language/environment, 
there may be ways to bypass restrictions guarding the LCF logical kernel. But 
these are technical implementation details. Not every logician wants to create 
and implement a new language for proof verification (I did so with the R0 
implementation, however), and therefore many have to rely upon ML, Standard ML, 
OCaml, or other software meta-languages, including their disadvantages, such as 
offering the means to freely manipulate the kernel with some hack.
But when carrying out a proof, the author clearly knows whether he uses regular 
proof commands or some hack of the software meta-language (e.g., ML or OCaml); 
moreover, such hacks can easily be identified in the proof script.
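The discipline described above can be simulated in a few lines. The following is a deliberately simplified sketch in Python (not the code of any actual prover, which would be written in ML or OCaml): theorems are values of an abstract type whose constructor is reachable only through the trusted inference rules, so a proof script using only regular proof commands cannot forge a theorem, while a meta-language hack would have to reach around the abstraction explicitly and is therefore easy to spot.

```python
# Minimal simulation of the LCF kernel idea: the ONLY way to obtain a
# Theorem value is through the trusted inference rules below.
# (Illustrative sketch only; the rule names and the token mechanism are
# inventions for this example, not any system's actual API.)

class Theorem:
    """Abstract theorem type; clients must not construct this directly."""
    def __init__(self, prop, _token=None):
        if _token is not _KERNEL_TOKEN:
            raise PermissionError("Theorems can only be made by the kernel")
        self.prop = prop

# Private capability held by the kernel functions in this module.
_KERNEL_TOKEN = object()

def axiom_refl(term):
    """REFL: |- term = term"""
    return Theorem(("eq", term, term), _token=_KERNEL_TOKEN)

def rule_sym(thm):
    """SYM: from |- a = b derive |- b = a"""
    op, a, b = thm.prop
    assert op == "eq"
    return Theorem(("eq", b, a), _token=_KERNEL_TOKEN)

thm = axiom_refl("x")
print(thm.prop)  # ('eq', 'x', 'x')
```

A "hack" in the sense above would correspond to calling the constructor with the private token directly from a proof script, which is immediately visible on inspection; Python (like OCaml) cannot fully prevent this, and exactly this kind of technical, easily audited weakness is what I distinguish from a flaw in the kernel itself.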

These implementation details are a completely different issue than a flaw in 
the logical kernel, as recently discovered in Isabelle/HOL.

For the same reason, the reference to Hugo Herbelin's email at
        https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-December/msg00077.html
is inappropriate, since he only refers to either such weaknesses that are 
technical implementation details as described above, or weaknesses where 
improvement attempts are already in progress (and which obviously have little 
to do with the formal language, i.e., the logic, itself).


The inconsistency in Isabelle/HOL found by Kuncar and Popescu in
        http://doi.org/10.1007/978-3-319-22102-1_16
        http://andreipopescu.uk/pdf/ITP2015.pdf
clearly is a flaw in the logical kernel, and not only a matter of some 
technical hack (exploiting some programming language/environment).

Tobias Nipkow expressed it very clearly: "This is not a minor issue with fonts 
but concerns and endangers the very core of what we are doing."
        https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-September/msg00083.html

If you believe that other HOL systems are affected in the same way, please 
provide references to papers on inconsistencies in those systems.
To my knowledge, rather the opposite is the case, e.g.,
- HOL Light provides some self-verification:
        http://www.cl.cam.ac.uk/~jrh13/papers/holhol.pdf
- and HOL Zero offers a reward for finding soundness-related flaws:
        http://www.proof-technologies.com/holzero/unsoundnesses.html

But relativizing a flaw in the logical kernel itself by placing it in the same 
category as a hack permitted by the technical implementation more or less 
amounts to giving up the LCF approach as a whole.


Similarly, I disagree with some statements of Lawrence C. Paulson:
        https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00075.html
        https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00089.html
I can understand that Paulson is not very happy about such an incident, but 
denying that the flaw is an inconsistency is not helpful, since it very 
obviously has the two properties of a classical paradox: self-reference 
(circularity) and negativity (negation).
Moreover, the source code should of course be subject to scrutiny; otherwise we 
cannot speak of a system conforming to the LCF approach.

Basically, I agree with the replies of Popescu and Kuncar:
        https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00096.html
        https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2016-August/msg00097.html


In the same manner, I somewhat disagree with the following statement by Tobias 
Nipkow from 2015, which I recently stumbled across:
"The length of this dicussion puzzles me. Typedef can introduce 
inconsistencies, nobody wants this behaviour and nobody needs this behaviour. 
There is a perfectly clear way to rule out this whole class of unwanted 
behaviours: do not allow certain cycles. This solution, including code, has 
been on the table for a year now. By now not just mere users but also 
certification agencies are worried. This is a pressing issue and it is not 
rocket science."
        https://lists.cam.ac.uk/pipermail/cl-isabelle-users/2015-September/msg00115.html
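The cycle check Nipkow alludes to can be made concrete in a few lines. The following Python sketch is my simplification of the idea (not the actual Isabelle code): record, for each defined item, which constants and types its right-hand side depends on, and reject any new definition that would close a dependency cycle. In this abstraction, the Kuncar/Popescu counterexample is precisely the cycle between the type T (defined from the constant c at type bool) and c at type bool (defined from T).

```python
# Sketch of a definitional cycle check (my simplification of the idea,
# not the actual Isabelle implementation): each defined item maps to the
# items its right-hand side mentions; a cycle means the definitions are
# circular and must be rejected.

def has_cycle(deps):
    """deps: dict mapping a defined item to the items its RHS mentions."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in deps}

    def visit(node):
        color[node] = GRAY
        for succ in deps.get(node, ()):
            c = color.get(succ)       # items never defined are harmless
            if c == GRAY:
                return True           # back edge: cycle found
            if c == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(visit(n) for n in deps if color[n] == WHITE)

# The Kuncar/Popescu counterexample in this abstraction: the type T is
# defined from the constant c (at bool), and c at bool from T.
print(has_cycle({"c_bool": ["T"], "T": ["c_bool"]}))  # True
print(has_cycle({"T": ["c_bool"], "c_bool": []}))     # False
```

This is bookkeeping performed outside the logic, which is exactly why I regard it below as a preliminary solution rather than a solution by means of the formal language itself.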

My impression is that, in the Isabelle group, the attitude is merely that 
self-reference (circularity) is a problem, but no further analysis is carried 
out: neither a comparison with other systems, by trying to express the same 
phenomenon within them, nor a search for the exact reason for the paradox, 
which goes beyond mere self-reference.
Self-reference in formal logic shows that some dependency is not properly 
resolved.
The proper way to resolve these dependencies, in order to avoid paradoxes, is 
by means of the formal language (in the tradition of Russell and Church).

For this reason, I consider the approach by Kuncar and Popescu as a rather 
preliminary solution.
As far as I have looked at it, it seems to keep track of the dependencies in 
the background.
It may work, but the logician's desire for naturally expressing formal logic 
and mathematics would be to use the means of the formal language itself to keep 
track of the dependencies (in order to avoid paradoxes). The same critique can 
be applied to the type correctness conditions (TCCs) of PVS, where correctness 
is checked only afterwards, instead of directly preventing the expression of 
incorrect terms (non-well-formed formulae / non-wffs) by means of the language.
In R0, the same problem was solved using syntactical means only: type 
abstraction, i.e., the binding of type variables with lambda.


In my opinion, the inconsistency in Isabelle/HOL created by axiomatic type 
classes (ad hoc overloading of constants) has three reasons:

1. The LCF approach is not implemented properly.

Clearly, a strict distinction between the kernel and the rest is missing in the 
documentation, not to speak of the distinction between the inner kernel and the 
extensions. In the original HOL system, the logic is a separate part (Part III: 
The HOL Logic [Gordon and Melham, 1993, pp. 191–232]); in current HOL4, it is a 
separate file
(http://freefr.dl.sourceforge.net/project/hol/hol/kananaskis-10/kananaskis-10-logic.pdf).
Within the logic, the extension is easily identifiable (e.g., online as the 
section "2.5 Extensions of theories"). I have discussed this earlier in 
Oct. 2016:
"If one takes the LCF approach of a small trusted logical kernel for serious, 
the logic has to be described in a separate and clearly identifiable part, as 
the kernel should be a verified and therefore trusted, not a blindly trusted 
kernel.
Further, the extensions are clearly marked as 'extensions' in the table of 
contents, and the use of axioms ('Asserting an axiom', section 2.5.4 Extension 
by type definition) is clearly named"
        https://sourceforge.net/p/hol/mailman/message/35437586/

As in the documentation, the distinction between the kernel and the rest seems 
to be missing in the source code; otherwise the introduction of axiomatic type 
classes (ad hoc overloading of constants), discussed in the following section, 
could not have happened.

2. The implementation of axiomatic type classes (ad hoc overloading of 
constants) in Isabelle/HOL introduces something between a constant and a 
variable, and hence clearly is not only an extension of the logic, but a 
modification of the inner kernel itself. In line 1 of example 2 in
        http://andreipopescu.uk/pdf/ITP2015.pdf (p. 2)
a constant c is declared, but only specified later, in line 3, making it 
practically a variable without imposing on it the restrictions on variables.
Hence the inner kernel is modified in such a way that the restrictions on 
variables can be bypassed.
In Q0 (which doesn't have type variables), these restrictions on variables are 
part of Rule R' [Andrews, 2002, p. 214].
In R0, type variables are introduced as variables (of type tau), which means 
that the restrictions on variables of Rule R' automatically cover the case of 
type variables, preventing a paradox such as the one in Isabelle/HOL by means 
of the language, in the form of a uniform treatment of both regular variables 
and type variables (since types are no longer a separate syntactic category, 
but terms of type tau; hence, type variables are simply variables of type tau).
In HOL, these restrictions on type variables are represented by condition (iii) 
in both
- 2.5.1 Extension by constant definition
- 2.5.2 Extension by constant specification
in
        http://freefr.dl.sourceforge.net/project/hol/hol/kananaskis-10/kananaskis-10-logic.pdf
I have compared the restrictions on type variables in HOL and Q0/R0 earlier at
        https://sourceforge.net/p/hol/mailman/message/35448609/
Note that in the logic part, Andy Pitts clearly recognizes the problem of 
inconsistency (paradoxes) as a matter of an (unresolved / unrecorded / 
untracked) dependency: "The problem is that the meaning of the right hand side 
of the definition varies with [the type variable] α [alpha]"
        http://freefr.dl.sourceforge.net/project/hol/hol/kananaskis-10/kananaskis-10-logic.pdf, p. 34
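For readers without the paper at hand, the mechanism behind the inconsistency can be sketched as follows (my paraphrase, from memory, of example 2 of the Kuncar/Popescu paper cited above; the notation is simplified). A constant c is declared, a type T is carved out of bool using c, and only afterwards c at type bool is specified in terms of T:

```latex
\text{declare } c : \alpha, \qquad
\text{typedef } T \cong \{\,\mathrm{True},\, c_{\mathrm{bool}}\,\}, \qquad
\text{define } c_{\mathrm{bool}} \equiv \neg\,(\forall x\,y : T.\; x = y)

c_{\mathrm{bool}} = \mathrm{True}
  \;\Longrightarrow\; T \cong \{\mathrm{True}\}
  \;\Longrightarrow\; \forall x\,y : T.\; x = y
  \;\Longrightarrow\; c_{\mathrm{bool}} = \mathrm{False}

c_{\mathrm{bool}} = \mathrm{False}
  \;\Longrightarrow\; T \cong \{\mathrm{True},\, \mathrm{False}\}
  \;\Longrightarrow\; \neg\,(\forall x\,y : T.\; x = y)
  \;\Longrightarrow\; c_{\mathrm{bool}} = \mathrm{True}
```

Both cases contradict the assumption, so False is derivable. Note that the circularity runs through the type definition, which is exactly the dependency that remains unrecorded.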

3. The third reason for the inconsistency, on a more general level, is that 
with axiomatic type classes (ad hoc overloading of constants) the attempt is 
undertaken to express something in current Isabelle/HOL for which the means of 
the language are insufficient. In their papers, referenced in section "3 
Criticism" in
        http://doi.org/10.4444/100.111
Freek Wiedijk and John Harrison repeatedly point out that the HOL type system 
is too weak, leaving "standard results [...] awkward to express".
In particular, the mechanism of type abstraction is missing, by which the 
dependency on type variables is kept track of (by the means of the formal 
language); it is implemented in R0 and discussed in part 5 of
        https://sourceforge.net/p/hol/mailman/message/35458077/
Unless the expressiveness of the formal language is enhanced, one can only use 
auxiliary means to prevent paradoxes (e.g., manually perform some bookkeeping 
of the dependencies in the background).
The proper solution would require a discussion of a further development of the 
formal language of Isabelle/HOL (and the HOL systems in general).

Andy Pitts has started such a discussion in the logic part: "[E]xplicitly 
recording dependency of constants on type variables makes for a rather 
cumbersome syntax which in practice one would like to avoid where possible."
        http://freefr.dl.sourceforge.net/project/hol/hol/kananaskis-10/kananaskis-10-logic.pdf (p. 38)
Despite the overall excellent quality of the logic part, I disagree on this 
minor point.
Logical rigor clearly prevails over practical considerations.
This means that the syntactic features of the formal language definitely should 
provide the mechanism to record dependency information, following the approach 
of Russell and Church.
Such syntactic features are implemented in R0, as discussed above and in the 
remarks on section "2.5.3 Remarks about constants in HOL" at
        https://sourceforge.net/p/hol/mailman/message/35448609/


These three considerations are missing in the discussion and papers on 
Isabelle/HOL so far.
The exception is the current draft by Kuncar and Popescu, "Comprehending 
Isabelle/HOL's Consistency", available online at
        http://andreipopescu.uk/pdf/compr_IsabelleHOL_cons.pdf
in which logistic systems with a higher expressiveness are taken into account, 
i.e.,
- Extended HOL by Tom Melham (1993) [in my diagram classified as 3.1. Type 
Theory with Quantification over Types (type variables with explicit 
quantification)] and
- HOL-Omega by Peter V. Homeier (2009 ff.) [3.2. Type Theory with Abstraction 
over Types (type abstraction, i.e., type variables bound by lambda)],
both going beyond Isabelle/HOL in terms of expressiveness, according to my 
diagram at
        http://doi.org/10.4444/100.111
where Isabelle/HOL is classified with level 2 [2. Polymorphic Type Theory (PTT) 
(type variables without quantification)].

However, R0 provides syntactic features which make it possible to replace the 
HOL extensions with more natural means of expression:
        https://sourceforge.net/p/hol/mailman/message/35448609/

Kind regards,

Ken Kubota

____________________

Ken Kubota
http://doi.org/10.4444/100



> Am 11.12.2016 um 23:17 schrieb Makarius <makar...@sketis.net>:
> 
> On 11/12/16 22:57, Julius Michaelis wrote:
>> So what are the tales you should be telling?
>> 
>> On  3.12 20:53, Makarius wrote:
>>> In general, the thm type in Isabelle is definitely not what is being
>>> told in the common story about the "LCF approach", there is also not
>>> "the kernel" in Isabelle. We need to stop telling these tales.
> 
> Reading the sources, reading papers, asking people who should know how
> these systems are done, listening carefully. I do that myself when
> talking to the colleagues behind the other systems (HOL4, HOL-Light,
> Coq, ACL2, ProofPower).
> 
> The assumption that any of the LCF-style provers is 100% correct is
> wrong -- that was never the case in last decades. If your work depends
> on that assumption, you need to rethink what you are doing.
> 
> We need more clarity and more honesty.
> 
> 
>       Makarius
> 
> 


_______________________________________________
hol-info mailing list
hol-info@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/hol-info
