Robin Green wrote:
Thanks very much for your explanation - that was very helpful.
Here is what I have written so far about OTT for my MRes literature
review; please let me know if you see anything wrong. As I'm new to
this, there probably is.
A couple of small points seem relevant.
Observational Type Theory [McBrideOTT] is a theory under development,
which the authors aim to implement in Epigram 2, and which is intended
to make reasoning with propositional extensional equality more convenient. Like
NuPRL, it has a general theory of equality in types, involving not just
function types but sums, products, etc. Also, as in NuPRL, equality is
recursive in the sense that equality on compound types, such as
function types and \Sigma -types, is defined in terms of the equality
on (and pairwise equality _of_) their constituent types.
It might be worth mentioning the ways in which it's unlike NuPRL. In the
spirit of intensional type theories, all the evidence is present in
terms. A key objective (we have a prototype, but we haven't done all the
math) is that typechecking terms (as opposed to checking derivation
trees built arbitrarily from the rules) should be decidable.
The basic idea of OTT is that equality should be _observational_: we
should not be able to make observations (apart from time/space usage
observations) which tell two equal terms apart. That is to say, all
eliminators must respect the equality on the types they eliminate -
that is sufficient, because we cannot observe values other than via
eliminators (function application is an eliminator).
I'd rather spin it the other way. It's basically Leibniz's intuition:
two things should be equal if every property (expressible within the
theory) which holds for one also holds for the other. Somehow, the
"observations" come first. When we're defining what "observational
equality" means for a given type-former, we're obliged to consider all
the eliminators for that type-former and collect all the guarantees
about the elements that we need if those eliminators are to preserve
equality. That's a general approach which we can (and will) take with us
to systems other than Pi, Sigma, W, 0, 1, 2.
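For \Pi -types, for instance, collecting the guarantees needed for
application to preserve equality yields a pointwise, heterogeneous
definition, roughly along these lines (notation simplified; the paper's
formulation is more careful):

  (f : \Pi x{:}S.\,T) = (f' : \Pi x'{:}S'.\,T')
    iff
  \forall x{:}S.\ \forall x'{:}S'.\ (x : S) = (x' : S')
    \rightarrow (f\,x : T[x]) = (f'\,x' : T'[x'])

That is, equal functions are exactly those that take equal arguments to
equal results, which is precisely what application, the eliminator,
needs in order to respect equality.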
For quotient types, this means that if we create a quotient type T/R and name it Q,
we need to somehow canonicalise values of T that are equivalent under R
to equal values - for example, values in Integer/(\lambda
x\,y\rightarrow x\bmod2=y\bmod2) could be canonicalised to 0 or 1. This could
be done either upon construction - so that the carrier of Q is the set
of canonical values - or upon elimination, so that the carrier of Q is
T. In a referentially-transparent language, the latter is not
necessarily inefficient, because we can memoise the results of the
canonicalisation.
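As a sketch of the canonicalise-on-construction strategy (the names
Mod2 and mkMod2 are mine, purely illustrative): hiding the raw
constructor behind a smart constructor restricts the carrier to the
canonical values.

```haskell
-- Sketch: canonicalisation upon construction for Integer/(mod 2).
-- If the Mod2 constructor is kept private, mkMod2 is the only way to
-- build a value, so the carrier of the quotient is exactly {0, 1}.
newtype Mod2 = Mod2 Integer deriving (Eq, Show)

mkMod2 :: Integer -> Mod2
mkMod2 n = Mod2 (n `mod` 2)   -- canonicalise: representative is 0 or 1

main :: IO ()
main = do
  print (mkMod2 7 == mkMod2 13)  -- equivalent under R, equal after canonicalisation
  print (mkMod2 4)
```

With canonicalisation done up front, the derived Eq on representatives
coincides with the intended equivalence, so no further quotienting
machinery is needed at use sites.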
Quotient types don't (won't? shouldn't? can't?) rely on a notion
(effective or otherwise) of canonical form, although it's always nice if
you can get one. I'm planning to implement quotients as a class of
abstract datatypes, with
  A : * ;  ~ : A->A->* ;  Q : EqRel A ~
  --------------------------------------
  Quot A (~) Q : *

  a : A
  --------------------------------------
  [a] : Quot A (~) Q

  x : Quot A (~) Q
  P : Quot A (~) Q -> *
  p : Pi a:A. P [a]
  q : Pi a,a':A. a ~ a' -> (p a : P [a]) = (p a' : P [a'])
  --------------------------------------
  expose x P p q : P x
That's to say, access to the element of the carrier is a privilege which
comes with the explicit responsibility to respect the equivalence. We're
correspondingly entitled to consider [a],[a'] : Quot A (~) Q to be
observationally equal if a ~ a'. Jacking that up to a heterogeneous
definition is a bit annoying but not problematic. (When is one class in
one quotient equal to another class in another quotient? When the
quotients are equal and the representatives are equivalent!)
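A minimal Haskell sketch of this interface (the names box and expose
are mine; Haskell cannot check the proof obligation q, so respecting
the equivalence appears only as a comment, where in the type theory it
is a genuine premise of expose):

```haskell
-- Sketch: quotients as a class of abstract datatypes. In a real
-- development the Q constructor would be hidden behind a module
-- boundary, so 'expose' is the only access to the representative.
newtype Quot a = Q a

box :: a -> Quot a   -- plays the role of [a] in the rules above
box = Q

-- The caller of 'expose' takes on the obligation that p respects the
-- equivalence (the premise q); here that is by convention, not checked.
expose :: Quot a -> (a -> p) -> p
expose (Q a) p = p a

-- Usage: Integer quotiented by congruence mod 2. 'parity' respects the
-- equivalence (a ~ a' implies parity a == parity a'), so it is a legal p.
parity :: Integer -> Bool
parity = even

main :: IO ()
main = print (expose (box 6) parity, expose (box 7) parity)
```

Note that nothing canonicalises the representative: 6 and 8 remain
distinct inside the carrier, and it is the restriction to
equivalence-respecting eliminators that makes [6] and [8]
observationally equal.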
An interesting question is what the definitional equality should do with
quotients, although an acceptable minimum is to reduce to the
definitional equality on the carrier.
What has happened here? We've relocated the proof obligation that you
have when defining setoid functions, however trivial, to the place where
the work is really needed.
It seems that OTT with quotient types will be enough to model anything
- we can define any data type or computation, with any equivalence
relations we like; we don't need anything else. In particular, we don't
need the fiddly setoid model commonly used in ITT. Having personally
tried to formalise category theory using setoids in Haskell, I can
testify that it is a pain, so I look forward to trying OTT.
Depending on your capacity for enduring primitive conditions, this may
be possible quite soon...
However, establishing that OTT has certain important metatheoretical
properties such as normalisation is future work.
There's been quite a lot of progress since that paper. Wouter's Agda
development shows that the structural coercion mechanism is normalising
(if induction-recursion is normalising). Moreover, the weird "s [Q:S =
T> == s if S == T" rule doesn't seem to mess things up as badly as I had
feared. In particular, it now looks like we don't need to use it in
evaluation, just in deciding the conversion of neutral terms.
Intuitively, an identity coercion can only inhibit evaluation if it is
hiding a canonical constructor of a canonical type from something which
would otherwise consume it, but under those circumstances, the
structural rules push the coercion under the constructor anyway. Both
the implementation and the metatheory are looking a lot simpler than
they did a year ago.
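Concretely, the structural rule for \Sigma -types pushes a coercion
under the pair constructor componentwise, roughly as follows (notation
loose: I write Q_0, Q_1 for the proofs the equality Q yields for the
two components):

  coe (\Sigma x{:}S.\,T) (\Sigma x{:}S'.\,T') Q (s, t)
    \leadsto (s', t')
  where s' = coe S S' Q_0 s
        t' = coe (T[s]) (T'[s']) Q_1 t

So a coercion at a canonical type applied to a canonical value always
steps to a value with the constructor back on the outside, which is why
an identity coercion cannot get stuck in front of a consumer.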
All the best