[ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]

Dear Sandro,

Yes, you're right -- I didn't answer the question, since I was too
taken by the subject line. :)

Anyway, I do think that HoTT with a non-reducible univalence axiom is
still a programming language, because we can give a computational
interpretation to that language: for example, you could follow the
strategy of Angiuli, Harper and Wilson's POPL 2017 paper,
*Computational Higher-Dimensional Type Theory*.

Another, simpler example comes from Martin Escardo's example upthread
of basic Martin-Löf type theory with the function extensionality
axiom. You can give a very simple realizability interpretation to the
equality type and extensionality axiom, which lets every compiled
program compute.
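
(To give a flavour of how such an interpretation can work, here is a
minimal sketch in OCaml -- the names and types are mine, purely for
illustration, and this is not Escardo's actual construction: if
equality proofs are realized by a trivial value, then an equality
axiom is realized "for free", and programs that transport along it
still run.)

    (* a proof of equality carries no data beyond the trivial witness *)
    type (_, _) eq = Refl : ('a, 'a) eq

    (* transport along an equality proof, by matching on the witness *)
    let transport : type a b. (a, b) eq -> a -> b =
      fun eq x -> match eq with Refl -> x

    (* a stand-in "axiom": its realizer is just the trivial proof *)
    let funext_axiom : type a b. unit -> (a -> b, a -> b) eq =
      fun () -> Refl

    let () =
      let f = transport (funext_axiom ()) (fun n -> n + 1) in
      print_int (f 41)   (* prints 42: using the axiom never blocks evaluation *)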

What you lose in both of these cases is not the ability to give a
computational model to the language, but rather the ability to
identify normal forms and to use an oriented version of the equational
theory of the language as the evaluation mechanism.
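
(As a reminder of what that mechanism looks like when it is available,
here is a minimal sketch in OCaml -- my own illustration, not taken
from any of the papers above: orienting the beta equation of the
untyped lambda calculus left-to-right and applying it repeatedly
already gives an evaluator.)

    (* untyped lambda terms, with strings for variable names *)
    type term = Var of string | Lam of string * term | App of term * term

    (* naive substitution: fine for illustration as long as bound
       names are kept distinct (capture-avoidance elided) *)
    let rec subst x s = function
      | Var y -> if x = y then s else Var y
      | Lam (y, t) -> if x = y then Lam (y, t) else Lam (y, subst x s t)
      | App (t, u) -> App (subst x s t, subst x s u)

    (* one step of the oriented beta equation: (\x. t) u ~> t[u/x] *)
    let rec step = function
      | App (Lam (x, t), u) -> Some (subst x u t)
      | App (t, u) ->
        (match step t with
         | Some t' -> Some (App (t', u))
         | None ->
           (match step u with Some u' -> Some (App (t, u')) | None -> None))
      | Lam (x, t) ->
        (match step t with Some t' -> Some (Lam (x, t')) | None -> None)
      | Var _ -> None

    (* evaluation = rewriting with the oriented equation until no
       beta-redex remains *)
    let rec normalise t =
      match step t with Some t' -> normalise t' | None -> t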

This loss is not an overly shocking phenomenon: it occurs even in
languages much simpler than dependent type theories. For example, once
you add the reference type `ref a` to ML, the language no longer has
normal forms, because the ref type does not come with introduction and
elimination forms governed by beta- and eta-laws.
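
A concrete instance of that failure, as a quick OCaml illustration (a
standard observation, not specific to any one ML dialect): the usual
beta law for `let` breaks as soon as the bound expression allocates a
reference.

    let shared   = let r = ref 0 in (r, r)   (* one cell, aliased twice *)
    let unshared = (ref 0, ref 0)            (* the "reduced" version: two cells *)

    let () =
      fst shared   := 1;
      fst unshared := 1;
      Printf.printf "%d %d\n" !(snd shared) !(snd unshared)
      (* prints "1 0": substituting r's definition for its occurrences
         changes observable behaviour, so terms involving references
         have no well-behaved normal forms to reduce to *)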

Another way of thinking about this is that often, we *aren't sure*
what the equational theory of our language is or should be. This is
because we often derive a language by thinking about a particular
semantic model, and don't have a clear idea of which equations are
properly part of the theory of the language, and which ones are
accidental features of the concrete model.

For example, in the case of name generation – i.e., ref unit – our
intuitions for which equations hold come from the concrete model of
nominal sets. But we don't know which of those equations should hold
in all models of name generation, and which are "coincidental" to
nominal sets.
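
To make that concrete, here is one equation (written with `ref ()` as
the name generator) that the nominal-sets model does validate:
generating two fresh names and swapping them is invisible to any
client, since the only observation available on names is an identity
test.

    (* two name-generating programs; the nominal-sets model makes
       them equal, because swapping two fresh names is undetectable *)
    let p1 () = let a = ref () in let b = ref () in (a, b)
    let p2 () = let a = ref () in let b = ref () in (b, a)

Whether an equation like this belongs to the theory of name generation
itself, or only to this particular model, is exactly the question.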

Another, more practical example comes from the theory of state. We
all have the picture of memory as one big array, updated in place by
assembly instructions à la the state monad. But this picture does not
accurately describe the behaviour of memory on modern multicore
systems. So a proper theory of state for this setting should have
fewer equations than the folk model of state validates.
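
The classic "store buffering" litmus test makes the mismatch concrete.
Here is a sketch in OCaml 5 (using Domain.spawn for parallelism;
whether the weak outcome is actually observable depends on the
compiler's memory model and the hardware):

    let x = ref 0 and y = ref 0

    let () =
      let d1 = Domain.spawn (fun () -> x := 1; !y) in
      let d2 = Domain.spawn (fun () -> y := 1; !x) in
      let r1 = Domain.join d1 and r2 = Domain.join d2 in
      (* In the folk "memory is one big array" model, at least one of
         r1 and r2 must be 1. On a weak-memory machine, with plain
         non-atomic references, both can in principle be 0 -- an
         outcome the folk model wrongly rules out. *)
      Printf.printf "r1=%d r2=%d\n" r1 r2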


Best,
Neel

On 19/05/2021 09:03, Sandro Stucki wrote:
Talia: thanks for a thought-provoking question, and thanks everyone else for all the interesting answers so far!

Neel: I love your explanation and all your examples!

But you didn't really answer Talia's question, did you? I'd be curious to know where and how HoTT without a computation rule for univalence would fit into your classification. It would certainly be a language, and by your definition it has models (e.g. cubical ones) which, if I understand correctly, can be turned into an abstract machine (either a rewriting system per your point 4 or whatever the Agda backends compile to). So according to your definition of programming language (point 3), this version of HoTT would be a programming language simply because there is, in principle, an abstract machine model for it? Is that what you had in mind?

Cheers
/Sandro


On Wed, May 19, 2021 at 6:21 AM Neel Krishnaswami <[email protected]> wrote:

    [ The Types Forum,
    http://lists.seas.upenn.edu/mailman/listinfo/types-list ]

    Dear Talia,

    Here's an imprecise but useful way of organising these ideas that I
    found helpful.

    1. A *language* is a (generalised) algebraic theory. Basically, think
         of a theory as a set of generators and relations in the style of
         abstract algebra.

         You need to beef this up to handle variables (e.g., see the work of
         Fiore and Hamana) but (a) I promised to be imprecise, and (b) the
         core intuition that a language is a set of generators for terms,
         plus a set of equations these terms satisfy is already totally
         visible in the basic case.

         For example:

         a) the simply-typed lambda calculus
         b) regular expressions
         c) relational algebra

    2. A *model* of a language is literally just any old mathematical
         structure which supports the generators of the language and
         respects the equations.

         For example:

         a) you can model the typed lambda calculus using sets
            for types and mathematical functions for terms,
         b) you can model regular expressions as denoting particular
            languages (ie, sets of strings)
         c) you can model relational algebra expressions as sets of
            tuples

    2. A *model of computation* or *machine model* is basically a
         description of an abstract machine that we think can be implemented
         with physical hardware, at least in principle. So these are things
         like finite state machines, Turing machines, Petri nets, pushdown
         automata, register machines, circuits, and so on. Basically, think
         of models of computation as the things you study in a computability
         class.

         The Church-Turing thesis bounds which abstract machines we think it
         is possible to physically implement.

    3. A language is a *programming language* when you can give at least
         one model of the language using some machine model.

         For example:

         a) the types of the lambda calculus can be viewed as partial
            equivalence relations over Gödel codes for some universal Turing
            machine, and the terms of a type can be assigned to equivalence
            classes of the corresponding PER.

         b) Regular expressions can be interpreted into finite state
            machines quotiented by bisimulation.

         c) A set in relational algebra can be realised as equivalence
            classes of B-trees, and relational algebra expressions as nested
            for-loops over them.

        Note that in all three cases we have to quotient the machine model
        by a suitable equivalence relation to preserve the equations of the
        language's theory.

        This quotient is *very* important, and is the source of a lot of
        confusion. It hides exactly the distinctions that the language's
        theory denies, but that is not always what the programmer wants –
        e.g., is merge sort equal to bubble sort? As mathematical
        functions, they surely are, but considered as operations running
        on an actual computer, we have strong preferences!

    4. A common source of confusion arises from the fact that if you have
         a nice type-theoretic language (like the STLC), then:

         a) the term model of this theory will be the initial model in the
            category of models, and
         b) you can turn the terms into a machine
            model by orienting some of the equations the lambda-theory
            satisfies and using them as rewrites.

         As a result we abuse language to talk about the theory of the
         simply-typed calculus as "being" a programming language. This is
         also where operational semantics comes from, at least for purely
         functional languages.

    Best,
    Neel

    On 18/05/2021 20:42, Talia Ringer wrote:
     > [ The Types Forum,
     > http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
     >
     > Hi friends,
     >
     > I have a strange discussion I'd like to start. Recently I was
     > debating with someone whether Curry-Howard extends to arbitrary
     > logical systems---whether all proofs are programs in some sense.
     > I argued yes, he argued no. But after a while of arguing, we
     > realized that we had different axioms if you will modeling what a
     > "program" is. Is any term that can be typed a program? I assumed
     > yes, he assumed no.
     >
     > So then I took to Twitter, and I asked the following questions
     > (some informal language here, since audience was Twitter):
     >
     > 1. If you're working in a language in which not all terms compute
     > (say, HoTT without a computational interpretation of univalence,
     > so not cubical), would you still call terms that mostly compute
     > but rely on axioms "programs"?
     >
     > 2. If you answered no, would you call a term that does fully
     > compute in the same language a "program"?
     >
     > People actually really disagreed here; there was nothing
     > resembling consensus. Is a term a program if it calls out to an
     > oracle? Relies on an unrealizable axiom? Relies on an axiom that
     > is realizable, but not yet realized, like univalence before
     > cubical existed? (I suppose some reliance on axioms at some point
     > is necessary, which makes this even weirder to me---what makes
     > univalence different to people who do not view terms that invoke
     > it as an axiom as programs?)
     >
     > Anyways, it just feels strange to get to the last three weeks of
     > my programming languages PhD, and realize I've never once asked
     > what makes a term a program 😅. So it'd be interesting to hear
     > your thoughts.
     >
     > Talia
     >
