[ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]

In real analysis, it may be contentious what 0^0 ought to be, if anything at all.

In set theory (classical or constructive), X^0 is always 1, even when X=0. And so it is in type theory.

This is because there is only one function 0->X (that with empty graph, or vacuously defined by zero cases).
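
In Haskell notation, for instance (a small sketch; Void from Data.Void plays the role of 0, and the name emptyFunction is just for illustration):

    import Data.Void (Void, absurd)

    -- Void has no inhabitants; 'absurd' is the one and only function
    -- out of it, defined by zero cases (the "empty graph")
    emptyFunction :: Void -> Integer
    emptyFunction = absurd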

So if you imagine 1=unit type="true" (and more generally any inhabited set/type X=true) and 0=_|_=false (the type with no inhabitants), then the "classical truth table" agrees with the constructive meaning of (X->Y) = Y^X. The constructive meaning gives you more, in that you are not restricted to 0 and 1 for the possibilities of what X and Y can be.

Of course, one should carefully distinguish the different meanings of the notation _|_ invoked in the discussions on this list: it means different things in, e.g., domain theory and logic. In domain theory, it is not a type, but an element of every domain.

I hope this clears up the confusion.

M.

On 05/05/13 22:04, Kalani Thielen wrote:

Thanks for the response.  Maybe I should have stated my question at the start 
instead of at the end, because I think that I might have been a bit unclear.  
Let me try again.

If |A->B| = |B|^|A| and ~A = A->_|_ then e.g. |~Bool| (a continuation on a Bool) = 
|Bool->_|_| = 0^2 = 0.  This seems to imply that there are 0 continuations on Bool 
values (or any other non-empty set, though there's still the identity function _|_->_|_ 
= 0^0 = 1).
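
(To make my count concrete in Haskell, reading _|_ as Void from Data.Void; the name below is just illustrative. Any total Bool -> Void would have to produce a value of Void, and there are none, so the definition below typechecks only by diverging:)

    import Data.Void (Void)

    -- the 0^2 = 0 count in action: no total function Bool -> Void;
    -- this definition "inhabits" the type only by never returning,
    -- i.e. it is _|_, not a genuine inhabitant
    noCont :: Bool -> Void
    noCont b = noCont b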

I'm sure that my interpretation or expectation is wrong somewhere (I'm just an 
uneducated country programmer, like I said) and I'd appreciate somebody setting 
me straight.

I don't think that we need to worry about induction/coinduction, as there's no reference to 
recursive types here.  The considerations about CBV/CBN and Turing-completeness forcing _|_ into 
all types can be set aside too, I believe, by just considering the translation of classical logic 
into the simply-typed lambda calculus (as described in e.g. Timothy Griffin's "A Formulae-as-Types 
Notion of Control").  Consider just CPS-transformed boolean functions then, where the only 
source of "non-termination" is your expectation that invoking a continuation produces 
0 values (i.e., it never returns).
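
Concretely, the sort of thing I mean (a sketch with names of my own, using a polymorphic answer type r in place of _|_):

    type K r a = a -> r              -- a continuation on a

    -- the CPS transform of 'not': it "returns" only by invoking k
    notCPS :: Bool -> K r Bool -> r
    notCPS b k = k (not b)

    -- instantiating r to run the computation with a final continuation
    runNot :: Bool -> Bool
    runNot b = notCPS b id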

Are there 0 continuations, or am I miscounting them somehow (or should counting not apply 
here)?  I'm just very curious to develop intuition about "classical types" a 
bit more, since I've found them to be very useful and reasonable in other respects (and 
as I said, I'm sure that it's just my intuition that's wrong or confused here).

Thanks.



On May 4, 2013, at 6:19 PM, Moez AbdelGawad <[email protected]> wrote:

Hi Kalani,

Here are my two cents.

_|_  (bottom/divergence) is a member of every type, including the "empty" type. 
 The cardinality of the empty type is thus 1, not 0.  Also, for the same reason, the 
cardinality of the Bool type is 3, not 2.

-> is the constructor of (lazy/call-by-name) continuous functions (i.e., not of all possible 
mathematical functions). 'Continuous' here is in a domain-theoretic sense, sometimes also called 
'Scott-continuity'. The cardinality |A->B| is thus not necessarily |B|^|A|. Even for 'flat' 
domains/types (and, without much ado, Bool and the empty type are flat), monotonicity constrains 
the count: a continuous function that sends _|_ to a defined element must be constant, so 
|A->B| = |B|^(|A|-1) + (|B|-1) rather than |B|^|A|.  Also, note that -o-> usually denotes the 
strict/eager/call-by-value version of ->. If, again, A and B are flat, |A-o->B| = |B|^(|A|-1), 
where |A|-1 is the cardinality of non-bottom elements in A, because -o-> can map _|_ in A only to 
_|_ in B, not to other elements of B.

As such, if I assume you meant eager functions over booleans, |Bool-o->Bool| = 3^2 = 9, not 
4  (I believe you can easily figure out these nine functions if you note that a function, in 
addition to returning true or false, may also diverge/"return" _|_).  Similarly, we 
have |A->_|_| = 1.
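
For concreteness, here is a Haskell sketch of the enumeration (names are mine; 'undefined' stands in for _|_, and the case on the argument makes each function strict, as -o-> requires):

    bot :: Bool
    bot = undefined   -- plays the role of _|_

    -- a strict function Bool -o-> Bool, tabulated by its values at
    -- True and False; the case forces the argument, so _|_ |-> _|_
    mk :: Bool -> Bool -> (Bool -> Bool)
    mk t f b = case b of
      True  -> t
      False -> f

    -- 3 * 3 = 9 functions, matching |Bool-o->Bool| = 3^2 = 9
    nine :: [Bool -> Bool]
    nine = [ mk t f | t <- [True, False, bot], f <- [True, False, bot] ]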

Regards,

-Moez

From: [email protected]
Date: Sat, 4 May 2013 11:49:48 -0400
To: [email protected]
CC: [email protected]
Subject: Re: [TYPES] [tag] Re: Declarative vs imperative

And even in the context of Curry-Howard correspondence, classical logic is a 
legitimate setting to discuss
languages with control operators, first-class continuations, static catch/throw 
a la Scheme etc.

I am not an academic or qualified in any way to speak on this, I'm just an 
uneducated, small town, country programmer, but on this point you've raised I 
wonder if the list could clear up a bit of confusion that I have.

I'm familiar with thinking of types as sets and logical connectives (type 
constructors) as operations on sets. So the type A*B has size |A|*|B|, the product of 
the sizes of two sets. A->B has size |B|^|A| and this works out well (e.g.: 
Bool->Bool has size 2^2 = 4: not, id, const True, const False).
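
(In Haskell notation, the counts above can be spelled out; the names are just for illustration:)

    -- |Bool*Bool| = 2*2 = 4 pairs
    pairs :: [(Bool, Bool)]
    pairs = [ (a, b) | a <- [False, True], b <- [False, True] ]

    -- |Bool->Bool| = 2^2 = 4 total functions
    four :: [Bool -> Bool]
    four = [not, id, const True, const False]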

So like you say, type negation corresponds to a continuation on that type (where a 
continuation doesn't return any value at all, satisfying the empty type). So 
~A=A->_|_. That interpretation works out really well too, because identities like 
A+B=~A->B can be read as compilation techniques for variants (with the obvious 
construction and destruction approaches).
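
(For instance, here is a hedged Haskell sketch of the construction side, reading _|_ as Void from Data.Void; the names are mine. Both injections are definable as-is; the destruction direction is the part that needs a control operator, as in Griffin's translation:)

    import Data.Void (Void, absurd)

    -- the encoding A+B ~ (A -> _|_) -> B
    type Sum a b = (a -> Void) -> b

    inl :: a -> Sum a b
    inl a = \notA -> absurd (notA a)  -- hand 'a' to the continuation;
                                      -- the call never returns

    inr :: b -> Sum a b
    inr b = \_ -> b                   -- ignore the continuation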

But I'm not quite sure that I've got a straight story on this interpretation of 
negation. I think it suggests that the size of the set of 
continuations A->_|_ is |_|_|^|A|, or 0^|A|, which should be 0, right? So there 
are 0 continuations -- they're impossible to construct?

I appreciate any explanations y'all can offer on this point.

Regards.



On May 3, 2013, at 8:23 PM, Tadeusz Litak <[email protected]> wrote:

If I may chime in. The original point made by Uday re classical logic:

Coming to classical logic per se, I believe it is ill-fit for describing
computer programs or "processes"

is certainly worthy of attention. But it does not seem to imply the conclusion 
of that mail:

It is time to leave behind the classical logic. In fact, we should have
done it a long time ago.

(even if it wasn't intended, it does indeed sound "like a total and unconditional 
rejection"... such things happen in the fervor of a discussion :-)

"Logical pluralism" is a position rather well-established in the philosophy of 
logic. I would think that in the context of Computer Science, it is even more tempting.

[incidentally, and perhaps contrary to established views, even Brouwer himself 
could perhaps be seen as one of the first logical pluralists. While he very 
strongly rejected Fregean-Russellian logicism in *foundations of mathematics*, 
he always held the work of Boole and the whole algebraic tradition in logic 
in high regard... But this is an aside]

It might even happen to be Uday's own position, if I understand correctly the remark that 
"we can perfectly well circumscribe various regimens for various purposes." 
Most of my email will elaborate on this.


I would simply say that whenever one wants, needs or has to think of all propositional 
formulas (also those possibly involving implication, and also those involving fusion, 
"tensor product" or what have you) as *rewritable to a conjunctive-disjunctive 
normal form without loss of information*, then the underlying domain logic is essentially 
classical.
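
(A concrete instance, to make "essentially classical" tangible: the CNF rewriting step that turns p -> q into ~p \/ q is only half constructive. From ~p \/ q one gets p -> q intuitionistically, but the converse direction already requires excluded middle.)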

It is hard to imagine whole areas of Theoretical CS if rewriting formulas to 
CNF or proofs by contradiction/contraposition/excluded middle are suddenly 
deemed outdated and/or illegal... I mean not only and not even primarily logic 
programming, but also finite model theory, complexity theory, 
ontologies/description logics or the whole PODS/VLDB community...

[actually, as a curious aside, the logic of database theorists, while certainly not constructive, is not fully 
classical either. They dread the top constant and unrestricted negation, preferring instead relative complement. This 
has to do with assumptions such as "closed world", "active domain" and the demand that queries are 
"domain independent". In short, their logic is rather that of Boolean rings without identity, which---funnily 
enough---also happen to be the setting of Stone's original work. It is just contemporary and ahistorical 
misrepresentation to say that Stone was working with "Boolean algebras". But this, again, is an aside...]

And even in the context of Curry-Howard correspondence, classical logic is a 
legitimate setting to discuss languages with control operators, first-class 
continuations, static catch/throw a la Scheme etc. There is so much stunningly 
beautiful work in that community that deserves to be better known...
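
For a taste (a small Haskell illustration of my own, using the standard Cont monad from mtl): the type of callCC is, through Curry-Howard, exactly Peirce's law ((A -> B) -> A) -> A, a classically-but-not-intuitionistically valid principle.

    import Control.Monad.Cont (Cont, callCC, runCont)

    -- callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
    escapeDemo :: Cont r Int
    escapeDemo = callCC $ \escape -> do
      _ <- escape 42   -- invoking the captured continuation jumps out
      return 0         -- never reached

    -- runCont escapeDemo id == 42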


But, equally obviously, not all programming languages have such constructs. 
Furthermore, as linear logicians (already mentioned by Uday) will be happy to 
tell you, there are contexts where even the intuitionistic notion of implication (so 
also the one of topos-theorists or proof-assistants, for example) is way too 
coarse-grained. Particularly when one wants, needs or has to be resource-aware. 
Also, the recent work of Wadler, Pfenning and other authors suggests that the 
Curry-Howard correspondence for concurrency will have to do with linear rather 
than intuitionistic logic.


[And as substructural logicians will be happy to tell you, there are contexts 
where even linear logicians may seem coarse-grained, thick-skinned, 
corner-cutting brutes. :-) But this, yet again, is an aside.]

But where I most likely would part ways with Uday is when he claims (if I understand 
correctly) that we are approaching or even should approach "a final answer" of 
any kind. To me, searching for one logic valid in all CS-relevant contexts seems a rather 
misguided enterprise. Especially or at least when we talk about logic understood as a 
formal inference system.

What we perhaps need is more introductory logic courses---and also handbooks 
and monographs---for budding CS undergraduates and graduates (and perhaps also 
some postgraduates) which would make them understand the subtlety and 
complexity of the picture. And the benefits and costs of adopting specific 
inference rules.

Proof-assistant based courses seem to go in just the right direction. Right now I am 
teaching one based on that excellent "Software Foundations" material of Benjamin 
Pierce et al. I think it changes and sharpens not only the thinking of the students, but also 
that of the teacher himself (or herself :-).

But even this only goes so far: after all, the underlying logic is essentially 
intuitionistic... On the other hand, any weaker one could quickly become a 
nightmare for actually discussing things as demanding as the semantics of 
programming languages (with bangs and exclamation marks in every second 
lemma... :-)


To conclude, a few minor points:


In fact, we cannot accept that we have a final answer until the entire natural 
language has been formalized

We'll wait for this only a little longer than for the invention of the perpetuum 
mobile and the heat death of the universe... :-)

And which "natural language" are we talking about? Sometimes I think the only reason why, 
e.g., Chomsky ever came up with the idea of "universal grammar" was that he did not speak 
too many languages in the first place (although Hebrew seems reasonably distant from English)...


(The view I take, following Quine, is that logic is a regimentation of natural 
language.

Same objection as above, and this is just to begin with.

[The only redeeming features of Quine were that he wrote well and had a certain logical 
culture. As a philosopher, in my opinion, he had a rather destructive influence on 
development of logic, particularly in philosophy departments, even if nowhere near as 
disastrous as the neopositivists or the majority of "analytic philosophers". 
But this is just one more aside...]


We can perfectly well circumscribe various regimens for various purposes.

As said above, I'm perfectly in agreement with this statement.



I am entirely happy with the characterization of logical connectives as "information 
composition" operators. But we can only accept it as a good, but vague, intuition. We do not 
know what this "information" is. Neither do we know what the information is about. So, in 
order to claim that classical logic is a canonical information composition calculus, somebody would 
need to formalize those notions.


I think I can agree with every word here. Perhaps the difference then is not so 
big...

I guess then that "leaving classical logic behind" meant rather "stop presenting it to students 
as the only, final and >>real<< formalism for Computer Scientists, everything else being a marginal 
pathology, if mentioned at all"... and if this was indeed intended by this remark, I would have a hard 
time disagreeing.

Okay... back then to popcorn and a comfortable seat in the audience...

Best,
t.

--
Martin Escardo
http://www.cs.bham.ac.uk/~mhe
