On 20 May 2009, at 00:01, John Mikes wrote:

> As always, thanks, Bruno for taking the time to educate this bum.
> Starting at the bottom:
> "To ask a logician the meaning of the signs, (...) is like asking
> the logician what is logic, and no two logicians can agree on the
> possible answer to that question."
> This is why I asked  --  YOUR  -- version.
> *
> "Logic is also hard to explain to the layman,..."
> I had a didactically gifted boss (1951) who said 'if you understand
> something to a sufficient depth, you can explain it to any average
> educated person'.
> And here comes my
> "counter-example" to your A&B parable: condition: I have $100 in my  
> purse.
> 'A'  means "I take out $55 from my purse" and it is true.
> 'B' means: I take out $65 from my purse - and this is also true.
> A&B is untrue (unless we forget about the meaning of '&', or 'and', in
> any language).

As I said, you are a beginner. And you confirm my theory that beginners
can be great geniuses! You have just discovered the field of linear
logic. Unfortunately the discovery has already been made, by Jean-Yves
Girard, a French logician. Your money example is often used by Girard
himself to motivate linear logic. Actually my other motivation for
explaining the combinators, besides exploiting the Curry-Howard
isomorphism, was to have a very finely grained notion of deduction, so
as to provide a simple introduction to linear logic. In linear logic
the rules of deduction are such that the proposition "A" and the
proposition "A & A" are not equivalent. Intuitionistic logic can be
regained by adding a "modal" operator, written "!" and read "of
course", so that !A means A & A & A & ...

Now, a presentation of a logic can be long and boring, and I will not
do it now because it is a bit off topic. After all, I was trying to
explain to Abram why we try to avoid logic as much as possible on this
list. But yes, in classical logic you can use the rule which says that
if you have proved A then you can deduce A & A. For example, from
1+1=2 you can deduce the proposition 1+1=2 & 1+1=2. And indeed such
rules are not among the rules of linear logic. Linear logic is a
wonderful, quickly expanding field with many applications in computer
science (for quasi-obvious reasons), but also in knot theory, category
theory, etc.
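Classically that duplication rule is harmless because A and A & A have the same truth table, so nothing is lost or gained by repeating a proved proposition. A quick sanity check (my own throwaway snippet, not anything from the logic texts):

```python
# Classically, A and A & A agree on every valuation, which is why the
# duplication rule (from A, deduce A & A) is sound in classical logic.
def equivalent(f, g):
    return all(f(a) == g(a) for a in (True, False))

print(equivalent(lambda a: a, lambda a: a and a))  # True: A is equivalent to A & A
print(equivalent(lambda a: a, lambda a: not a))    # False: a genuine non-equivalence, for contrast
```

It is exactly this truth-functional collapse that the linear rules refuse, by keeping track of how many times A is used.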

The fact that you invoke a "counterexample" shows that you have an  
idea of what (classical) logic is.

But it is not a counterexample; you are just pointing to the fact
that there are many different logics, and indeed there are. Now, just
to reason about those logics, it is nice to choose "one" logic, and
the most common choice is classical logic.

Logicians are just scientists, and they always give the precise
axioms and rules of the logic they are using or talking about. A
difficulty comes from the fact that we can study a logic with that
same logic, and this can easily introduce confusion of levels.

> *
> "I think you are pointing the finger on the real difficulty of logic  
> for beginners...."
> How else do I begin than a beginner? to learn signs without meaning,  
> then later on develop the rules to make a meaning? My innate common  
> sense refuses to learn anything without meaning. Rules, or not  
> rules. I am just that kind of a revolutionist.

I think everybody agrees, but in logic the notion of meaning is also
studied, and so you have to abstract from the intuitive meaning to
study the mathematical meaning. Again, this needs training.

> Finally, (to begin with)
> ..."study of the laws of thought, although I would add probability  
> theory to it ...???"
> I discard probability as a count - consideration  inside a limited  
> (cut) model, 'count'
> - also callable: statistics, strictly limited to the given model- 
> content of the counting -
> with a notion (developed in same model) "what, or how many the next  
> similar items MAY be" - for which there is no anticipation in the  
> stated circumstances. To anticipate a probability one needs a lot of  
> additional knowledge (and its critique) and it is still applicable  
> only within the said limited model-content.
> Change the boundaries of the model, the content, the statistics and  
> probability will change as well. Even the causality circumstances  
> (so elusive in my views).

I am afraid you are confirming my other theory, according to which
great geniuses can say great stupidities (with all my respect, of
course <grin>).
Come on, John: there are enough real difficulties in what I try to
convey; coming back to a critique of the notion of probability is a
bit of a stretch. Einstein established the existence of atoms through
Brownian motion, using Boltzmann's classical statistical physics. I
have heard that Boltzmann killed himself due to his contemporaries'
incomprehension of that fundamental idea (judged obvious today). But
today there is no longer any conceptual problem with most uses of
statistics (except when used by politicians!).
Of course you are right that statistics depends on the "boundaries",
but that is exactly the reason why we need a theory of probability, to
avoid dishonest applications, and this has been done by Kolmogorov in
a convincing way.
Here, I was just following George Boole in defining, in a very general
way, the laws of thought as LOGIC + PROBABILITY. This is still
defensible if we accept those words in a broad, open-minded sense.
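Your point that probabilities shift with the model's boundaries is exactly what a Kolmogorov-style setting makes explicit: the probability of an event is always relative to a declared sample space. A minimal sketch (finite spaces with equally likely outcomes; the dice are my own toy example, not anything from our exchange):

```python
from fractions import Fraction

# Probability of an event in a finite sample space with equally likely
# outcomes, in the spirit of Kolmogorov's axiomatisation.
def prob(sample_space, event):
    outcomes = list(sample_space)
    hits = sum(1 for w in outcomes if w in event)
    return Fraction(hits, len(outcomes))

high = {5, 6}      # the event "the die shows 5 or 6"
d6 = range(1, 7)   # model boundary: a six-sided die
d8 = range(1, 9)   # enlarge the model: an eight-sided die

print(prob(d6, high))  # 1/3
print(prob(d8, high))  # 1/4
```

Changing the boundaries changes the number, as you say; the theory does not forbid that, it just forces us to state which space we are using, which is what blocks the dishonest applications.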

I will have opportunities to say more when I explain a bit more of
the math, for UDA step 7, and a bit of AUDA, to Kim.




You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com