George Levy wrote:

> With my background in electronic engineering, I am moderately versed in
> logic, in particular Boolean logic. I am sorry for my long hmmmm. The
> going got rough when you started talking about knowability and
> believability. But I realize that if we are to investigate consciousness
> these are ideas that have to be talked about.

Yes.

> So let's go with it.. I promise I'll give it a shot.

Take your time for ruminating on this.

> It will be very
> instructive.... However it would help if together with the string of
> symbols there was an English translation.

OK. Just tell me when you don't understand.

>> The semantics of a well formed formula like p & (q v r)
>> will follow by the usual use of truth tables. For example:
>>
>>   A v B        A -> B
>>   1 1 1        1 1 1
>>   1 1 0        1 0 0
>>   0 1 1        0 1 1
>>   0 0 0        0 1 0
>
> hmmm.... do you mean?
>
>   A v B        A -> B
>   1 1 1        1 1 1
>   1 0 1        1 0 0
>   0 1 1        0 1 x
>   0 0 0        0 1 x

In classical propositional logic the "or" is inclusive. That is, A v B
is true by definition in case A is true, or B is true, or both A and B
are true. Latin has a word for each "or": *vel* for the inclusive
logical or, and *aut* for the exclusive logical or.

By definition A -> B is false only when it is falsified, that is, when A
is true and B is false. In fact A -> B can be considered as an
abbreviation of -(A & -B). It is called *material implication* and is a
priori special. C. I. Lewis (re)invented modal logic last century (the
20th) in part for capturing a more reasonable "implication" (by
[](p -> q)). Note that material implication makes p -> (q -> p) a
tautology, i.e. a formula true for all valuations. Some philosophical
logicians consider that a "paradox": we have indeed "the fact that I am
a flying pig *materially implies* the truth of Fermat's last theorem",
and also "the fact that I am a flying pig materially implies the fact
that Moscow is the capital of the USA". Of course it is not a
contradiction (with what we know!) unless you find a proof (or any real
evidence) that *I am a flying pig*. Just interpret the intended meaning
of "p -> q" as a shorthand for -(p & -q).
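The two truth tables above can be checked mechanically. Here is a minimal
Python sketch (the function name is mine, not part of the post) that prints
the tables for the inclusive "v" and the material "->", and confirms that
A -> B coincides with -(A & -B) on every valuation:

```python
def implies(a, b):
    # material implication: false only when a is true and b is false
    return (not a) or b

print("A B | A v B  A -> B  -(A & -B)")
for a in (True, False):
    for b in (True, False):
        print(int(a), int(b), "|  ",
              int(a or b), "     ",
              int(implies(a, b)), "      ",
              int(not (a and not b)))

# The last two columns are identical, so "->" really is an
# abbreviation of -(A & -B).
assert all(implies(a, b) == (not (a and not b))
           for a in (True, False) for b in (True, False))
```

Note that the "or" printed here is the inclusive one: the row a=1, b=1 gives
A v B = 1, unlike the exclusive *aut*.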
I guess you know "de Morgan's laws":

  -(A & B) <-> -A v -B
  -(A v B) <-> -A & -B

We have also that A -> B is equivalent with -B -> -A. "Equivalent" is
taken here either in the intuitive meaning, or in the sense that the
formula ((A -> B) <-> (-B -> -A)) is a tautology (true for all
valuations), as we can verify exhaustively:

  (A -> B)  <->  (-B -> -A)
   1  1 1    1     0  1  0
   1  0 0    1     1  0  0
   0  1 1    1     0  1  1
   0  1 0    1     1  1  1

using the table for p <-> q:

  A <-> B         or, if you prefer (like a multiplication table):
  1  1  1
  1  0  0             <-> | 1 0
  0  0  1             ----+----
  0  1  0              1  | 1 0
                       0  | 0 1

or by saying that (A <-> B) is an abbreviation of (A -> B) & (B -> A).

>> Exercises. Show that the following sentences are valid:
>>
>>   p -> <>p
>>   []p -> [][]p
>>   p -> []<>p
>>   <>p -> []<>p
>>   [](p -> q) -> ([]p -> []q)
>>
>> Of course if <>p -> []<>p is valid, <>TRUE -> []<>TRUE is
>> certainly valid too, and so our "godel second theorem"
>> <>TRUE -> -[]<>TRUE is certainly NOT valid with Leibniz
>> semantics. This just means that formal provability cannot
>> play the role of the Leibnizian "necessity".
>> Kripke's generalisation of Leibniz semantics will provide
>> the necessary tools.
>
> OK

Nice. If someone has not seen that <>p -> []<>p is valid in Leibnizian
semantics, please insist for an explanation. But we will explain it
again with the Kripkean semantics, actually.

>> The non-logician should note that with a semantics we can
>> reason on the validity of sentences without having a formal
>> system in which we could *prove* those formulas.
>> A "difficult" exercise would consist in finding a formal
>> system which would axiomatize the Leibnizian formulas.
>>
>> (In fact it is axiomatized by the system known as S5, which
>> has the axioms:
>>
>>   [](p -> q) -> ([]p -> []q)
>>   []p -> p
>>   <>p -> []<>p
>>
>> + the classical tautologies,
>>
>> with the inference rules:
>>
>>   p   p -> q        p
>>   ----------       ----    + a substitution rule
>>       q            []p
>
> This string of symbols does not mean anything to me...
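The exercises can be checked by brute force. Here is a small Python sketch of
the Leibnizian semantics as described above (the tuple representation and
function names are my own): worlds are all valuations of the letters, []A
holds when A holds in every world, <>A when A holds in some world, and a
formula is valid when it holds in every world.

```python
from itertools import product

def holds(f, world, worlds):
    # f is a letter, the constant 'T', or a tuple ('not', A), ('->', A, B),
    # ('box', A), ('dia', A)
    if isinstance(f, str):
        return True if f == 'T' else world[f]
    op = f[0]
    if op == 'not':
        return not holds(f[1], world, worlds)
    if op == '->':
        return (not holds(f[1], world, worlds)) or holds(f[2], world, worlds)
    if op == 'box':                       # necessity: true in ALL worlds
        return all(holds(f[1], w, worlds) for w in worlds)
    if op == 'dia':                       # possibility: true in SOME world
        return any(holds(f[1], w, worlds) for w in worlds)

def valid(f, letters=('p',)):
    # Leibnizian validity: true at every valuation of the letters
    worlds = [dict(zip(letters, vs))
              for vs in product([True, False], repeat=len(letters))]
    return all(holds(f, w, worlds) for w in worlds)

p = 'p'
print(valid(('->', p, ('dia', p))))                    # p -> <>p
print(valid(('->', ('box', p), ('box', ('box', p)))))  # []p -> [][]p
print(valid(('->', p, ('box', ('dia', p)))))           # p -> []<>p
print(valid(('->', ('dia', p), ('box', ('dia', p)))))  # <>p -> []<>p
# ... whereas the "godel second theorem" shape fails in this semantics:
print(valid(('->', ('dia', 'T'), ('not', ('box', ('dia', 'T'))))))
```

The first four print True and the last prints False, which is exactly the
point made above: <>TRUE -> -[]<>TRUE is NOT Leibniz-valid, so formal
provability cannot play the role of the Leibnizian necessity.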
> Is there a real
> life model to which it applies, a story, a game, anything to give it
> meaning?

There are a lot of (boring) games, for sure! I explain it first for
classical propositional logic itself.

You know, for a logician, a theory is just a set of formulas, called
axioms, and a set of rules of inference, from which new formulas (called
theorems) can be derived. For example, here is the Hilbert-Ackermann
axiomatisation (theory) of Classical Propositional Logic. It is a theory
with the following 3 axioms:

  Axiom a:  p -> (q -> p)
  Axiom b:  (p -> q) -> (-q -> -p)
  Axiom c:  (p -> (q -> r)) -> ((p -> q) -> (p -> r))

Now, if there are no inference rules, you cannot prove any theorems,
except the three axioms themselves. An inference rule is a recipe which
makes it possible to derive new formulas (called theorems) from the
axioms. The two rules of Hilbert-Ackermann are 1) the substitution rule
and 2) the modus ponens rule.

The substitution rule is awkward to describe precisely, but it just says
that you can make careful (uniform) substitutions of formulas for
propositional letters. So from the first axiom you can derive
q -> (p -> q), or (p -> r) -> (q -> (p -> r)), etc.

The very fundamental rule we will have in all logics is the modus
ponens. It is written (and I explain what that means after!):

  A    A -> B
  -----------   Modus Ponens
       B

It means that if you have found a proof of A, and if you have found a
proof of A -> B, then you can deduce B (and now you have a proof of B).

What is a (formal) proof? It is just a sequence of formulas, each of
which is either an axiom or is deduced from the axioms or from theorems
(obtained so far) by a finite number of applications of the rules of
inference. A formal proof is just a syntactical derivation which should
be "mechanically" verifiable.

Example, or game! Find a proof of the formula p -> p in the
Hilbert-Ackermann system. (It is a little tricky, and nobody should
waste his/her time on such a boring exercise.)
But it is nice for illustrating what an axiomatic system is and what
proofs in that system are. And it is worth verifying that the following
solution is indeed a proof.

The solution:

  1) (p -> ((p -> p) -> p)) -> ((p -> (p -> p)) -> (p -> p))
     This is an instance of axiom c with the substitutions
     q/(p -> p) and r/p.

  2) p -> ((p -> p) -> p)
     This is an instance of axiom a with the substitution q/(p -> p).

  3) (p -> (p -> p)) -> (p -> p)
     This follows from 1) and 2) by the modus ponens rule.

  4) p -> (p -> p)
     This is an instance of axiom a with the substitution q/p.

  5) p -> p
     This follows from 3) and 4) by modus ponens again.

Now you will tell me: why? It is much easier to look at the truth table,
isn't it? The answer is multiple:

  1) The difficulty comes here from the fact that the Hilbert-Ackermann
     axiomatic theory is very concise.
  2) Most logical systems have no truth tables.
  3) (And this is the main point.) The interesting things are the
     relations between the syntactical and combinatorial world of the
     proofs and the semantical world of the worlds in which these truths
     apply.

You should convince yourself that:

  1) All formulas provable in the Hilbert-Ackermann system are
     tautologies. This is not difficult: just verify that the axioms are
     tautologies, and that tautologicalness is preserved by the
     application of the rules of inference. We say that the
     Hilbert-Ackermann axiomatic is SOUND with respect to the classical
     (truth table) semantics.

  2) All tautologies are provable in the Hilbert-Ackermann axiomatic.
     Actually this is not easy. I will explain later how to prove a
     similar result. We say that the Hilbert-Ackermann axiomatic is
     COMPLETE with respect to the classical (truth table) semantics.

Remember that a logician has two brains (like everyone): a left brain
and a right brain. The left brain understands only formal, mechanical,
but "easily" communicable (verifiable) proofs; the right brain
understands only semantics, meaning, the attribution of truth values to
propositions in worlds.
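The "mechanically verifiable" character of such a proof can be illustrated in
a few lines of Python (the tuple representation and helper names are mine).
The sketch rebuilds the five steps, applies modus ponens exactly where the
proof says so, and then checks that every line is a tautology, which is the
soundness observation made above:

```python
def I(a, b):
    # build the implication a -> b as a nested tuple
    return ('->', a, b)

def modus_ponens(a, ab):
    # from A and A -> B, deduce B; refuse anything else
    assert ab[0] == '->' and ab[1] == a, "premises do not match"
    return ab[2]

p = 'p'
pp = I(p, p)                                    # p -> p, the goal

# Steps 1, 2 and 4 are the axiom instances named in the proof:
step1 = I(I(p, I(pp, p)), I(I(p, pp), pp))      # axiom c with q/(p -> p), r/p
step2 = I(p, I(pp, p))                          # axiom a with q/(p -> p)
step4 = I(p, pp)                                # axiom a with q/p

# Steps 3 and 5 are forced by modus ponens:
step3 = modus_ponens(step2, step1)              # (p -> (p -> p)) -> (p -> p)
step5 = modus_ponens(step4, step3)              # p -> p

print(step5 == pp)                              # True: the derivation ends at p -> p

# Soundness check: every line of the proof is a tautology.
def taut(f):
    def ev(f, v):
        if isinstance(f, str):
            return v[f]
        return (not ev(f[1], v)) or ev(f[2], v)
    return all(ev(f, {'p': b}) for b in (True, False))

print(all(taut(s) for s in (step1, step2, step3, step4, step5)))  # True
```

If any step were wrong, say using modus ponens on mismatched premises, the
check would fail; that is all "mechanically verifiable" means.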
A logician is happy when he has a formal system with both a theory
(axioms and rules) and a semantics (a mathematical structure..., we will
see), accompanied by a soundness result (the theory proves correct
theorems with respect to the semantics) and a completeness result (the
theory proves all formulas which are true with respect to the
semantics). "Modern logic" can be said to be the science of the
communication between the left and the right brain, if you accept the
image.

Let us go back to S5:

>> [](p -> q) -> ([]p -> []q)
>> []p -> p
>> <>p -> []<>p
>>
>> + the classical tautologies,
>>
>> with the inference rules:
>>
>>   p   p -> q        p
>>   ----------       ----    + a substitution rule
>>       q            []p

Of course the necessitation rule (p/[]p) just says that if you have
found a proof of A, then you can deduce []A. In particular, <>p -> []<>p
being a theorem (it is an axiom, so it is trivially a theorem!),

  [](<>p -> []<>p),
  [][](<>p -> []<>p),
  [][][](<>p -> []<>p), ...

are also theorems, by simple applications of the necessitation rule.

We have completeness and soundness of S5 with respect to the Leibniz
semantics: S5 proves A if and only if A is valid in all the (Leibnizian)
worlds. As always, soundness is easier to prove than completeness. I
will give hints later for proving completeness for the modal logics.

Could you prove in S5 that p -> []<>p? With the completeness result, we
know there is a proof! (Because p -> []<>p is Leibnizian valid, as was
suggested as an exercise in the preceding post.) Semantics are also
useful for proving decidability and other nice (meta)logical properties
of theories.

> Thanks for making this effort.

Thanks for your effort and your patience.

Bruno