Wei Dai wrote:

>Thanks for your answers. They are very helpful.

You're welcome. I just want to add something.

Your general question was "Why use modal logic when
quantifying over worlds is enough?". My basic answer was
that Kripke's possible-world semantics works only for a
subset of the possible modal logics.
You can do modal logic without semantics. In fact modal
logic appeared because of the apparent existence of modalities.
The main ones are "possible" and "necessary", but others
occurred too, like "permitted" and "obligatory", "provable" and
"consistent", "believable" and "imaginable", etc.
The fundamental motivation of a logician is to give purely
syntactical formulas and rules for manipulating formulas, so that
we can reason and communicate reasoning *without* any
meaning. The traditional joke is that a logician does not
want to understand what he talks about!
When you do that you automatically run into the following sort
of problem:

Take a formal theory like S4, again:
(I suppose a language with the usual logical symbols,
including the propositional constants f and t, plus the [])

AXIOMS:   <axioms of classical propositional logic>
           [](p -> q) -> ([]p -> []q)
           []p -> p
           []p -> [][]p

RULES:    p p->q       p
          --------,    ---
             q         []p

This gives at once an infinity of formulas: those derivable
from the axioms by using the rules finitely many times.

I recall that -p is an abbreviation for (p->f), and <>p is
an abbreviation for -[]-p.

The question is: is the formula []p -> <>[]p derivable in S4?
Now that question is not so hard, and you can solve it *syntactically*.
Just find a pathway from the axioms to the formula by using the
rules of inference. (That is: just prove []p -> <>[]p in S4.)
But I can say this because I know the solution!
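For the record, here is one short syntactic pathway (a sketch; it uses only the axiom []p -> p and classical propositional logic, with the abbreviations recalled above):

```latex
% []p -> <>[]p in S4, in three steps:
\begin{align*}
1.\quad & \Box\lnot\Box p \to \lnot\Box p  && \text{axiom } \Box A \to A \text{ with } A := \lnot\Box p\\
2.\quad & \Box p \to \lnot\Box\lnot\Box p  && \text{contraposition (classical logic)}\\
3.\quad & \Box p \to \Diamond\Box p        && \text{definition } \Diamond A := \lnot\Box\lnot A
\end{align*}
```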
Now logicians want not only to prove theorems in their system; they
also want to know whether the system is consistent, that is, whether
it does not prove f, and questions like that.
For example, and this is a *very* difficult exercise, try to prove
that S4 does *not* prove the formula p -> []<>p.

Before the rise of semantics such questions were almost unanswerable
in general. You cannot solve them by searching through all the
proofs, because there are infinitely many proofs.

Something like Kripke semantics makes such an exercise very easy,
once you have the soundness and completeness metatheorem relating
your logic (here S4) to the semantics.

Now it is not very difficult to prove such completeness and soundness
theorems for a system as simple as S4 (see also the references in the
archive below). I give you only the combined soundness and
completeness metatheorem we can use here.

I recall that a Kripke frame is just a set (of "worlds") with some
binary relation among them (the accessibility relation).
A model is a frame together with, for each world w, a function from L
to {0,1}, where L is our set of propositional letters {p, q, r, ...}.
(That is: a model assigns a truth value to each propositional letter
in each world.) I recall also that classical logic is verified in each
world, that is: if p is true in world w and q is true in world w, then
"p -> q" is true in world w, etc. Now the S4/Kripke-semantics soundness
and completeness (S&C) metatheorem is:

   S&C theorem: S4 proves A if and only if A is true in any
   world of any model based on a reflexive and transitive frame.

Now, if "p -> []<>p" were derivable in S4, it would follow from
the S&C metatheorem that "p -> []<>p" is true in any world
of any model based on a reflexive and transitive frame.

So, to prove that "p -> []<>p" is not a theorem of S4 it is
enough to find a model based on a reflexive and transitive frame
in which  "p -> []<>p" is false in some world.

Let us build that counterexample. To make "p -> []<>p" false
in a world w1, by classical logic, you need "p" true in w1 and
"[]<>p" false in w1, i.e. "-[]<>p" true in w1.
But "-[]<>p" is equivalent to "<>[]-p", so it is enough to add a
single world w2, accessible from w1, with "-p" true in it.
Our counterexample is:
   The frame = {w1, w2}
   The accessibility relation R is given by w1Rw1, w2Rw2, w1Rw2
      (if you prefer: R = {(w1,w1), (w2,w2), (w1,w2)})
   The model is given by making p true in w1, and false in w2.
R being reflexive and transitive, and "p -> []<>p" being false at
w1, "p -> []<>p" is thereby shown not to be derivable in S4.
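This counterexample can also be checked mechanically. Here is a minimal Python sketch (the tuple encoding of formulas and all names are my own illustrative choices, not any standard library): truth at a world follows exactly the Kripke clauses recalled above, and <>A is treated as the abbreviation -[]-A.

```python
# A minimal Kripke-model evaluator (a sketch; the encoding is mine).
# A propositional letter is a string; otherwise a formula is a tuple:
# ("not", A), ("imp", A, B), or ("box", A).

def holds(f, w, R, val):
    """True iff formula f is true at world w in the model (R, val)."""
    if isinstance(f, str):                 # a letter like "p"
        return val[w][f]
    op = f[0]
    if op == "not":
        return not holds(f[1], w, R, val)
    if op == "imp":
        return (not holds(f[1], w, R, val)) or holds(f[2], w, R, val)
    if op == "box":                        # []A: A at every accessible world
        return all(holds(f[1], x, R, val) for (v, x) in R if v == w)

def dia(a):                                # <>A  :=  -[]-A
    return ("not", ("box", ("not", a)))

# The counterexample from the text:
R = {("w1", "w1"), ("w2", "w2"), ("w1", "w2")}   # reflexive and transitive
val = {"w1": {"p": True}, "w2": {"p": False}}

print(holds(("imp", "p", ("box", dia("p"))), "w1", R, val))   # False
```

Running it confirms that "p -> []<>p" fails at w1 in this model.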

Another use of the Kripke S&C result is to show that S4, for example,
is decidable (and then to write a theorem prover for S4). This is easy
if you succeed in refining the completeness part of the S&C theorem
above to "finite frames" instead of arbitrary frames. In that case
you know that if a counterexample exists, you can find it.
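With the finite-frame refinement in hand, the decision procedure can be as naive as the following Python sketch (all names are mine): enumerate every model on a reflexive and transitive frame up to a small size and search for a falsifying world.

```python
# Naive countermodel search over finite reflexive-transitive frames
# (a sketch; the encoding matches nothing standard, it is my own).
from itertools import product

def holds(f, w, R, val):
    if isinstance(f, str):
        return val[w][f]
    op = f[0]
    if op == "not":
        return not holds(f[1], w, R, val)
    if op == "imp":
        return (not holds(f[1], w, R, val)) or holds(f[2], w, R, val)
    if op == "box":
        return all(holds(f[1], x, R, val) for (v, x) in R if v == w)

def transitive(R):
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

def countermodel(f, letters=("p",), max_size=2):
    """Return (R, val, w) falsifying f on some small S4 frame, or None."""
    for n in range(1, max_size + 1):
        worlds = list(range(n))
        pairs = [(a, b) for a in worlds for b in worlds if a != b]
        for extra in product([False, True], repeat=len(pairs)):
            # reflexivity is built in; keep only transitive relations
            R = {(w, w) for w in worlds} | {p for p, on in zip(pairs, extra) if on}
            if not transitive(R):
                continue
            for bits in product([False, True], repeat=n * len(letters)):
                val = {w: {l: bits[w * len(letters) + i]
                           for i, l in enumerate(letters)}
                       for w in worlds}
                for w in worlds:
                    if not holds(f, w, R, val):
                        return (R, val, w)
    return None

dia = lambda a: ("not", ("box", ("not", a)))
print(countermodel(("imp", "p", ("box", dia("p")))))   # finds a 2-world refutation
```

The search finds a two-world refutation of "p -> []<>p", while a genuine S4 theorem such as "[]p -> p" survives the whole search.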

But Kripke semantics is useless for a "non-normal"
logic, for example a modal logic without the necessitation rule.
Chellas's excellent book has a chapter on Scott-Montague semantics
(also known as "minimal models"), which can be used in the same
way for weaker modal logics. The Scott-Montague semantics puts a
"topological" or "quasi-topological" structure on the set of worlds.
In Kripke semantics, []p is true at world w if p is true at all worlds
x such that wRx. In Scott-Montague semantics, []p is true at w if the
set of worlds in which p is true belongs to the neighborhoods of w.
This is just to give you an idea of another semantics for modal logic.
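The neighborhood clause is easy to state in code too. Below is a small Python sketch (names are mine): N maps each world to a family of sets of worlds, and []p holds at w exactly when the truth set of p is one of w's neighborhoods. Note how necessitation can fail: p is true everywhere, yet []p is false at w2.

```python
# Scott-Montague ("minimal model") semantics, a sketch with my own names.
# N[w] is a family of frozensets of worlds: the neighborhoods of w.

def truth_set(f, worlds, N, val):
    """The set of worlds where formula f is true."""
    return frozenset(w for w in worlds if holds_sm(f, w, worlds, N, val))

def holds_sm(f, w, worlds, N, val):
    if isinstance(f, str):
        return val[w][f]
    op = f[0]
    if op == "not":
        return not holds_sm(f[1], w, worlds, N, val)
    if op == "imp":
        return (not holds_sm(f[1], w, worlds, N, val)) or \
               holds_sm(f[2], w, worlds, N, val)
    if op == "box":   # []A: the truth set of A is a neighborhood of w
        return truth_set(f[1], worlds, N, val) in N[w]

worlds = ["w1", "w2"]
val = {"w1": {"p": True}, "w2": {"p": True}}   # p true everywhere
N = {"w1": {frozenset(worlds)}, "w2": set()}   # w2 has no neighborhoods

print(holds_sm(("box", "p"), "w1", worlds, N, val))  # True
print(holds_sm(("box", "p"), "w2", worlds, N, val))  # False
```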


PS: I recall my conversation on modal logic with George Levy on
this list. For example:

Modalities and Aristotle Square

Leibniz Semantics

Kripke Semantics
