On 18 Dec 2017, at 07:48, Brent Meeker wrote:
On 12/17/2017 8:06 AM, Bruno Marchal wrote:
On 15 Dec 2017, at 22:19, Brent Meeker wrote:
On 12/15/2017 9:24 AM, Bruno Marchal wrote:
that the statistics of the observable, in arithmetic from inside,
have to "interfere" for Digital Mechanism to make sense in
cognitive science, so MW-appearances are not bizarre at all: it
has to be like that. Eventually, the "negative amplitude of
probability" comes from the self-referential constraints (the
logic of []p & <>p on p sigma_1, for those who have studied a
little bit).
Can you explicate this?
Usually, notions like necessity, certainty, probability 1, etc. are
assumed to obey []p -> p. This also implies []~p -> ~p, and thus
p -> <>p, and so, if we have []p -> p, we have []p -> <>p (in classical
normal modal logics).
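Spelled out, for readers following along, the step uses only the duality <>p = ~[]~p and contraposition:

\begin{align*}
\Box\neg p \to \neg p  &\qquad \text{(instance of } \Box q \to q \text{ with } q := \neg p\text{)}\\
p \to \neg\Box\neg p   &\qquad \text{(contraposition)}\\
p \to \Diamond p       &\qquad \text{(definition } \Diamond p := \neg\Box\neg p\text{)}\\
\Box p \to \Diamond p  &\qquad \text{(chaining with } \Box p \to p\text{)}
\end{align*}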
Then provability, and even more so "formal provability", was considered
as *the* closest notion to knowledge we could hope for,
Something a mathematician or logician might dream of, but not a mistake
any physicist would ever make. Knowledge is correspondence with
reality, not deducibility from axioms.
Which reality?
Since Gödel we distinguish correspondence with the arithmetical
reality from deducibility from axioms. We know that *all* effective
theories can only scratch the arithmetical truth.
You seem to identify reality with physical reality. That is a strong
physicalist axiom. When doing metaphysics with the scientific method,
especially on the mind-body problem, it is better to be more neutral.
and so it came as a shock that no ("rich enough") theory can prove
its own consistency. This means, for example, that neither ZF nor PA
can prove ~[]f, that is, []f -> f,
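To spell out why ~[]f and []f -> f amount to the same thing here (f standing for a refuted sentence such as 0 = 1, so the theory proves ~f):

\[
(\Box f \to f) \;\leftrightarrow\; (\neg\Box f \lor f) \;\leftrightarrow\; \neg\Box f \qquad (\text{since } \vdash \neg f),
\]

so consistency, ~[]f, is provably equivalent to []f -> f, and the second incompleteness theorem says the (consistent, rich enough) theory proves neither.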
This seems to me to rely incorrectly on []f -> f being equivalent to
~f -> ~[]f and ~f = t. I know that is standard first-order logic, but in
this case we're talking about the whole infinite set of expressible
propositions. It's not so clear to me that you can rely on the law
of the excluded middle over this set.
We limit ourselves to correct machines, by construction. It does not
matter how they are implemented below their substitution level; what
matters is only what correct machines can prove about themselves at their
correct substitution level, and any higher-order correct 3p description.
That is all we need to extract the "correct physics". No need to
interview machines which believe they are Napoleon. I mean it is
premature to invoke them in the fabric of the physical reality
(even though it is unclear what part a possible lie plays here,
cf. Descartes' evil demon).
and so such machines cannot prove []p -> p in general, and provability,
for them, cannot work as a predicate for knowledge; it is at most
a (hopefully correct) belief.
Now, this also makes it possible to retrieve a classical notion of
knowledge, by defining, for each arithmetical proposition p, the
knowledge of p by []p & true(p).
I'm not impressed.
You should be!
The beauty is that "Bp & p" leads to an explanation of why the machine
gets stuck in infinities when trying to know who she is. From the
machine's view, this looks quite like a soul, or subject of
consciousness, which "of course" cannot justify any 3p account of itself.
From her point of view, the doppelganger is a construction which
proves that she is not a machine, and that the doppelganger is an
impostor! The beauty of "Bp & p" is that it says "no" to the doctor!
The machine's elementary first insight is that she is no machine at
all, and she is right from that point of view, as G* can justify.
Unfortunately, we cannot define true(p) in arithmetic (Tarski), nor
can we define knowledge at all (Thomason, Scott-Montague). But for
each arithmetical p, we can still mimic knowledge by []p & p,
Since you can't define knowledge, how can you say you can mimic it?
All (serious) philosophers agree that knowledge is well axiomatized by
the modal logics T and S4 (T + Bp -> BBp).
"Bp & p", applied to the K4 reasoner (close to full self-referential
ability), gives S4, and it is called "the standard theory of knowledge"
both by scholars in ancient philosophy and in artificial intelligence (a
rare agreement in philosophy).
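For reference, the standard presentations of the systems just named, as axioms over classical propositional logic with necessitation and the distribution axiom K (listed only for readers who want them spelled out):

\begin{align*}
\mathrm{K}  &: \Box(p \to q) \to (\Box p \to \Box q)\\
\mathrm{T}  &: \mathrm{K} + (\Box p \to p)\\
\mathrm{K4} &: \mathrm{K} + (\Box p \to \Box\Box p)\\
\mathrm{S4} &: \mathrm{T} + (\Box p \to \Box\Box p)
\end{align*}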
What the machine cannot do is define itself as a knower. That is
why she will be unable to recognize herself at any substitution level,
and that is why she will have to trust the doctor, or pray, because
nobody can tell her who she is, nor which computations support her in
arithmetic.
for each p, and this leads to a way to associate a knower canonically
with the machine-prover. It obeys a knowledge logic (with
[]p -> p becoming trivial). That logic is captured soundly and
completely by the logic S4Grz (already described in many posts).
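A toy illustration of that triviality, as a small Python sketch of a finite Kripke model (my own illustration, not Bruno's arithmetical construction): with a provability-style box, []q -> q can fail at a world, while K q := []q & q satisfies K q -> q everywhere by construction.

# Minimal Kripke-model sketch (illustrative only).
worlds = ['a', 'b']
R = {('a', 'b')}                 # 'a' sees the dead-end world 'b'; irreflexive, transitive
val = {'q': {'b'}}               # the atom q holds only at 'b'

def box(p, w):                   # []p at w: p holds at every world accessible from w
    return all(p(v) for (u, v) in R if u == w)

q = lambda w: w in val['q']

for w in worlds:
    bq = box(q, w)
    kq = bq and q(w)             # K q := []q & q
    print(w, ' []q:', bq, ' []q->q:', (not bq) or q(w), ' Kq->q:', (not kq) or q(w))

# At 'a': []q holds (its only successor satisfies q) but q fails, so []q -> q fails;
# K q -> q holds at both worlds, which is the sense in which []p -> p "becomes trivial".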
Similarly, the logic G of arithmetical self-reference cannot be a
logic of probability one, due to the fact that []p does not imply
<>p (which would again contradict incompleteness). In the Kripke
semantics this entails that each world can access a cul-de-sac
world in which []p is always true, even though there are no worlds
accessible to verify such facts.
But why should we accept that as a good model of inference? It does
not make intuitive sense to say []p is true in some world where p is
neither true nor even possible. What would be an example of such a
world, given a proposition like "7 is prime"?
"7 si prime" is true in all worlds/models-of-löbian-machine. But
"provable(0 = 1)" is true only in the cul-de-sac world (corresponding
to alterated state of consciousness/non-standard model (say)). To
avoid them we have to define a new box [im]p =[]p & <>t; to ensure the
"cup of coffee" certainty in the WM duplication experience.
We get a logic of probability by ensuring that "we are not in a
cul-de-sac world",
But isn't that equivalent to saying "anything is possible"?
On the contrary. It is a way to avoid "anything is necessary". In a
cul-de-sac world, everything is necessary, and nothing is possible. Bf
is verified "trivially" in the end-worlds, because they can't access
any world. (alpha R beta -> beta verifies f) is always true, because
alpha R beta is always false when alpha is an end-world. Of course,
end-worlds are consistent: from Bf you can't derive f.
which is the main default assumption needed in the probability calculus.
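To make the end-world point concrete, here is a small Python sketch on a two-world frame (my own illustration, not the arithmetical semantics): at the cul-de-sac world Bf is vacuously true and <>t is false, so requiring <>t is exactly what excludes such worlds.

# Two-world frame: alpha sees beta; beta is a cul-de-sac (end-world).
R = {'alpha': ['beta'], 'beta': []}

def box(p, w):                   # []p: p holds at every accessible world (vacuous at beta)
    return all(p(v) for v in R[w])

def dia(p, w):                   # <>p: p holds at some accessible world (never at beta)
    return any(p(v) for v in R[w])

f = lambda w: False              # the constant false
t = lambda w: True               # the constant true

for w in R:
    print(w, ' []f:', box(f, w), ' <>t:', dia(t, w), ' []f & <>t:', box(f, w) and dia(t, w))

# alpha: []f False, <>t True;   beta: []f True (trivially), <>t False.
# So Bf is verified only at the end-world, and the conjunct <>t rules that world out.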
In that case, you can justify, for example, that when you are
duplicated in Washington and Moscow, the probability of getting a
cup of coffee is one, when the protocol ensures the offering of
coffee at both places: []p in that case means "p is true in all
accessible worlds, and there is at least one".
So, by incompleteness, []p & <>t provides a "probability one"
notion, not reducible to simple provability ([]p).
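A toy rendering of the coffee example in Python (my own sketch; the 'continuations' dictionary and its fact sets are made up for illustration), reading certainty of p as "p holds in all accessible continuations, and there is at least one":

# The two continuations of the candidate after the W-M duplication, with the
# facts true in each (the protocol offers coffee in both cities).
continuations = {'Washington': {'coffee', 'sees Washington'},
                 'Moscow':     {'coffee', 'sees Moscow'}}

def certain(p):                  # []p & <>t: p everywhere accessible, and somewhere accessible
    return bool(continuations) and all(p in facts for facts in continuations.values())

print(certain('coffee'))         # True  -> "probability one" for the cup of coffee
print(certain('sees Moscow'))    # False -> no certainty about which city is experienced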
Then, by step 8, we are in arithmetic (in the model of arithmetic,
"model" in the logician's sense), and we translate computationalism
by restricting the accessible "p" to the leaves of the universal
dovetailing. By Gödel+Church-Turing-Kleene we can represent those
"leaves" by the semi-computable predicates: the sigma_1 sentences.
When we do this, we have to add the axiom "p -> []p" to G. This gives
G1 (and G1*). That this is enough follows from a proof by Visser. For the
logics of the nuances brought by incompleteness, like []p & p and
[]p & <>t, it gives the logic S4Grz1 and the logic Z1*. Then we
can extract an arithmetical interpretation of intuitionistic logic
from S4 (in the usual, well-known way), and, a bit less well known, we
can extract a minimal quantum logic from B, and then from Z1*, which
is very close to B, using a "reverse" Goldblatt transform (as
Goldblatt showed that the modal logic B (main axioms []p -> p, p ->
[]<>p, and *not* []p -> [][]p) is a modal version of minimal quantum
logic).
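For readers keeping track, the characteristic axioms of the remaining systems mentioned, in their standard presentations (the "1" versions add p -> []p for p sigma_1, as just described):

\begin{align*}
\mathrm{G} &: \mathrm{K} + \Box(\Box p \to p) \to \Box p && \text{(L\"ob's axiom: self-referential provability)}\\
\mathrm{S4Grz} &: \mathrm{S4} + \Box(\Box(p \to \Box p) \to p) \to p && \text{(the logic of } \Box p \wedge p\text{)}\\
\mathrm{B} &: \mathrm{T} + (p \to \Box\Diamond p) && \text{(Goldblatt's modal version of minimal quantum logic)}
\end{align*}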
I don't see that you have explicated negative amplitude of
probability:
Can you build a quantum logic with only positive amplitudes of
probability?
Answer: yes, that does exist for dimension 2, but with Gleason's
theorem, quantum logic + dimension 3 or more entails "negative amplitude
of probability".
Bruno
"Eventually, the "negative amplitude of probability" comes from the
self-referential constraints (the logic of []p & <>p on p sigma_1,
for those who have studied a little bit). "
Brent
Note that here "[]" and "<>" are arithmetical predicates. We do not
assume more than Q, and use only internal interpretabilities of the
observer-machines.
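Concretely (the standard arithmetization, recalled here only as a reminder): for a machine/theory such as PA,

\[
\Box p \;:=\; \mathrm{Prov}(\ulcorner p \urcorner) \;=\; \exists y\, \mathrm{Proof}(y, \ulcorner p \urcorner), \qquad \Diamond p \;:=\; \neg\Box\neg p,
\]

with Proof(y, x) the primitive recursive proof predicate, so []p is itself a sigma_1 arithmetical sentence.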
This is explained in most of my papers, but the details are in the
long French text "Conscience et Mécanisme".
Bruno
Brent
http://iridia.ulb.ac.be/~marchal/