On 12 Mar 2015, at 17:22, John Clark wrote:
On Wed, Mar 11, 2015 at 6:50 PM, meekerdb <[email protected]>
wrote:
>>> I am claiming that when I receive anesthesia I become both
unintelligent and non-conscious. I am also claiming that when any of
my fellow human beings receive anesthesia they behave
unintelligently, but I can make no conclusion of any sort regarding
the effect the drug has on their consciousness UNLESS I assume that
the Turing Test is valid and Darwin's Theory of Evolution is true.
>> So you agree that we do judge whether or not beings are conscious
more or less accurately
> I think so, but that's only because every human being who has ever
lived has implicitly assumed that the Turing Test is valid; it's
only when it's applied to computers that people suddenly want to
change the rules of the game.
The Turing test is a good criterion for consciousness and intelligence.
It concerns only the easy part of the consciousness problem, and it
assumes some "real" universal number (the physical universe). It is
the best test FAPP (for all practical purposes), but the worst one for
explaining consciousness and matter.
>> and so a theory about consciousness that predicts consciousness in
some situation might be empirically invalidated.
> A theory of consciousness can be proven false *PROVIDED* you assume,
as I do, that the Turing Test is valid and Darwin's Theory of
Evolution is true; then a theory of consciousness is equivalent to a
theory of intelligence.
I can agree, but then you are deluded if you think this has something
to do with competence; intelligence becomes what makes it possible to
acquire competence, to develop it, and to adapt it.
Consciousness is 1-self knowledge. Self-consciousness is when you
distinguish the 1-self from the 3-self.
It has nothing to do with what a machine can represent, unlike its
beliefs. You might be extending intelligence too far, coming close to
the other possible confusion: between consciousness and
self-consciousness.
> I can understand why armchair consciousness theorists are reluctant
to make that equivalence; it makes their job far, far more difficult
because good intelligence theories are HARD. But if that assumption
is not made then no theory of consciousness is scientific.
The universal machines already refute this: they can justify their
own incompleteness phenomenon. G proves <>t -> ~[]<>t.
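(For the reader who wants to trace that theorem: it is a short
consequence of Löb's axiom, which is part of G. A sketch, in the same
notation, with the intermediate wording being a paraphrase:

  []([]f -> f) -> []f     Löb's axiom, with the sentence taken to be f
  []<>t -> []f            since []f -> f is propositionally ~[]f, i.e. <>t
  <>t -> ~[]<>t           by contraposition

The middle line is the formalized second incompleteness theorem, and
it is the same []<>t -> []f that appears below.)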
Consciousness is a belief in a reality, be it only a pain, or a
physical universe, or whatever bigger or simpler.
By Gödel's completeness theorem, being consistent amounts to being
satisfied by some reality (having a model).
Intelligence is more emotional; it is a state of mind, closer to
conscience than to consciousness. It might need nothing more than a
loving mother, or having enough attention after birth, and not too
much: it is a complex art, not made easier by long stories and
collections of cultural prejudices.
I have two theories of intelligence:
The first one is based on reading the arithmetical ~[]~t, i.e. <>t
(with [] being Gödel's beweisbar, provability), as "intelligent". Its
negation, []f, is "stupidity". You can see then that stupidity comes
mainly either from the belief in one's own intelligence, or from the
belief in one's own stupidity.
G proves []<>t -> []f (and G* proves []f -> f).
G* proves [][]f -> []f.
Note that those machines which believe that they are stupid are much
less stupid than the machines which believe that they are intelligent.
Why?
Because, for the machines which believe that they are stupid, only God
knows that they are stupid ([][]f -> []f belongs to G* but not to G,
so the machine itself cannot justify it).
A machine which believes []^n f is less stupid than a machine which
believes []^m f, if m is less than n.
As for those stupid machines which believe that they are intelligent
([]<>t): all of them (once Löbian) know, sooner or later, that they
are stupid, given that they can prove <>t -> ~[]<>t.
Read "believe" by "rationally justified", and <>t by NOT Beweisbar ("0
= 0") in arithmetic. Then G axiomatized what the machine cvan
rationally justify, and G* what is true. G* \ G axiomatizes what is
true about the machine, but that the (consistent) machine cannot
rationally justified.
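If one wants to check the purely modal side of these claims
mechanically, here is a small Python sketch (an illustration under
standard assumptions, not part of the argument). It relies on the
usual fact that G is sound and complete for finite transitive
irreflexive Kripke frames, and the formulas above contain no
propositional variables, so truth at a world depends only on the
frame. The search is bounded to tiny frames, so "no countermodel" is
only evidence, while a countermodel is conclusive: the two formulas
attributed to G should come out valid on every frame tried, and
[][]f -> []f should get a two-world countermodel, consistent with it
belonging to G* \ G.

from itertools import product

# Evaluate a variable-free modal formula at world w of the frame
# ({0, ..., n-1}, R), with R a set of ordered pairs.
def holds(phi, R, w, n):
    if phi == 't':
        return True
    if phi == 'f':
        return False
    op = phi[0]
    if op == 'not':
        return not holds(phi[1], R, w, n)
    if op == 'imp':
        return (not holds(phi[1], R, w, n)) or holds(phi[2], R, w, n)
    if op == 'box':   # []A : A holds at every accessible world
        return all(holds(phi[1], R, v, n) for v in range(n) if (w, v) in R)
    if op == 'dia':   # <>A : A holds at some accessible world
        return any(holds(phi[1], R, v, n) for v in range(n) if (w, v) in R)
    raise ValueError(phi)

# All transitive irreflexive relations on {0, ..., n-1}.
def frames(n):
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    for bits in product([False, True], repeat=len(pairs)):
        R = {p for p, b in zip(pairs, bits) if b}
        if all((a, d) in R for (a, b) in R for (c, d) in R if b == c):
            yield R

# Look for a countermodel on frames with at most max_n worlds.
def countermodel(phi, max_n=4):
    for n in range(1, max_n + 1):
        for R in frames(n):
            for w in range(n):
                if not holds(phi, R, w, n):
                    return (n, sorted(R), w)
    return None

con = ('dia', 't')                                             # <>t
tests = [('<>t -> ~[]<>t', ('imp', con, ('not', ('box', con)))),
         ('[]<>t -> []f', ('imp', ('box', con), ('box', 'f'))),
         ('[][]f -> []f', ('imp', ('box', ('box', 'f')), ('box', 'f')))]

for name, phi in tests:
    cm = countermodel(phi)
    print(name, 'no countermodel found' if cm is None else f'countermodel: {cm}')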
And that theory, taking into account the nuance brought by Theaetetus
for the first person knowledge, and the nuance brought by the FPI
(first person indeterminacy), gives the logic of the observable (the
locally certain propositions), like drinking a cup of coffee (in the
preceding thought experiment), and that is testable.
The observable is more like intelligence + some reality, and the
sensible is more like intelligence + some reality + some truth.
Bruno
http://iridia.ulb.ac.be/~marchal/
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.