On 19 May 2015, at 15:53, Jason Resch wrote:
On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou <[email protected]> wrote:
On 19 May 2015 at 14:45, Jason Resch <[email protected]> wrote:
>
>
> On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou <[email protected]> wrote:
>>
>> On 19 May 2015 at 11:02, Jason Resch <[email protected]> wrote:
>>
>> > I think you're not taking into account the level of the
>> > functional substitution. Of course functionally equivalent silicon
>> > and functionally equivalent neurons can (under functionalism) both
>> > instantiate the same consciousness. But a calculator computing 2+3
>> > cannot substitute for a human brain computing 2+3 and produce the
>> > same consciousness.
>>
>> In a gradual replacement the substitution must obviously be at a
>> level sufficient to maintain the function of the whole brain.
>> Sticking a calculator in it won't work.
>>
>> > Do you think a "Blockhead" that was functionally equivalent to
>> > you (it could fool all your friends and family in a Turing test
>> > scenario into thinking it was in fact you) would be conscious in
>> > the same way as you?
>>
>> Not necessarily, just as an actor may not be conscious in the same
>> way as me. But I suspect the Blockhead would be conscious; the
>> intuition that a lookup table can't be conscious is like the
>> intuition that an electric circuit can't be conscious.
>>
>
> I don't see an equivalence between those intuitions. A lookup table
> has a bounded and very low degree of computational complexity: every
> query is answered in constant time.
>
> While the table itself may have an arbitrarily high information
> content, what in the software of the lookup table program is there to
> appreciate/understand/know that information?
Understanding emerges from the fact that the lookup table is immensely
large. This could be wrong, but I don't think it is obviously less
plausible than understanding emerging from a Turing machine made of
tin cans.
The lookup table is intelligent, or at least offers the appearance of
intelligence, but it takes maximal advantage of the space-time
trade-off: http://en.wikipedia.org/wiki/Space–time_tradeoff
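The space-time trade-off Jason points to can be sketched in a few lines of Python (a hypothetical toy, not anything from the thread): the same function realised once by on-demand computation and once by a precomputed table that answers every in-range query in constant time.

```python
# Toy sketch of the space-time trade-off: the same function realised
# by on-the-fly computation versus a precomputed lookup table.
# (Illustrative example; the names and the domain bound are invented.)

def add_computed(a: int, b: int) -> int:
    """Compute the answer when asked: minimal space, some work per query."""
    return a + b

# The "lookup table" strategy: spend space up front so that every
# in-range query is answered in constant time with no computation.
DOMAIN = range(100)
ADD_TABLE = {(a, b): a + b for a in DOMAIN for b in DOMAIN}

def add_looked_up(a: int, b: int) -> int:
    """Answer any in-domain query by a single dictionary lookup."""
    return ADD_TABLE[(a, b)]

print(add_computed(2, 3), add_looked_up(2, 3))  # both give 5
```

The table occupies 10,000 entries to cover a domain the three-line function covers for free, which is the trade the thread is arguing about.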
The tin-can Turing machine is unbounded in its potential
computational complexity; there's no reason to be a bio- or silico-
chauvinist against it. A lookup table, however, by definition has
near-zero computational complexity and no retained state.
But it is counterfactually correct over a large range. Of course, it
would have to be infinite to be genuinely counterfactually correct.
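The point that a finite table is only counterfactually correct over its tabulated range can be made concrete with a small hypothetical sketch: the table agrees with the computed function inside its domain but has no answer at all outside it, which is why only an infinite table would be genuinely counterfactually correct.

```python
# Toy illustration: a finite lookup table versus the function it
# tabulates. Inside the tabulated range they agree; outside it the
# table simply has no answer. (Invented example, not from the thread.)

def square(n: int) -> int:
    return n * n

SQUARES = {n: square(n) for n in range(10)}  # finite table

assert SQUARES[7] == square(7)  # counterfactually correct in range

try:
    SQUARES[1_000_000]  # a counterfactual input the table never saw
except KeyError:
    print("finite table: no answer outside its range")
```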
Does an ant trained to perform the lookup table's operation become
more aware when placed in a vast library than when placed on a small
bookshelf to perform the identical function?
Aren't you making Searle's level confusion? The consciousness (if
there is one) is the consciousness of the person incarnated in the
program. It is not the consciousness of the low-level processor, any
more than it is that of the physical substrate supporting the ant and
the table.
Again, with comp there is never any problem with any of this. The
consciousness is an immaterial attribute of an immaterial program's or
machine's soul, which is defined exclusively by a class of true number
relations.
The task of a 3p machine consists only in associating that
consciousness with your local reality; but the body of the machine, or
whatever 3p description you can associate with the machine, is not
conscious and, to be sure, does not even exist as such.
I am aware this is hard to swallow, but there is no contradiction (so
far). And to keep comp while avoiding attributing a mind, or worse a
"partial" mind, to people without brains, or to a movie (which handles
only very simple computations (projections)), I don't see any other
option (except fake magic).
It is perhaps helpful to see that this reversal turns a theory like
Robinson arithmetic into a TOE, and to start directly with it.
In that case everything we deal with is defined in terms of
arithmetical formulas, that is, in terms of 0, s, + and *.
The handling of the difference between objects and their descriptions
is made explicit through the coding, or Gödel numbering, or
programming, of the objects concerned.
For example, with the combinators, the number 0 is sometimes defined
by the combinator SKK. The expression "SKK" can in turn be represented
by a Gödel number (in many different ways), attributing to 0 a rather
big number representing its definition, and this distinguishes 0
clearly from its representation. Proceeding in this way, we avoid the
easy confusions between the object level and the metalevel, and we can
even mix them in a clean way, as the metalevel embeds itself at the
object level (which is what makes Gödel's and Löb's arithmetical
self-reference possible in the first place).
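A toy Gödel numbering makes the object/metalevel split concrete. The prime-power scheme below is one standard choice among the "many different ways" mentioned above (an illustrative sketch, not Bruno's own construction): the expression "SKK" defining 0 is coded by a rather big number, quite distinct from the object 0 it defines.

```python
# Toy prime-power Gödel numbering: the i-th symbol with character code
# c contributes a factor p_i ** c, so distinct expressions get distinct
# numbers and the coding can be undone by factorisation.

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # enough for short expressions

def godel_number(expr: str) -> int:
    n = 1
    for i, ch in enumerate(expr):
        n *= PRIMES[i] ** ord(ch)
    return n

# The combinator expression "SKK" (one definition of 0) is coded by a
# rather big number -- clearly distinct from the object 0 it defines.
code_of_skk = godel_number("SKK")
assert code_of_skk != 0
assert godel_number("SK") != godel_number("KS")  # distinct expressions, distinct codes
```

Unique factorisation is what lets the metalevel (codes of expressions) live among the objects (numbers) without the two being confused.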
I would have believed this almost refutes comp, were it not for the
quantum-Everett confirmation of that admittedly shocking
self-diffraction (which is actually nothing compared to the one
involving information elimination or dissociation, needed for a
"semantics of first-person dying" in that self-diffracting reality).
We can't know the truth, but we can study the consequences of comp on
this question, and what machines can say about it: what they can
justify or not justify, express or not express, hope or fear, etc.
Bruno
http://iridia.ulb.ac.be/~marchal/
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.