At this moment this is true.  Another question is whether the computer could
become intelligent enough. It is not easy to admit it, but the belief in
the possibility of making something intelligent existed well before
computers. Since the industrial revolution, some people believed in
the possibility of building intelligent automata out of nothing but steam,
wheels and wires. This seems naive, if not stupid, now, but the theoretical
possibility still holds.

I wonder how far the theory is from reality in the case of computers.
Up to now, even the most pessimistic predictions have been ridiculous.
The gap between a computer and a bacterium is immense, galactic. This is
inherent to the limitations of any rational design in comparison with
the abundance and near-omniscience of natural selection (which I
explained somewhere else).

Moreover, a natural design is almost impossible to reverse engineer to
the last detail, since it does not follow the rules of "god design"
but the rules of "limited design" (explained somewhere else).

2012/9/3 Roger Clough <rclo...@verizon.net>:
> Hi benjayk
>
> Computers have no intelligence -- not a whit, since intelligence requires
> the ability to choose, choice requires awareness or Cs, which in turn requires
> an aware subject. Thus only living entities can have intelligence.
> A bacterium thus has more intelligence than a computer,
> even the largest in the world.
>
>
> Roger Clough, rclo...@verizon.net
> 9/3/2012
> Leibniz would say, "If there's no God, we'd have to invent him
> so that everything could function."
>
> ----- Receiving the following content -----
> From: benjayk
> Receiver: everything-list
> Time: 2012-09-03, 10:12:46
> Subject: Re: Simple proof that our intelligence transcends that of computers
>
> Bruno Marchal wrote:
>>
>>
>> On 25 Aug 2012, at 15:12, benjayk wrote:
>>
>>>
>>>
>>> Bruno Marchal wrote:
>>>>
>>>>
>>>> On 24 Aug 2012, at 12:04, benjayk wrote:
>>>>
>>>>> But this avoids my point that we can't imagine that levels, context
>>>>> and
>>>>> ambiguity don't exist, and this is why computational emulation does
>>>>> not mean
>>>>> that the emulation can substitute for the original.
>>>>
>>>> But here you make a level confusion, as I think Jason tried to point out.
>>>>
>>>> A similar one to the one made by Searle in the Chinese Room.
>>>>
>>>> As emulator (computing machine) Robinson Arithmetic can simulate
>>>> exactly Peano Arithmetic, even as a prover. So for example Robinson
>>>> arithmetic can prove that Peano arithmetic proves the consistency of
>>>> Robinson Arithmetic.
>>>> But you cannot conclude from that that Robinson Arithmetic can prove
>>>> its own consistency. That would contradict Gödel II. When PA uses the
>>>> induction axiom, RA might just say "huh", and apply it for the sake
>>>> of
>>>> the emulation without any inner conviction.
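
A compact formalization of the point above (my notation, not from the quoted
post): write Con(RA) for the arithmetized consistency statement of Robinson
Arithmetic and Prov_PA for PA's provability predicate. Then

\[ \mathrm{RA} \vdash \mathrm{Prov}_{\mathrm{PA}}\big(\ulcorner \mathrm{Con}(\mathrm{RA}) \urcorner\big) \qquad \text{yet, by Gödel II,} \qquad \mathrm{RA} \nvdash \mathrm{Con}(\mathrm{RA}). \]

RA can check that PA's proof exists without thereby acquiring PA's conviction.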
>>> I agree, so I don't see how I confused the levels. It seems to me
>>> you have
>>> just stated that Robinson indeed cannot substitute Peano Arithmetic,
>>> because
>>> RA's emulation of PA only makes sense with respect to PA (in cases
>>> where PA
>>> does a proof that RA can't do).
>>
>> Right. It only makes first-person sense to PA. But then RA has
>> succeeded in making PA alive, and PA could a posteriori realize that
>> the RA level was enough.
> Sorry, but it can't. It can't even abstract itself out to see that the RA
> level "would be" enough.
> I see you doing this all the time; you take some low level that can be made
> sense of by something transcendent of it and then claim that the low level
> is enough.
>
> This is precisely the claim that I don't understand at all. You say that we
> only need natural numbers and + and *, and that the rest emerges from that
> as the 1-p viewpoint of the numbers. Unfortunately the 1-p viewpoint itself
> can't be found in the numbers, it can only be found in what transcends the
> numbers, or what the numbers really are / refer to (which is also completely
> beyond our conception of numbers).
> That's the problem with Gödel as well. His unprovable statement about
> numbers is really a meta-statement about what numbers express that doesn't
> even make sense if we only consider the definition of numbers. He really
> just shows that we can reason about numbers and with numbers in ways that
> can't be captured by numbers (but in this case what we do with them has
> little to do with the numbers themselves).
>
> I agree that computations reflect many things about us (infinitely many
> things, even), but we still transcend them infinitely. Strangely you agree
> for the 1-p viewpoint. But given that's what you *actually* live, I don't
> see how it makes sense to then assert that there is a meaningful 3-p point
> of view where this isn't true. This "point of view" is really just an
> abstraction occurring in the 1-p point of view.
>
>
> Bruno Marchal wrote:
>>
>> Like I converse with Einstein's brain's book (à la Hofstadter), just
>> by manipulating the page of the book. I don't become Einstein through
>> my making of that process, but I can have a genuine conversation with
>> Einstein through it. He will know that he has survived, or that he
>> survives through that process.
> On some level, I agree. But not far from the level that he survives in his
> quotes and writings.
>
>
> Bruno Marchal wrote:
>>
>>> That is, it *needs* PA to make sense, and so
>>> we can't ultimately substitute one with the other (just in some
>>> relative
>>> way, if we are using the result in the right way).
>>
>> Yes, because that would be like substituting one person for another,
>> on the pretext that they both play the same role. But comp substitutes the
>> lower process, not the high-level one, which can indeed be quite
>> different.
> Which assumes that the world is divided into low-level processes and
> high-level processes.
>
>
> Bruno Marchal wrote:
>>
>>> It is like the word "apple" cannot really substitute a picture of an
>>> apple
>>> in general (still less an actual apple), even though in many contexts
>>> we can
>>> indeed use the word "apple" instead of using a picture of an apple
>>> because
>>> we don't want to be shown how it looks, but just know that we talk
>>> about
>>> apples - but we still need an actual apple or at least a picture to
>>> make
>>> sense of it.
>>
>> Here you make an invalid jump, I think. If I play chess on a computer,
>> and make a backup of it, and then continue on a totally different
>> computer, you can see that I will be able to continue the same game
>> with the same chess program, even though the computer is totally
>> different. I just have to re-implement it correctly. Same with comp.
>> Once we bet on the correct level, functionalism applies to that level
>> and below, but not above (unless of course I am willing to have
>> some change in my consciousness, like amnesia, etc.).
>>
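
A minimal sketch of the backup-and-continue idea (hypothetical Python, not
from the quoted post): what survives the move to a different machine is a
description of the game at the chosen level, and any correct re-implementation
of the rules can resume from it.

import json

# The "level" preserved across machines: the game description, not the hardware.
game_state = {"moves": ["e4", "e5", "Nf3", "Nc6"], "to_move": "white"}

# Backup on machine A: serialize the state at that level.
backup = json.dumps(game_state)

# Restore on machine B: a totally different computer (even a different chess
# program) can continue the same game from the same description.
restored = json.loads(backup)
restored["moves"].append("Bb5")  # play continues as before
restored["to_move"] = "black"

print(restored["moves"])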
> Your chess example only works because chess is already played on a computer.
> Yes, you can often substitute one computer for another (though even this
> often comes with problems), just as you can practically substitute apple
> juice with orange juice as a healthy morning drink. You still can't
> substitute it with fuel though, no matter what you do with it.
>
>
> Bruno Marchal wrote:
>>
>> With comp, to make things simple, we are high level programs. Their
>> doing is 100% emulable by any computer, by definition of programs and
>> computers.
> OK, but in this discussion we can't assume COMP. I understand that you take
> it for granted when discussing your paper (because it only makes sense in
> that context), but I don't take it for granted, and I don't consider it
> plausible, or honestly even meaningful.
>
>
> Bruno Marchal wrote:
>>
>>> I don't consider it false either, I believe it is just a question of
>>> what
>>> level we think about computation.
>>
>> This I don't understand. Computability does not depend on any level
>> (unlike comp).
> Assuming the Church-Turing thesis ;).
>
> In my opinion that's precisely where it goes wrong. It wants to abstract
> from levels, but really just trivializes computation in the process
> (reducing it to the lowest level aspect of computation).
>
> I think what a computer computes does only make sense in the context of the
> machine. E.g. if one Turing machine emulates another, the emulation only makes
> sense if we consider the Turing machine that is emulated. Otherwise we can't
> state that it emulates anything (because its computation doesn't have to be
> interpreted as an emulation).
> This is also an argument against CT: If we take it to be true, the notion of
> emulation ceases to make sense (because emulation is not an absolute
> computational notion, but relates one computation to another).
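
A toy illustration of that relativity (my example, not from the post): the
same bit string is a number under one decoding and a short halting computation
under another, and neither reading is intrinsic to the bits themselves.

tape = [1, 0, 1, 1, 0, 0, 1, 0]

# Reading 1: the bits as an unsigned integer.
as_number = int("".join(str(b) for b in tape), 2)  # 178

# Reading 2: pairs of bits as instructions for a toy counter machine.
OPS = {(0, 0): "halt", (0, 1): "inc", (1, 0): "dec", (1, 1): "inc"}
counter = 0
for i in range(0, len(tape), 2):
    op = OPS[(tape[i], tape[i + 1])]
    if op == "halt":
        break
    counter += 1 if op == "inc" else -1

print(as_number, counter)  # 178 0 -- one tape, two unrelated "meanings"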
>
> Even the computation 1+1=2 doesn't make sense apart from context. What do
> one thing and two things even mean if we try to completely abstract from
> things? Nothing.
>
>
> Bruno Marchal wrote:
>>
>>>
>>>
>>> Bruno Marchal wrote:
>>>>
>>>> It is not a big deal, it just means that my ability to emulate
>>>> Einstein
>>>> (cf Hofstadter) does not make me into Einstein. It only makes me able
>>>> to converse with Einstein.
>>> Apart from the question of whether brains can be emulated at all
>>> (due to
>>> possible entanglement with their own emulation, I think I will write
>>> a post
>>> about this later), that is still not necessarily the case.
>>> It is only the case if you know how to make sense of the emulation.
>>> And I
>>> don't see that we can assume that this takes less than being Einstein.
>>
>> No doubt for the first person sense, that's true, even with comp. You
>> might clarify a bit more your point.
> Apparently you know what I mean if you say it's true from the first person.
> But then considering that this is what we *actually experience*, I don't see
> how it makes any sense to try to abstract from that (postulating a "3-p
> perspective").
>
> In what way does one thing substitute for another if the correct
> interpretation of the substitution actually requires the original? It is like saying
> "No, you don't need the calculator to calculate 24.3^12. You can substitute
> it with pen and paper, where you write down 24.3^12 = X and then insert the
> result of the calculation (using your calculator) as X."
> If COMP does imply that interpreting a digital Einstein needs a real
> Einstein (or more), then it contradicts itself (because in this case we can't
> *always* say YES doctor, because then there would be no original left to
> interpret the emulation).
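
For the record, the value the calculator hands back in that example (an easy
check, since 24.3 = 3^5/10):

\[ 24.3^{12} = \frac{3^{60}}{10^{12}} \approx 4.24 \times 10^{16}. \]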
> Really it is quite a simple point. If you substitute the whole universe with
> an emulation (which is possible according to COMP) then there is nothing
> left to interpret the emulation. We couldn't even say whether it is an
> emulation or not (because a computation itself is not an emulation, just
> its relation with the original). If there were something outside the universe
> to interpret the simulation, then this would be the level on which we can't
> be substituted (and if this were substituted, then the level used to
> interpret this substitution couldn't be substituted, etc....).
> In any case, there is always a non-computational level, at which no digital
> substitution is possible - and we would be wrong to say YES with regards to
> that part of us, unless we consider that level "not-me" (and this doesn't
> make any sense to me).
>
> benjayk
