>> We haven't proved our brain is computational in nature; if we had, we would
>> have proven computationalism to be true... and that's not the case. Maybe our
>> brain has some non-computational shortcut for that, maybe that's why AI is
>> not possible, maybe our brain has this "realness" ingredient that
>> computations alone lack. I'm not saying AI is not possible, I'm just saying
>> we haven't proved that "our brains contain it".
I agree. The workings of the brain are still not understood well enough --
especially at the level of granularity and fine detail needed when it is viewed
as a dynamic, ever-changing system -- to clearly map out, step by step, how
consciousness, self-awareness and the other salient qualia associated with
sentience and intelligence come to be inside it. Sure, we are learning things
about the brain and about the neurochemical mechanisms of memory and
perception, and we know a lot more than we did even ten years ago, but still --
I would argue -- we do not know enough to be able to say we can map the dynamic
process by which the mind operates and arises inside the brain. It is quite
possible that we will discover, in the end, that we are massively parallel AI
entities -- that our minds are fantastic computing machines -- but until we
have fully mapped the dynamic processes and can describe how they work, and how
they work with each other to form the very large scale distributed systems that
surely are required for intelligence, it is best, I believe, to refrain from
the temptation of positivism.
-Chris
 

________________________________
 From: Quentin Anciaux <allco...@gmail.com>
To: everything-list@googlegroups.com 
Sent: Wednesday, August 21, 2013 2:42 PM
Subject: Re: When will a computer pass the Turing Test?
  
2013/8/21 Telmo Menezes <te...@telmomenezes.com>

On Wed, Aug 21, 2013 at 2:39 PM, John Clark <johnkcl...@gmail.com> wrote:
>> Telmo Menezes <te...@telmomenezes.com>
>>
>>>>
>>>> >> So if the slave AI has a fixed goal structure with the number one goal
>>>> >> being to always do what humans tell it to do, and the humans order it to
>>>> >> determine the truth or falsehood of something unprovable, then it's
>>>> >> infinite loop time and you've got yourself a space heater, not an AI.
>>>>
>>>
>>> > Right, but I'm not thinking of something that straightforward. We
>>> > already have that -- normal processors. Any one of them will do precisely
>>> > what we order it to do.
>>
>>
>> Yes, and because the microprocessors in our computers do precisely what we
>> order them to do and not what we want them to do they sometimes go into
>> infinite loops, and because they never get bored they will stay in that loop
>> forever, or at least until we reboot our computer; if we're just using the
>> computer to surf the internet that's only a minor inconvenience but if the
>> computer were running a nuclear power plant or the New York Stock Exchange
>> it would be somewhat more serious; and if your friendly AI were running the
>> entire world the necessity of a reboot would be even more unpleasant.
>>
>>>>
>>>> >> Real minds avoid this infinite loop problem because real minds don't
>>>> >> have fixed goals; real minds get bored and give up.
>>>
>>>
>>>
>>> > At that level, boredom would be a very simple mechanism, easily replaced
>>> > by something like: try this for x amount of time and then move on to
>>> > another goal.
>>
>>
>> But how long should x be? Perhaps in just one more second you'll get the
>> answer, or maybe two, or maybe 10 billion years, or maybe never. I think
>> determining where to place the boredom point for a given type of problem may
>> be the most difficult part in making an AI;
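(A minimal sketch, in Python, of the "try for x amount of time and then move
on" mechanism being discussed here. The goal functions and the time budget are
purely illustrative, and John's point is exactly that choosing budget_seconds
well for a given class of problems is the hard part.)

import time

def pursue_goals(goals, budget_seconds=1.0):
    """Work on each goal in turn, abandoning it ("getting bored") when its
    time budget runs out. Each goal is a function that does one small chunk
    of work and returns a result, or None if it is not finished yet."""
    results = {}
    for name, step in goals.items():
        deadline = time.monotonic() + budget_seconds
        result = None
        while time.monotonic() < deadline:
            result = step()            # one bounded chunk of work
            if result is not None:     # solved before boredom set in
                break
        results[name] = result         # None means we gave up on this goal
    return results

# Illustrative goals: one that finishes at once, one that never will
# (standing in for an unprovable / undecidable question).
goals = {
    "easy": lambda: 42,
    "unprovable": lambda: None,
}
print(pursue_goals(goals, budget_seconds=0.1))
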
>
>Would you agree that the universal dovetailer would get the job done?
>
>
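(For anyone unfamiliar with the term: the universal dovetailer is, roughly, a
program that generates all programs and executes them in an interleaved
fashion, so no single non-halting computation can starve the rest -- it never
has to decide where to put the boredom point. Below is only a toy sketch of
dovetailing in Python; the program_factory is a made-up stand-in for the
enumeration of all programs.)

from itertools import count, islice

def dovetail(program_factory):
    """Interleaved ("dovetailed") execution of unboundedly many programs:
    at stage n, admit program n and run one step of every program admitted
    so far. program_factory(i) returns a generator that yields None for each
    step of work and a non-None value if/when it finds an answer."""
    running = {}
    for n in count(1):
        running[n] = program_factory(n)      # admit program n at stage n
        for i, prog in list(running.items()):
            try:
                out = next(prog)             # one step of program i
            except StopIteration:
                del running[i]               # program i halted with no answer
                continue
            if out is not None:
                yield (i, out)               # program i produced an answer

# Made-up programs: program i "finds" its answer after i steps; some of them
# could just as well run forever without blocking the others.
def program_factory(i):
    def prog():
        for _ in range(i):
            yield None                       # still working
        yield "answer from program %d" % i
    return prog()

for found in islice(dovetail(program_factory), 5):
    print(found)
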
>> Turing tells us we'll never find
>> an algorithm that works perfectly on all problems all of the time, so we'll
>> just have to settle for an algorithm that works pretty well on most problems
>> most of the time.
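(The reference is to the undecidability of the halting problem. A standard
sketch of Turing's diagonal argument in Python-flavoured form -- the
hypothetical halts() oracle is precisely what the theorem says cannot exist;
nothing here is meant to run the contradiction, only to show its shape.)

def halts(program, argument):
    """Hypothetical oracle: return True iff program(argument) eventually
    halts. Turing's theorem is that no total, always-correct algorithm
    can implement this for all programs and inputs."""
    raise NotImplementedError("assumed only for the sake of the argument")

def trouble(program):
    """Diagonal construction: do the opposite of whatever the oracle
    predicts about running 'program' on itself."""
    if halts(program, program):
        while True:          # predicted to halt, so loop forever
            pass
    return "halted"          # predicted to loop, so halt immediately

# Now ask what halts(trouble, trouble) should return: if it says True,
# trouble(trouble) loops; if it says False, trouble(trouble) halts. Either
# answer is wrong, so no such oracle exists -- which is why no algorithm
# can work perfectly on all problems all of the time.
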
>
>Ok, and I'm fascinated by the question of why we haven't found viable
>algorithms in that class yet -- although we know as a fact that it
>must exist, because our brains contain it.
>

We haven't proved our brain is computational in nature; if we had, we would
have proven computationalism to be true... and that's not the case. Maybe our
brain has some non-computational shortcut for that, maybe that's why AI is not
possible, maybe our brain has this "realness" ingredient that computations
alone lack. I'm not saying AI is not possible, I'm just saying we haven't
proved that "our brains contain it".


Regards,
Quentin



>> And you're opening up a huge security hole, in fact they just don't get any
>> bigger, you're telling the AI that if this whole "always obey humans no
>> matter what" thing isn't going anywhere just ignore it and move on to
>> something else. It's hard enough to protect a computer when the hacker is no
>> smarter than you are, but now you're trying to outsmart a computer that's
>> thousands of times smarter than yourself. It can't be done.
>
>But you're thinking of smartness as some unidimensional quantity. I
>suspect it's much more complicated than that. As with life, we only
>really know one type of higher intelligence, but who's to say there
>aren't many others? The same way the field of artificial life started
>with the premise of "life as it could be", I think that it is viable
>to explore the idea of "intelligence as it could be" in AI.
>
>
>> Incidentally I've speculated that unusual ways to place the boredom point
>> may explain the link between genius and madness particularly among
>> mathematicians. Great mathematicians can focus on a problem with ferocious
>> intensity, for years if necessary, and find solutions that you or I could
>> not, but in everyday life that same attribute of mind can sometimes cause
>> them to behave in ways that seem to be a bit, ah, odd.
>
>Makes sense.
>
>Telmo.
>
>
>>  John K Clark


-- 
All those moments will be lost in time, like tears in rain. 