On 12/27/2016 11:34 PM, Telmo Menezes wrote:
> On Wed, Dec 28, 2016 at 8:17 AM, Brent Meeker <[email protected]> wrote:
>> Exactly so.  Once we can engineer robots to act with human-like
>> intelligence, questions about consciousness will be seen as either
>> meaningless or "the wrong question".
>
> I think it is very unlikely that we will engineer human-level
> intelligence directly. It seems more plausible that we will engineer
> the process that will allow it to develop to that level of complexity.
> Then we will be left with exactly the same questions, except that we
> will have some super-complex process running on FPGAs and GPUs or
> whatever it is, instead of just wet neurons.
I agree with that. Already some of the most advanced AIs use neural nets that have to be trained, so we won't know exactly how they think. But there is a difference. First, we can make a copy of an AI. Second, we can determine exactly what its thinking process is. So even if we can't directly program in more or less empathy, more or less humor, etc., we'll be able to learn how emotions and values are implemented. Whether such AIs are conscious or not will be like asking whether your car is animated or not.

> I'm not sure anything will change in that regard, except perhaps the
> shattering of another version of the illusion that there is something
> special about our species.

The other thing that will change is that we will soon be the second-smartest species on the planet, displacing chimpanzees to third.

Brent
