Eric Baum wrote:

John> Fully decoding the human genome is almost impossible.  Not only
John> is there the problem of protein folding, which I think even
John> supercomputers can't fully solve, but the purpose for the
John> structure of each protein depends on interaction with the
John> incredibly complex molecular structures inside cells.

Yes, but you have all kinds of advantages in decoding the genome
that you don't have, for example, in decoding the human mind
(although you might have them in an AGI): such as the ability to perform
ingenious knockout experiments, comparative genomics, etc.

Also, the
John> genetic code for a human being is basically made of the same
John> elements that the genetic code for the lowliest single-celled
John> creature is made of, and yet it somehow describes the initial
John> structure of a system of neural cells that then develops into a
John> human brain through a process of embryological growth (which
John> includes biological interaction from the mother -- which is why
John> you can't just grow a human being from an embryo in a petri
John> dish), and
John> then a fairly long process of childhood development.

John> This is the way evolution created mind somewhat randomly over
John> three billion (and a half?) years.  The human mind is the
John> pinnacle of this evolution. With this mind along with collective
John> intelligence, it shouldn't take another three billion years to
John> engineer intelligence.  Evolution is slow -- human beings can
John> engineer.

Yes, but
(a) evolution had vastly more computational power than we do: it
could afford to use this brute-force method to design the brain; and
(b) plausible arguments (see What is Thought?) suggest that there
may be no better way to design a mind;

But ... I find it deeply _implausible_ that there is no better way to design a mind than through the computational effort implicit in evolution.

In particular, can you summarize how your plausible arguments address the idea that we have internal access to the structure of our thoughts (good old-fashioned introspection***), and that if this internal access is treated in the right way, it might give us ample clues about the workings of minds at the concept level -- enough understanding of how our own minds work that we might be able to build a viable artificial one? Granted, people might not have made good use of that access yet, but that does not speak to what they might do in the future.

I cannot see how anyone could come to a strong conclusion about the uselessness of deploying that internal knowledge.

Richard Loosemore


*** Introspection, after all, is what all AI researchers use as the original source of their algorithms. Whether they are committed to human-inspired AI or to anti-human ;-) Normative Rational AI, it was always some long-ago act of introspection that was the original source of the ideas now being formalized and implemented. Even logical, rational thought was noticed by the ancient Greek philosophers, who looked inside themselves and wondered how it was that their thoughts could lead to conclusions about the world.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303