Richard> But ... I find it deeply _implausible_ that there is no
Richard> better way to design a mind than through the computational
Richard> effort implicit in evolution.

Richard> In particular, can you summarize how your plausible arguments
Richard> address the idea that we have internal access to the
Richard> structure of our thoughts (good old fashioned
Richard> introspection***), and that if this internal access is
Richard> treated in the right way, it might give us plenty enough
Richard> clues as to the workings of minds at the concept level?

It might give us enough clues -- I didn't claim to prove otherwise,
and if you look at my paper "A Working Hypothesis for General
Intelligence" at http://whatisthought.com/eric.html you will see
that I am engaged in precisely such an effort.

However, the argument that it is not going to be possible, which I
claim is at least plausible, is (very briefly) the following:
(1) Understanding comes from Occam code: very concise code that solves
a bunch of naturally presented problems (and likely, only from that).
(2) Finding Occam code is a computationally hard problem, requiring
massive computational resources.
(3) Evolution, which has run through some 10^44 creatures, each of
which interacted with the world, had vastly more computational power
than we do.
(4) People are not capable of solving NP-hard problems, such as
finding compressed code. Roughly speaking, there may be no way to
solve such problems that is much better than tinkering and testing
(see the toy sketch after this list).
(5) As you point out, AI researchers have been trying for a while to
implement the results of their introspection, and they have not
notably succeeded. A reasonable explanation for why AI programs are
generally clueless is that they are not nearly compressed enough.
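To make the tinker-and-test point in (4) concrete, here is a minimal
sketch (my illustration only; the task, token set, and scoring are all
invented for the example): a Python hill climber that randomly mutates
tiny RPN programs and keeps a change only when it improves an
error-plus-length, Occam-style score.

import random

TOKENS = ["x", "1", "+", "*"]

def run(prog, x):
    """Evaluate an RPN token list; None signals a malformed program."""
    stack = []
    for tok in prog:
        if tok == "x":
            stack.append(x)
        elif tok == "1":
            stack.append(1)
        else:
            if len(stack) < 2:
                return None
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if tok == "+" else a * b)
    return stack[-1] if stack else None

# Hidden target the searcher must rediscover: f(x) = x*x + 1.
DATA = [(x, x * x + 1) for x in range(-5, 6)]

def score(prog):
    """Total fit error plus a length penalty: concise code is preferred."""
    err = 0
    for x, y in DATA:
        out = run(prog, x)
        if out is None:
            return float("inf")
        err += abs(out - y)
    return err + 0.5 * len(prog)

def mutate(prog):
    """One random edit: change, insert, or delete a token."""
    p = list(prog)
    op = random.choice(["change", "insert", "delete"])
    if op == "change" and p:
        p[random.randrange(len(p))] = random.choice(TOKENS)
    elif op == "insert":
        p.insert(random.randrange(len(p) + 1), random.choice(TOKENS))
    elif op == "delete" and p:
        p.pop(random.randrange(len(p)))
    return p

random.seed(0)
best = ["x"]
best_s = score(best)
for _ in range(20000):
    cand = mutate(best)        # tinker...
    s = score(cand)
    if s <= best_s:            # ...and test: keep it only if it helped
        best, best_s = cand, s
print(best, best_s)  # typically ends near something like ['x','x','*','1','+']

Note that the search never reasons about the problem at all; it just
tinkers, runs the result, and tests.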

There has been an argument on this list the last few days for opaque
representations. There is a reason why opaqueness may be essential:
understanding comes from compression, and truly compressed code cannot
be further compressed, hence cannot be understood. But then there may
be nothing much better to do than to tinker, and run the result to see
if the tinkering helped...
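A crude way to see the "can't compress further" point in code (zlib
here is only a stand-in for an ideal compressor, and the byte counts
are illustrative):

import zlib

text = b"the cat sat on the mat " * 100   # highly structured input
once = zlib.compress(text, 9)             # compresses dramatically
twice = zlib.compress(once, 9)            # a second pass finds nothing
print(len(text), len(once), len(twice))   # e.g. 2300, ~60, slightly larger

The structure a compressor can exploit is exactly what it "understands";
once that structure is squeezed out, the output looks like noise to it.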

Eric Baum http://whatisthought.com



Richard> Enough understanding of how our own minds work that we might
Richard> be able to build a viable artificial one?  Granted, people
Richard> might not have made good use of that access yet, but that
Richard> does not speak to what they might do in the future.

Richard> I cannot see how anyone could come to a strong conclusion
Richard> about the uselessness of deploying that internal knowledge.

Richard> Richard Loosemore


Richard> *** Introspection, after all, is what all AI researchers use
Richard> as the original source of their algorithms.  Whether
Richard> committed to human-inspired AI, or to anti-human ;-)
Richard> Normative Rational AI, it was always some long-ago
Richard> introspection that was the original source of the ideas that
Richard> are now being formalized and implemented.  Even logical,
Richard> rational thought was noticed by the ancient Greek
Richard> philosophers who looked inside themselves and wondered how it
Richard> was that their thoughts could lead to conclusions about the
Richard> world.

