Eric Baum wrote:
Richard> But ... I find it deeply _implausible_ that there is no
Richard> better way to design a mind than through the computational
Richard> effort implicit in evolution.
Richard> In particular, can you summarize how your plausible arguments
Richard> address the idea that we have internal access to the
Richard> structure of our thoughts (good old-fashioned
Richard> introspection***), and that if this internal access is
Richard> treated in the right way, it might give us plenty enough
Richard> clues as to the workings of minds at the concept level?
It might give us enough clues-- I didn't claim to prove otherwise,
and if you look at my paper
"A Working Hypothesis for General Intelligence"
at http://whatisthought.com/eric.html
you will see that I am engaged in precisely such an effort.
However, the argument that it is not going to be possible, which
I claim is at least plausible, is (very briefly) the following:
Every step of the following argument begs questions and lacks force:
(1) understanding comes from Occam code, very concise code that solves
a bunch of naturally presented problems (and likely, only from that).
"Understanding" is more than the finding of compact code. To identify
it with the compactness of the product is to trivialize it.
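Whatever its relation to understanding, the sense of "Occam code" at
stake can at least be illustrated. A minimal sketch in Python (the data
and the two "theories" are invented for the purpose): two descriptions
of the same observations, one that memorizes them and one concise
generator that reproduces them exactly.

    data = list(range(256)) * 40      # 10,240 "observations"
    memorize = repr(data)             # theory 1: store everything
    occam = "list(range(256)) * 40"   # theory 2: a 21-character generator
    assert eval(occam) == data        # both account for the data exactly
    print(len(memorize), len(occam))  # tens of thousands of chars vs. 21

Nothing in this equivalence says why producing the short form should
amount to understanding, which is the objection above.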
(2) Finding Occam code is a computationally hard problem, requiring
massive computational resources to achieve.
The concept of "Finding Occam code" is so imprecise that your statement
that it is computationally hard is impossible to justify. The only way
to make the statement rigorous is to define "finding Occam code" so
narrowly that it would then not mean the same thing as finding out how
to build an intelligent system.
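For what it is worth, the intuition behind the hardness claim can be
made concrete, though only via exactly the kind of narrow formalization
just complained of. A toy sketch (alphabet, target, and length cap all
invented): find the shortest arithmetic expression for a number by
enumerating every string, a space that grows as 14^L.

    import itertools

    ALPHABET = "0123456789+*()"          # 14 symbols
    TARGET = 2**20                       # 1048576, a 7-digit literal

    def shortest_expression(target, max_len=5):
        # Brute force over every string of each length: 14**L candidates.
        for length in range(1, max_len + 1):
            for chars in itertools.product(ALPHABET, repeat=length):
                expr = "".join(chars)
                try:
                    if eval(expr) == target:
                        return expr
                except Exception:        # most strings are not even code
                    pass
        return None

    print(shortest_expression(TARGET))   # a 5-symbol answer, e.g. "16**5"

Even this toy is exponential in expression length; nothing about it
licenses conclusions about building intelligent systems.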
(3) Evolution, which has run through some 10^44 creatures, each of
which interacted with the world, had vastly more computational power
than we do.
Suppose, for the sake of argument, that evolution had wasted
99.9999999999999999999999999999999999% of all that computation on
calculations like "x=x". In that case, I would be able to skip those
wasted steps and duplicate its achievement on the residual, meaningful
code using a pocket calculator. Therefore, nothing follows from your
statement.
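For the record, the arithmetic of that hypothetical (reading the quoted
percentage as all but one part in 10^36):

    total = 10**44            # creatures evolution has run through
    useful = total // 10**36  # all but one part in 10^36 spent on "x=x"
    print(useful)             # 100000000, i.e. 10^8 meaningful steps

Ten to the eighth steps is pocket-calculator territory; the raw total
licenses nothing until the wasted fraction is pinned down.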
(4) People are not capable of solving NP-hard problems, like finding
compressed code. Roughly speaking, there may be no way to solve such
problems that is much better than tinkering and testing.
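Concretely, "tinkering and testing" amounts to stochastic local search,
as in this caricature (the bit-string "program" and its score are
stand-ins for real code and real tests):

    import random

    def score(program):
        return sum(program)                  # stand-in for "tests passed"

    program = [0] * 64                       # start from a blank program
    for _ in range(10_000):
        candidate = program.copy()
        candidate[random.randrange(64)] ^= 1 # tinker: flip one bit
        if score(candidate) >= score(program):
            program = candidate              # test: keep it if no worse
    print(score(program))                    # climbs to 64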
Suppose that evolution *had* used its compute time efficiently, but that
the result of all that work was that evolution discovered that an
intelligent system could be built using a seed program (say a nice juicy
General Concept Learning Algorithm) equivalent to only 25 lines of code.
Further, suppose that our introspective abilities were such that we
could actually look inside and see the 25 lines. These two assumptions
are completely consistent with what we know of the universe ... they
*could* be true. The problem of finding those 25 lines might well have
been NP-hard ... but ten minutes' introspection by the first AI
researcher would have revealed them, and then your statement that
"People are not capable of solving NP-hard problems" would be
meaningless, because that researcher would have done precisely that.
Hence, your conclusion does not follow.
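The asymmetry this thought experiment exploits is the defining property
of NP: finding a solution may require exponential search, but checking a
proposed solution is cheap. Subset-sum as a stand-in (numbers invented):

    from itertools import combinations

    numbers = [3, 34, 4, 12, 5, 2]
    target = 9

    # Finding: exponential search over all 2**6 subsets.
    found = next(s for r in range(len(numbers) + 1)
                   for s in combinations(numbers, r)
                   if sum(s) == target)

    # Checking a proposed answer: one line, however it was obtained.
    assert sum(found) == target

If introspection hands over the 25 lines, the cost of the search that
produced them is moot; all that remains is the cheap check that they
work.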
(5) As you point out, AI researchers have been trying for a while to
implement the results of their introspection, and they have not
notably succeeded. A reasonable explanation for why AI programs
are generally clueless is that they are not nearly compressed enough.
This argument could equally well have been applied to the Wright
Brothers a year before their first flight: a reasonable explanation for
why you guys are flightless is that your machines don't flap enough.
Compression has as much to do with intelligence as flapping has to do
with the performance of aircraft.
Richard Loosemore
There has been an argument on this list over the last few days for
opaque representations. There is a reason why opaqueness may be
essential: understanding comes from compression, but truly compressed
code cannot be further compressed and hence cannot be understood. In
that case, there may be nothing much better to do than to tinker, and
run the result to see if the tinkering helped...
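The incompressibility half of this is easy to demonstrate (a sketch; the
sample text and compression level are arbitrary): a second pass of a
compressor gains nothing, because the redundancy a reader might latch
onto is exactly what the first pass removed.

    import zlib

    original = b"the quick brown fox jumps over the lazy dog " * 500
    once = zlib.compress(original, 9)    # redundancy stripped out
    twice = zlib.compress(once, 9)       # nothing left to strip
    print(len(original), len(once), len(twice))
    # 22000, then a few hundred bytes, then typically slightly *more*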
Eric Baum http://whatisthought.com
Richard> Enough understanding of how our own minds work that we might
Richard> be able to build a viable artificial one? Granted, people
Richard> might not have made good use of that access yet, but that
Richard> does not speak to what they might do in the future.
Richard> I cannot see how anyone could come to a strong conclusion
Richard> about the uselessness of deploying that internal knowledge.
Richard> Richard Loosemore
Richard> *** Introspection, after all, is what all AI researchers use
Richard> as the original source of their algorithms. Whether one is
Richard> committed to human-inspired AI or to anti-human ;-)
Richard> Normative Rational AI, it was always some long-ago
Richard> introspection that was the original source of the ideas that
Richard> are now being formalized and implemented. Even logical,
Richard> rational thought was noticed by the ancient Greek
Richard> philosophers who looked inside themselves and wondered how it
Richard> was that their thoughts could lead to conclusions about the
Richard> world.
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303