Thanks. But like I said, airy generalities.
That machines can become faster and faster at computations and accumulating
knowledge is certain. But that's narrow AI.
For general intelligence, you first have to be able to integrate knowledge as
well as accumulate it. We have learned vast amounts about the brain in the
last few years, for example - perhaps more than in all previous history. But
this hasn't led to any comparably fast advance in integrating that
knowledge.
Second, you have to be able to discover knowledge - be creative - fill in
some of the many gaping holes in every domain of knowledge. That, again,
doesn't march to a mathematical formula.
Hence, I suggest, you don't see any glimmers of RSI in any actual domain of
human knowledge. If it were possible at all, you should see some signs,
however small.
The whole idea of RSI strikes me as high-school naive - completely lacking
in any awareness of the creative, systemic structure of how knowledge and
technology actually advance in different domains.
Another example: try to recursively improve the car. Like every piece of
technology, it's not a solitary thing but is bound up in vast technological
ecosystems (here: roads, oil, gas stations, etc.) that cannot be improved
in a simple, linear fashion.
Similarly, I suspect each individual's mind/intelligence depends on complex
interdependent systems and paradigms of knowledge. And so of necessity would
any AGI's mind. (Not that mind is possible without a body).
Matt:> Here is Vernor Vinge's original essay on the singularity.
http://mindstalk.net/vinge/vinge-sing.html
The premise is that if humans can create agents with above-human
intelligence, then so can those agents. What I am questioning is whether
agents at any intelligence level can do this. I don't believe that agents
at any level can recognize higher intelligence, and therefore they cannot
test their creations. We rely on competition in an external environment to
make fitness decisions. The parent isn't intelligent enough to make the
correct choice.
-- Matt Mahoney, [EMAIL PROTECTED]
----- Original Message ----
From: Mike Tintner <[EMAIL PROTECTED]>
To: [email protected]
Sent: Thursday, August 28, 2008 7:00:07 PM
Subject: Re: Goedel machines (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity of Embodiment))
Matt: If RSI is possible, then there is the additional threat of a fast
takeoff of the kind described by Good and Vinge.
Can we have an example of just one or two subject areas or domains where a
takeoff has been considered (by anyone) as possibly occurring, and what
form such a takeoff might take? I hope the discussion of RSI is not
entirely one of airy generalities, without any grounding in reality.
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com