--- Russell Wallace <[EMAIL PROTECTED]> wrote:

> On Wed, Apr 30, 2008 at 5:29 PM, Matt Mahoney <[EMAIL PROTECTED]>
> wrote:
> >  By modeling symbolic knowledge in a neural network.  I realize it
> is
> >  horribly inefficient, but at least we have a working model to
> >  start from.
> 
> Inefficient is reasonable, but how do you propose to do it at all?

By distributing the problem across the internet.  AGI can be divided
into lots of specialized experts and a network for getting messages to
the right experts.  http://www.mattmahoney.net/agi.html

I estimate the cost of AGI will be a substantial fraction of the value
of the human labor it replaces, roughly US $2 to $5 quadrillion
worldwide over the next 30 years.  Since no funding source is that
big, AGI must be a decentralized network of autonomous peers that have
an incentive to cooperate.  In an economy where information has
negative value on average (e.g. advertising), peers must compete for
reputation and audience by providing the most useful information.  This
provides an incentive to intelligently filter incoming messages and
route them to the appropriate experts by understanding their content.

"Understanding" can be as simple as matching terms in two documents, or
something more complex, such as matching a video clip to a text or
audio description.  However, there is an incentive to develop
sophisticated solutions (e.g. distinguishing TV programming from
commercials).  This is the S part of the problem, essentially a
hierarchical adaptive pattern recognition problem that could be
implemented as a neural network or something similar on each peer.  For
language, the pattern hierarchy is letters -> words -> semantic
categories -> grammatical structures.  The task is divided by pattern. 
A peer whose expertise is recognizing when a picture contains an animal
could route the message to peers that recognize cats or dogs.  I
believe that extremely narrow domains are practical in a network with
billions of peers.
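As a toy illustration of this routing scheme (all peer names and
vocabularies below are hypothetical, and simple term matching stands
in for a learned pattern recognizer), a peer might forward a message
to whichever specialists share the most terms with it:

```python
from collections import Counter

def tokenize(text):
    return [w.lower() for w in text.split()]

class Peer:
    """A specialist peer, described here only by a term vocabulary."""
    def __init__(self, name, vocabulary):
        self.name = name
        self.vocab = Counter(tokenize(vocabulary))

    def score(self, message):
        # Crude "understanding": count of terms shared with the message.
        words = tokenize(message)
        return sum(min(self.vocab[w], words.count(w)) for w in set(words))

def route(message, peers, top_k=1):
    # Forward the message to the best-matching specialist(s).
    ranked = sorted(peers, key=lambda p: p.score(message), reverse=True)
    return [p.name for p in ranked[:top_k]]

peers = [
    Peer("animals", "cat dog animal pet fur tail"),
    Peer("chess", "chess king queen pawn checkmate opening"),
    Peer("weather", "rain sun forecast storm temperature"),
]
print(route("is this picture a cat or a dog", peers))  # ['animals']
```

In the hierarchical version, the "animals" peer would itself hold a
list of downstream cat and dog specialists and apply the same routing
step to them.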

The D part is "old school" AI, calculators, databases, theorem provers,
programs that play chess, etc.  Interfacing these to natural language
is a job for the S peers, matching the most common expressions to their
formal equivalents.  This is not a hard problem in narrow domains.
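A minimal sketch of such an interface, assuming a narrow arithmetic
domain (the phrase patterns are illustrative, not a real grammar): an
S peer maps a few common English phrasings onto formal calculator
operations and passes anything else along.

```python
import re

# Common expressions mapped to their formal equivalents.
PATTERNS = [
    (re.compile(r"what is (\d+) plus (\d+)", re.I),  lambda a, b: int(a) + int(b)),
    (re.compile(r"what is (\d+) times (\d+)", re.I), lambda a, b: int(a) * int(b)),
]

def answer(question):
    for pattern, op in PATTERNS:
        m = pattern.search(question)
        if m:
            return op(*m.groups())
    return None  # outside this peer's narrow domain; route elsewhere

print(answer("What is 6 times 7"))  # 42
```

The point is that the S peer never needs general language
understanding; in a network of billions of peers, each one only has
to cover the handful of phrasings common in its own narrow domain.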

The AGI is "friendly" as long as humans make the bulk of decisions
about what information is valuable.  However, as hardware gets more
powerful, this may not always be the case.  I don't pretend that this
architecture is ultimately safe.  Two long-term risks:

- Slow takeoff.  The protocol evolves from natural language into
something incomprehensible as peers competing for computing resources
for recursive self-improvement come to value each other's attention
over human attention.

- Fast takeoff.  Peers with language and programming skills trick
humans and discover thousands of security vulnerabilities in
thousands of software applications, taking over every computer on the
internet while the network continues to appear to function normally.


-- Matt Mahoney, [EMAIL PROTECTED]

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/