--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Matt Mahoney wrote:
> > Because recursive self improvement is a competitive evolutionary process
> even
> > if all agents have a common ancestor.
> 
> As explained in a parallel post: this is a non sequitur.

OK, consider a network of agents, such as my proposal,
http://www.mattmahoney.net/agi.html
The design is an internet-wide system of narrow, specialized agents and an
infrastructure that routes (natural language) messages to the right experts. 
Cooperation with humans and other agents is motivated by an economy that
places a negative value on information: storing and forwarding messages costs
resources, so agents keep only the messages they judge useful.  Agents that
provide useful services and useful information (in the opinion of other
agents) gain storage space and network bandwidth by having their messages
stored and forwarded.  Although agents compete for resources, the network is
cooperative in the sense of sharing knowledge.
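To make the idea concrete, here is a minimal sketch of that economy, assuming
a crude keyword-overlap score as the stand-in for "usefulness in the opinion
of other agents"; the class and field names are illustrative, not part of the
proposal:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    text: str

@dataclass
class Agent:
    """Hypothetical narrow expert; names and numbers are assumptions."""
    name: str
    keywords: set
    storage_budget: int = 10                  # resource units the agent can spend
    store: list = field(default_factory=list)

    def relevance(self, msg: Message) -> int:
        # keyword overlap standing in for judged usefulness
        return sum(1 for w in msg.text.lower().split() if w in self.keywords)

    def receive(self, msg: Message, cost: int = 1) -> bool:
        # information has negative value: storing/forwarding costs resources,
        # so the agent keeps a message only if usefulness exceeds the cost
        if self.relevance(msg) > cost and self.storage_budget >= cost:
            self.storage_budget -= cost
            self.store.append(msg)
            return True
        return False
```

For example, an agent built around {"rain", "forecast"} would pay to store a
weather report but silently drop an unrelated advertisement, which is the
competitive-but-cooperative behavior described above.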

Security is a problem in any open network.  I addressed some of these issues
in my proposal.  To prevent DoS attacks and vandalism, the protocol does not
provide a means to delete or modify messages once they are posted.  Agents
will be administered by humans who independently establish policies on which
messages to accept or ignore.  A likely policy is to ignore messages from
agents whose return address can't be verified, or messages unrelated to the
interests of the owner (as determined by keyword matching).  There is an
economic incentive not to send spam, viruses, false information, etc., because
malicious agents will tend to be blocked and isolated.  Agents will share
knowledge about other agents, so reputations form by consensus.
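The acceptance policy described above can be sketched as a single predicate.
The threshold, the boolean "verified" flag, and the peer-vote representation
are assumptions for illustration; the proposal leaves these choices to each
human administrator:

```python
def accept(sender_verified: bool,
           msg_keywords: set,
           owner_interests: set,
           peer_opinions: list,
           reputation_threshold: float = 0.5) -> bool:
    """Hypothetical per-agent policy combining the three checks:
    verifiable return address, keyword interest match, and
    reputation by consensus among peer agents."""
    if not sender_verified:
        return False                  # unverifiable return address: ignore
    if not (msg_keywords & owner_interests):
        return False                  # unrelated to the owner's interests
    # reputation by consensus: fraction of peers vouching for the sender
    if not peer_opinions:
        return False                  # unknown sender, no consensus yet
    reputation = sum(peer_opinions) / len(peer_opinions)
    return reputation >= reputation_threshold
```

A message from a verified sender on a topic the owner cares about is accepted
only if enough peers vouch for the sender, so a spammer is blocked and
isolated even when its messages match the owner's keywords.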

I foresee a problem when the collective computing power of the network exceeds
the collective computing power of the humans that administer it.  Humans will
no longer be able to keep up with the complexity of the system.  When your
computer says "please run this program to protect your computer from the
Singularity worm", how do you know you aren't actually installing the worm?

I would be interested in alternative AGI proposals that solve this problem of
humans being left behind, but I am not hopeful that there is a solution.  When
machines achieve superhuman intelligence, humans will lack the cognitive power
to communicate with them effectively.  An AGI talking to you would be like you
talking to your dog.  I suppose that uploading and brain augmentation would be
solutions, but then we wouldn't really be human anymore.


-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email