--- On Thu, 9/18/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

Sorry for being unclear. The two categories of AI that I refer to are the near 
term "smart internet" automated economy and longer term "artificial human" or 
transhuman phases. In the smart internet phase, individuals with competing 
goals own parts of the AGI (peers) and the message routing infrastructure 
provides a market that satisfies human goals efficiently. Peers work to satisfy 
the goals of their owners. Later, the network will be populated with 
intelligent peers that have their own goals independent of their (former) 
owners.

Just as the computation, storage, and communication eras of computing lack 
sharp boundaries, so will the automated economy and transhuman eras. Early on, 
people will add peers that try to appear human for various reasons, and with 
various degrees of success. Each such peer will know a lot about one person 
(typically its owner) and will go to the net for more general knowledge about 
people. This 
becomes easier as computers get faster and surveillance becomes more pervasive. 
Basically, your CMR client knows everything you ever typed into a computer. 
People may program their peers to become autonomous and to emulate their owners 
after the owners die. Such peers might work, earn money, and pay for their own 
hosting. Later, peers 
may buy robotic bodies as the technology becomes available.

About intelligence testing, early AGI would pass an IQ test or Turing test by 
routing questions to the appropriate experts. Later, transhumans could do the 
same, only they might choose not to take your silly test.

>>So perhaps you could name some applications of AGI that don't fall into the 
>>categories of (1) doing work or (2) augmenting your brain?
>
>3) learning as much as possible

Early AGI would do so because it is the most effective strategy to meet the 
goals of its owners. Later, transhumans would learn because they want to learn. 
They would want to learn because this is a basic human goal which was copied 
into them. Humans want to learn because intelligence requires both the ability 
to learn and the desire to learn. Humans are intelligent because intelligence 
increases evolutionary fitness.

>4) proving as many theorems as possible

Early AGI would route your theorem to theorem-proving experts, rank the 
results, and use those rankings to improve how it routes similar questions in 
the future. Later, transhumans could just ask the net.
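The route-rank-update loop described above can be sketched in a few lines of Python. To be clear, this is my own toy illustration, not anything CMR specifies: the Router class, the weight table, and the exponential-moving-average update rule are all assumptions made for the example.

```python
class Router:
    """Toy sketch of competitive message routing with feedback.

    Keeps one routing weight per expert; queries go to the
    highest-weighted experts, and scored feedback on their answers
    nudges the weights so that future routing of similar questions
    favors the experts that did well.
    """

    def __init__(self, expert_names, learning_rate=0.1):
        # Start every expert with an equal routing weight.
        self.weights = {name: 1.0 for name in expert_names}
        self.learning_rate = learning_rate

    def route(self, query, experts, k=2):
        # Send the query to the k highest-weighted experts only.
        chosen = sorted(self.weights, key=self.weights.get, reverse=True)[:k]
        return {name: experts[name](query) for name in chosen}

    def update(self, feedback):
        # feedback maps expert name -> score in [0, 1] judged from the
        # results. Move each consulted expert's weight toward its score.
        for name, score in feedback.items():
            self.weights[name] += self.learning_rate * (score - self.weights[name])


# Illustrative use with two stand-in "theorem provers".
experts = {
    "prover_a": lambda q: "proof sketch for " + q,
    "prover_b": lambda q: "no idea about " + q,
}
router = Router(experts)
results = router.route("your theorem", experts)
router.update({"prover_a": 1.0, "prover_b": 0.0})
```

After the update, `prover_a` outranks `prover_b`, so a similar question next time is routed to it first. A real market would also involve payment and competition among routers, which this sketch omits.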

>5) figuring out how to improve human life as much as possible 

Early AGI will make the market more efficient, which improves the lives of 
everyone who uses it. Later, transhumans will have their own ideas about what 
"improve" means. That is where AGI becomes dangerous.


-- Matt Mahoney, [EMAIL PROTECTED]



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com