--- On Thu, 9/18/08, Bob Mottram <[EMAIL PROTECTED]> wrote:

> > And this is the problem.  Although some people have the goal of making
> > an artificial person with all the richness and nuance of a sentient
> > creature with thoughts and feelings and yada yada yada.. some of us
> > are just interested in making more intelligent systems to do automated
> > tasks.  For some reason people think we're going to do this by making
> > an artificial person and then enslaving them.. that's not going to
> > happen because it's just not necessary.
>
> In this case what you're doing is really narrow AI, not AGI.

Let's distinguish between the two major goals of AGI. The first is to automate 
the economy. The second is to become immortal through uploading.

The first goal does not require any major breakthroughs in AI theory, just lots 
of work. If you have a lot of narrow AI and an infrastructure for routing 
natural language messages to the right experts, then you have AGI. I described 
one protocol (competitive message routing, or CMR) to make this happen at 
http://www.mattmahoney.net/agi.html but the reality will probably be more 
complex, using many protocols to achieve the same result. Regardless of the 
exact form, we can estimate its cost. The human labor now required to run the 
global economy was worth US $66 trillion in 2006 and is increasing at 5% per 
year. At current interest rates, the value of an automated economy is about $1 
quadrillion. We should expect to pay this much, because there is a tradeoff 
between having it sooner and waiting until the cost of hardware drops.
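To make the arithmetic explicit: one way to reproduce the ~$1 quadrillion figure is to value all future human labor as a growing perpetuity, PV = L / (r - g). The $66 trillion and 5% growth are from the figures above; the discount rate of about 11.6% is my assumption, chosen so the net rate r - g comes out to 6.6%.

```python
# Present value of a growing perpetuity: PV = L / (r - g).
# L: annual value of human labor ($66 trillion in 2006, per the text)
# g: its growth rate (5% per year, per the text)
# r: discount (interest) rate -- 11.6% is an assumed value that makes
#    the net rate r - g = 6.6%, matching the ~$1 quadrillion estimate.
L = 66e12
g = 0.05
r = 0.116
pv = L / (r - g)
print(f"PV of automating the economy: ${pv:.2e}")  # -> $1.00e+15
```

Different interest-rate assumptions move the answer, but any plausible net rate in the mid single digits lands in the same quadrillion-dollar ballpark.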

This huge cost requires a competitive system with distributed ownership in 
which information has negative value and resource owners compete for attention 
and reputation by providing quality data. CMR, like any distributed knowledge 
base, is hostile: we will probably spend as many CPU cycles and as much human 
labor filtering spam and attacks as detecting useful features in language and 
video.
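The routing idea can be sketched in a few lines. To be clear, everything below (the class names, keyword-overlap scoring, the reputation weight) is my own toy illustration of the general concept, not the actual CMR protocol:

```python
# Toy sketch of competitive message routing (an illustration of the idea,
# not the CMR protocol itself): each narrow-AI expert advertises keywords,
# and a message is routed to the expert whose advertised topic best matches,
# weighted by a reputation score that would rise with useful answers and
# fall for spam.
from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    keywords: set
    reputation: float = 1.0  # grows with good answers, shrinks with spam

def route(message: str, experts):
    """Send the message to the expert with the best reputation-weighted match."""
    words = set(message.lower().split())
    def score(e):
        return len(words & e.keywords) * e.reputation
    best = max(experts, key=score)
    return best if score(best) > 0 else None

experts = [
    Expert("weather", {"rain", "forecast", "temperature"}),
    Expert("travel", {"flight", "hotel", "booking"}),
]
match = route("what is the forecast for rain tomorrow", experts)
print(match.name)  # -> weather
```

A real system would need authenticated reputation updates and adversarial filtering, which is exactly where the spam-fighting cost above comes in.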

The second goal of AGI is uploading and intelligence augmentation. It requires 
advances in modeling, scanning, and programming human brains and bodies. You 
are programmed by evolution to fear death, so a copy of you that others cannot 
distinguish from you, and that will be turned on after you die, has value to 
you. Whether the copy is really "you" and contains your consciousness is an 
unimportant philosophical question. If you see your dead friends brought back 
to life with all of their memories and behavior intact (as far as you can 
tell), you will probably consider it a worthwhile investment.

Brain scanning is probably not required. By the time we have the technology to 
create artificial generic humans, surveillance will probably be so cheap and 
pervasive that creating a convincing copy of you could be done just by 
accessing public information about you. This would include all of your 
communication through computers (email, website accesses, phone calls, TV), and 
all of your travel and activities in public places captured on video.

Uploads will have goals independent of their owners because their owners have 
died. They will also have opportunities not available to human brains. They 
could add CPU power, memory, I/O, and bandwidth. Or they could reprogram their 
brains, to live in simulated Utopian worlds, modify their own goals to want 
what they already have, or enter euphoric states. Natural selection will favor 
the former over the latter.

-- Matt Mahoney, [EMAIL PROTECTED]


