--- On Wed, 11/19/08, Daniel Yokomizo <[EMAIL PROTECTED]> wrote:

> I just want to be clear, you agree that an agent is able to create a
> better version of itself, not just in terms of a badly defined measure
> such as IQ but also in terms of resource utilization.

Yes, even bacteria can do this.

> Do you agree with the statement: "the global economy in which we live
> is a result of actions of human beings"? How would it be different for
> AGIs? Do you disagree that better agents would be able to build an
> equivalent global economy much faster than it took humans (counting all
> the centuries since the last big ice age)?

You cannot separate AGI from the human-dominated economy. AGI cannot produce 
smarter AGI without help from the 10^10 humans who are already here, at least 
not until machines have completely replaced them.

> I'm asking for your comments on the technical issues regarding seed AI
> and RSI, regardless of environment. Are there any technical
> impossibilities for an AGI to improve its own code in all possible
> environments? Also it's not clear to me which types of environments you
> see problems with RSI in (is it the boxing that makes it impossible, an
> open environment with access to the internet, both, or neither?). Could
> you elaborate further?

My paper on RSI refutes one proposed approach to AGI: a self-improving system 
developed in isolation. I think that is good because such a
system would be very dangerous if it were possible. However, I am not aware of 
any serious proposals to do it this way, simply because cutting yourself off 
from the internet just makes the problem harder.

To me, RSI in an open environment is not pure RSI. It is a combination of 
self-improvement and learning. My position on this approach is not that it won't
work but that the problem is not as easy as it seems. I believe that if you do 
manage to create an AGI that is n times smarter than a human, then the result 
would be the same as if you hired O(n log n) people. (The factor of log n 
allows for communication overhead and overlapping knowledge). We don't really 
know what it means to be n times smarter, since we have no way to test it. But 
we would expect that such an AGI could work n times faster, learn n times 
faster, know n times as much, make n times as much money, and make predictions
as accurately as a vote by n people. I am not sure what other measures we could 
apply that would distinguish greater intelligence from just more people.
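
As a rough illustration of that equivalence (just a sketch; the constant factor 
in the O(n log n) estimate is unknown, so c = 1 and a base-2 log are assumptions):

    import math

    def equivalent_workforce(n, c=1.0):
        # Hypothetical size of a human team equivalent to an AGI that is
        # "n times smarter", using the O(n log n) estimate above with an
        # assumed constant c; log base 2 stands in for the communication
        # and knowledge-overlap overhead.
        return c * n * math.log2(n)

    for n in (10, 100, 1000):
        print(n, round(equivalent_workforce(n)))   # 33, 664, 9966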

So to make real progress, you need to make AGI cheaper than human labor for n = 
about 10^9. And that is expensive. The global economy has a complexity of 10^17 
to 10^18 bits. Most of that knowledge is not written down. It is in human 
brains. Unless we develop new technology like brain scanning, the only way to 
extract it is by communication at the rate of 2 bits per second per person.
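
As a sanity check on those numbers (a back-of-the-envelope sketch; the 10^9 
contributors and round-the-clock communication are assumptions for illustration):

    # Time to extract the economy's knowledge by communication alone,
    # at the 2 bits per second per person figure above.
    bits_total = 1e17            # low end of the 10^17 to 10^18 bit estimate
    people = 1e9                 # assumed number of people contributing
    rate = 2.0                   # bits per second per person
    seconds = bits_total / (people * rate)
    print(seconds / (3600 * 24 * 365))   # about 1.6 years of nonstop talking

Even under those generous assumptions the transfer takes over a year of 
continuous communication, and over a decade at the high end of the estimate.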

> I want to keep this discussion focused on the technical
> impossibilities of RSI, so I'm going to ignore for now this side
> discussion about the global economy but later we can go
> back to it.

My AGI proposal does not require any technical breakthroughs. But for something 
this expensive, you can't ignore the economic model. It has to be 
decentralized, and there have to be economic incentives for people to transfer 
their knowledge to it, and it has to be paid for. That is the obstacle you need 
to think about.

-- Matt Mahoney, [EMAIL PROTECTED]


