On Tue, Oct 14, 2008 at 8:36 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Ben,
> If you want to argue that recursive self improvement is a special case of
> learning, then I have no disagreement with the rest of your argument.
>
> But is this really a useful approach to solving AGI? A group of humans can
> generally make better decisions (more accurate predictions) by voting than any
> member of the group can. Did these humans improve themselves?
>
> My point is that a single person can't create much of anything, much less an
> AI smarter than himself. If it happens, it will be created by an organization
> of billions of humans. Without this organization, you would probably not think
> to create spears out of sticks and rocks.
>
> That is my problem with the seed AI approach. The seed AI depends on the
> knowledge and resources of the economy to do anything. An AI twice as smart
> as a human could not do any more than 2 people could. You need to create an
> AI that is billions of times smarter to get anywhere.
>
> We are already doing that. Human culture is improving itself by accumulating
> knowledge, by becoming better organized through communication and
> specialization, and by adding more babies and computers.
>

You are slipping from a strained interpretation of the technical
argument to the informal point that the argument was intended to
rationalize. If the interpretation of a technical argument is weaker
than the original informal argument it was invented to support, there
is no point in the technical argument. Citing the fact that 2+2=4
won't lend technical support to, e.g., the philosophy of solipsism.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

