On Wed, Nov 19, 2008 at 1:21 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Wed, 11/19/08, Daniel Yokomizo <[EMAIL PROTECTED]> wrote:
>
>> On Tue, Nov 18, 2008 at 11:23 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>> > Seed AI is a myth.
>> > http://www.mattmahoney.net/agi2.html (section 2).
>>
>> (I'm assuming you meant the section "5.1. Recursive Self Improvement")
>
> That too, but mainly in the argument for the singularity:
>
> "If humans can produce smarter than human AI, then so can they, and faster"
>
> I am questioning the antecedent, not the consequent.
>
> RSI is not a matter of an agent with an IQ of 180 creating an agent with an
> IQ of 190.
I just want to be clear: you agree that an agent is able to create a better version of itself, not just in terms of a badly defined measure such as IQ but also in terms of resource utilization.

> Individual humans can't produce much of anything beyond spears and clubs
> without the global economy in which we live. To count as self improvement,
> the global economy has to produce a smarter global economy. This is already
> happening.

Do you agree with the statement "the global economy in which we live is a result of actions of human beings"? How would it be different for AGIs? Do you disagree that better agents would be able to build an equivalent global economy much faster than the time it took humans (counting all the centuries since the last big ice age)?

> My paper on RSI referenced in section 5.1 (and submitted to JAGI) only
> applies to systems without external input. It would apply to the unlikely
> scenario of a program that could understand its own source code and rewrite
> itself until it achieved vast intelligence while being kept in isolation for
> safety reasons. This scenario often came up on the SL4 list. It was referred
> to as AI boxing. It was argued that a superhuman AI could easily trick its
> relatively stupid human guards into releasing it, and there were some
> experiments where people played the role of the AI and proved just that, even
> without vastly superior intelligence.
>
> I think that the boxed AI approach has been discredited by now as being
> impractical to develop for reasons independent of its inherent danger and my
> proof that it is impossible. All of the serious projects in AI are taking
> place in open environments, often with data collected from the internet, for
> simple reasons of expediency. My argument against seed AI is in this type of
> environment.

I'm asking for your comments on the technical issues regarding seed AI and RSI, regardless of environment.
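To make the isolated-RSI scenario concrete, here is a toy sketch (my own hypothetical illustration, not anything from your paper): a program that "reads" its own parameters, proposes rewritten children, and keeps a child only if it scores better on a fixed internal benchmark. In the no-external-input setting, that benchmark must itself be part of the program, which is exactly where the argument bites: the child is never tested against anything the parent did not already contain.

```python
import random

# The "program" is just a line's coefficients [a, b]; its "intelligence" is
# how well it fits a target known only through the fixed, built-in benchmark.
BENCHMARK = [(x, 3 * x + 1) for x in range(10)]  # internal, never updated

def score(coeffs):
    """Lower is better: squared error on the internal benchmark."""
    a, b = coeffs
    return sum((a * x + b - y) ** 2 for x, y in BENCHMARK)

def rewrite(coeffs, rng):
    """Produce a mutated 'child' version of the program."""
    return [c + rng.uniform(-0.5, 0.5) for c in coeffs]

def self_improve(generations=200, seed=0):
    """Hill-climb: accept a child only if it beats the current version."""
    rng = random.Random(seed)
    current = [0.0, 0.0]
    for _ in range(generations):
        child = rewrite(current, rng)
        if score(child) < score(current):
            current = child
    return current

best = self_improve()
print(best, score(best))
```

The loop does improve the program relative to its frozen benchmark, but any knowledge not encoded in `BENCHMARK` is forever out of reach, which is the sense in which "improvement" here is bounded by what the system started with.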
Is there any technical impossibility for an AGI to improve its own code in all possible environments? Also, it's not clear to me in which types of environments you see problems with RSI (whether it's the boxing that makes it impossible, whether it's an open environment with access to the internet, or whether it's both or neither); could you elaborate further?

> It is extremely expensive to produce a better global economy. The current
> economy is worth about US$ 1 quadrillion. No small group is going to control
> any significant part of it.

I want to keep this discussion focused on the technical impossibilities of RSI, so for now I'm going to set aside this side discussion about the global economy, but we can come back to it later.

> -- Matt Mahoney, [EMAIL PROTECTED]

Best regards,
Daniel Yokomizo
