Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore
Matt Mahoney wrote: --- Mike Tintner [EMAIL PROTECTED] wrote: My point was how do you test the *truth* of items of knowledge. Google tests the *popularity* of items. Not the same thing at all. And it won't work. It does work because the truth is popular. Look at prediction markets. Look at

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: Perhaps you have not read my proposal at http://www.mattmahoney.net/agi.html or don't understand it. Some of us have read it, and it has nothing whatsoever to do with Artificial Intelligence. It is a labor-intensive

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Samantha Atkins
On Apr 9, 2008, at 12:33 PM, Derek Zahn wrote: Matt Mahoney writes: Just what do you want out of AGI? Something that thinks like a person or something that does what you ask it to? The or is interesting. If it really thinks like a person and at at least human level, then I doubt very

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: Perhaps you have not read my proposal at http://www.mattmahoney.net/agi.html or don't understand it. Some of us have read it, and it has nothing whatsoever to do with Artificial Intelligence. It is a

RE: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Derek Zahn
I asked: Imagine we have an AGI. What exactly does it do? What *should* it do? Note that I think I roughly understand Matt's vision for this: roughly, it is google, and it will gradually get better at answering questions and taking commands as more capable systems are linked in to the

RE: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Derek Zahn
Richard Loosemore: I am not sure I understand. There is every reason to think that a currently-envisionable AGI would be millions of times smarter than all of humanity put together. Simply build a human-level AGI, then get it to bootstrap to a level of, say, a thousand times human speed

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore
Derek Zahn wrote: I asked: Imagine we have an AGI. What exactly does it do? What *should* it do? Note that I think I roughly understand Matt's vision for this: roughly, it is google, and it will gradually get better at answering questions and taking commands as more capable systems are

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore
Derek Zahn wrote: Richard Loosemore: I am not sure I understand. There is every reason to think that a currently-envisionable AGI would be millions of times smarter than all of humanity put together. Simply build a human-level AGI, then get it to bootstrap to a level of, say, a

RE: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Derek Zahn
Samantha Atkins writes: Beware the wish granting genie conundrum. Yeah, you put it better than I did; I'm not asking what wishes we'd ask a genie to grant, I'm wondering specifically what we want from the machines that Ben and Richard and Matt and so on are thinking about and building.

RE: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Derek Zahn
Richard Loosemore: I am only saying that I see no particular limitations, given the things that I know about how to build an AGI. That is the best I can do. Sorry to flood everybody's mailbox today; I will make this my last message. I'm not looking to impose a viewpoint on anybody; you have

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: Just what do you want out of AGI? Something that thinks like a person or something that does what you ask it to? Either will do: your suggestion achieves neither. If I ask your non-AGI the following question: How