Matt Mahoney wrote:
--- Mike Tintner [EMAIL PROTECTED] wrote:
My point was how do you test the *truth* of items of knowledge. Google tests the *popularity* of items. Not the same thing at all. And it won't work.
It does work because the truth is popular. Look at prediction markets. Look at

--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Perhaps you have not read my proposal at http://www.mattmahoney.net/agi.html or don't understand it.
Some of us have read it, and it has nothing whatsoever to do with Artificial Intelligence. It is a labor-intensive

On Apr 9, 2008, at 12:33 PM, Derek Zahn wrote:
Matt Mahoney writes:
Just what do you want out of AGI? Something that thinks like a person or something that does what you ask it to?
The *or* is interesting. If it really thinks like a person, and at at least human level, then I doubt very

Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Perhaps you have not read my proposal at http://www.mattmahoney.net/agi.html or don't understand it.
Some of us have read it, and it has nothing whatsoever to do with Artificial Intelligence. It is a

I asked: Imagine we have an AGI. What exactly does it do? What *should* it do?
Note that I think I roughly understand Matt's vision for this: roughly, it is Google, and it will gradually get better at answering questions and taking commands as more capable systems are linked in to the

Richard Loosemore: I am not sure I understand. There is every reason to think that a currently-envisionable AGI would be millions of times smarter than all of humanity put together. Simply build a human-level AGI, then get it to bootstrap to a level of, say, a thousand times human speed

Derek Zahn wrote:
I asked:
Imagine we have an AGI. What exactly does it do? What *should* it do?
Note that I think I roughly understand Matt's vision for this: roughly, it is Google, and it will gradually get better at answering questions and taking commands as more capable systems are

Derek Zahn wrote:
Richard Loosemore:
I am not sure I understand.
There is every reason to think that a currently-envisionable AGI would be millions of times smarter than all of humanity put together.
Simply build a human-level AGI, then get it to bootstrap to a level of, say, a

Samantha Atkins writes:
Beware the wish-granting genie conundrum.
Yeah, you put it better than I did; I'm not asking what wishes we'd ask a genie to grant, I'm wondering specifically what we want from the machines that Ben and Richard and Matt and so on are thinking about and building.

Richard Loosemore: I am only saying that I see no particular limitations, given the things that I know about how to build an AGI. That is the best I can do.
Sorry to flood everybody's mailbox today; I will make this my last message. I'm not looking to impose a viewpoint on anybody; you have

--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Just what do you want out of AGI? Something that thinks like a person or something that does what you ask it to?
Either will do: your suggestion achieves neither.
If I ask your non-AGI the following question: How

11 matches