Ben Goertzel: Yes -- it is true, we have not created a human-level AGI yet. No
serious researcher disagrees. So why is it worth repeating the point?
Long ago I put Tintner in my killfile -- he's the only one there, and it's
regrettable, but it was either that or start taking blood pressure medication.
I asked: Imagine we have an AGI. What exactly does it do? What *should* it
do?
Note that I think I roughly understand Matt's vision for this: it is google,
and it will gradually get better at answering questions and taking commands as
more capable systems are linked in to the Internet.
Richard Loosemore: I am not sure I understand. There is every reason to
think that a currently-envisionable AGI would be millions of times smarter
than all of humanity put together. Simply build a human-level AGI, then get
it to bootstrap to a level of, say, a thousand times human speed.
Samantha Atkins writes:
Beware the wish-granting genie conundrum.
Yeah, you put it better than I did; I'm not asking what wishes we'd ask a genie
to grant, I'm wondering specifically what we want from the machines that Ben
and Richard and Matt and so on are thinking about and building.
Richard Loosemore: I am only saying that I see no particular limitations,
given the things that I know about how to build an AGI. That is the best I can
do.
Sorry to flood everybody's mailbox today; I will make this my last message.
I'm not looking to impose a viewpoint on anybody; you have
Matt Mahoney writes: As for AGI research, I believe the most viable path is a
distributed architecture that uses the billions of human brains and computers
already on the Internet. What is needed is an infrastructure that routes
information to the right experts and an economy that rewards
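The routing idea Matt describes can be caricatured in a few lines. What follows is my own minimal sketch, not anything from his actual design: the `Router` class, the expert names, and the naive keyword-overlap matching are all invented for illustration -- a real infrastructure would need natural-language understanding, the reward economy he mentions, and much else.

```python
from dataclasses import dataclass

@dataclass
class Expert:
    """A registered expert (human or program) and the topics it handles."""
    name: str
    keywords: set

class Router:
    """Routes each question to the registered expert whose topic
    keywords best overlap the words of the question."""

    def __init__(self):
        self.experts = []

    def register(self, expert):
        self.experts.append(expert)

    def route(self, question):
        words = set(question.lower().split())
        # Score each expert by keyword overlap with the question.
        best = max(self.experts,
                   key=lambda e: len(e.keywords & words),
                   default=None)
        if best is None or not (best.keywords & words):
            return None  # no expert matches at all
        return best.name

router = Router()
router.register(Expert("chess-engine", {"chess", "opening", "endgame"}))
router.register(Expert("weather-service", {"weather", "forecast", "rain"}))
print(router.route("what is the forecast for rain tomorrow"))  # weather-service
```

The point of the toy is only the shape of the system: intelligence lives in the experts at the edges, and the infrastructure's job is matching questions to them.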
Matt Mahoney writes: Super-google is nifty, but I don't see how it is AGI.
Because a super-google will answer these questions by routing them to
experts in those topics, who will use natural language within their narrow
domains of expertise. All of this can be done with existing technology and a