From: Matt Mahoney [mailto:[EMAIL PROTECTED]]
--- John G. Rose [EMAIL PROTECTED] wrote:
There is no way to know if we are living in a nested simulation, or even in a
single simulation. However, there is a mathematical model: enumerate all
Turing machines to find one that
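The snippet cuts off here, but the enumeration idea is a standard one: interleave, or "dovetail", the execution of all machines so that no single non-halting machine blocks the search. A minimal Python sketch of the pattern follows; the toy generator "machines" and the found() test are stand-ins of my own, not anything from the thread.

# Dovetailing over an infinite enumeration of machines: start machine 0, give it
# one step; start machine 1, give every started machine one more step; and so on.
# The generators below are toy stand-ins for enumerated Turing machines.

from itertools import count

def machine(i):
    """Toy stand-in for the i-th machine: yields successive 'outputs'."""
    x = 0
    while True:
        x += i              # each machine computes something different
        yield x

def found(i, output):
    """Hypothetical acceptance test for the machine we are searching for."""
    return output == 42 and i > 0

def dovetail():
    running = []                      # (index, generator) pairs started so far
    for n in count():                 # round n
        running.append((n, machine(n)))
        for i, m in running:          # one more step for every machine started
            out = next(m)
            if found(i, out):
                return i, out

print(dovetail())                     # terminates: finds a machine whose output hits 42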
--- John G. Rose [EMAIL PROTECTED] wrote:
From: Matt Mahoney [mailto:[EMAIL PROTECTED]]
The simulations can't loop because the simulator needs at least as much
memory as the machine being simulated.
You're making assumptions when you say that. Outside of a particular simulation we
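For concreteness, the memory claim quoted above is usually cashed out as a counting argument: a faithful simulator has to keep a complete copy of the simulated machine's state, plus its own bookkeeping, so it can never be smaller than what it simulates. A rough Python illustration, with toy classes of my own invention rather than anything from Matt's text:

import sys

class ToyMachine:
    """Stand-in for the machine being simulated: its state is just n bytes."""
    def __init__(self, n):
        self.state = bytearray(n)

class Simulator:
    """A faithful simulator carries a full copy of that state, plus bookkeeping."""
    def __init__(self, machine):
        self.inner_state = bytearray(machine.state)  # complete copy of the simulated state
        self.step_count = 0                          # the simulator's own overhead

inner = ToyMachine(10**6)
sim = Simulator(inner)
# The simulator's footprint is at least as large as the machine it simulates:
print(sys.getsizeof(sim.inner_state) >= sys.getsizeof(inner.state))  # True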
--- Mike Tintner [EMAIL PROTECTED] wrote:
How do you resolve disagreements?
This is a problem for all large databases and multiuser AI systems. In my
design, messages are identified by source (not necessarily a person) and a
timestamp. The network economy rewards those sources that provide
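A rough sketch of the message scheme described above; the field names and the reputation update are my own guesses for illustration, not taken from the actual design at http://www.mattmahoney.net/agi.html. The point is only that tagging every message with a source and a timestamp lets competing answers coexist while sources are scored over time.

import time
from dataclasses import dataclass, field

@dataclass
class Message:
    source: str        # identifier of the originating peer (not necessarily a person)
    body: str          # the content being asserted or asked
    timestamp: float = field(default_factory=time.time)

reputation = {}        # source -> cumulative score (reward scheme left open here)

def receive(msg, useful):
    """Credit or debit the source depending on whether its message proved useful."""
    reputation[msg.source] = reputation.get(msg.source, 0.0) + (1.0 if useful else -1.0)

receive(Message(source="peer-17", body="The sky is blue."), useful=True)
print(reputation)      # {'peer-17': 1.0}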
Matt Mahoney wrote:
--- Mike Tintner [EMAIL PROTECTED] wrote:
My point was how do you test the *truth* of items of knowledge. Google tests
the *popularity* of items. Not the same thing at all. And it won't work.
It does work because the truth is popular. Look at prediction markets. Look at
--- Ben Goertzel [EMAIL PROTECTED] wrote:
Of course what I imagine emerging from the Internet bears little resemblance
to Novamente. It is simply too big to invest in directly, but it will present
many opportunities.
But the emergence of superhuman AGIs like a Novamente may
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Perhaps you have not read my proposal at
http://www.mattmahoney.net/agi.html
or don't understand it.
Some of us have read it, and it has nothing whatsoever to do with
Artificial Intelligence. It is a labor-intensive
On Apr 9, 2008, at 12:33 PM, Derek Zahn wrote:
Matt Mahoney writes:
Just what do you want out of AGI? Something that thinks like a person or
something that does what you ask it to?
The "or" is interesting. If it really thinks like a person, and at at least
human level, then I doubt very
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Perhaps you have not read my proposal at
http://www.mattmahoney.net/agi.html
or don't understand it.
Some of us have read it, and it has nothing whatsoever to do with
Artificial Intelligence. It is a
I asked: Imagine we have an AGI. What exactly does it do? What *should* it do?
Note that I think I roughly understand Matt's vision for this: roughly, it is
google, and it will gradually get better at answering questions and taking
commands as more capable systems are linked in to the
Richard Loosemore: I am not sure I understand. There is every reason to
think that a currently-envisionable AGI would be millions of times smarter
than all of humanity put together. Simply build a human-level AGI, then get
it to bootstrap to a level of, say, a thousand times human speed
Derek Zahn wrote:
I asked:
Imagine we have an AGI. What exactly does it do? What *should* it do?
Note that I think I roughly understand Matt's vision for this: roughly,
it is google, and it will gradually get better at answering questions
and taking commands as more capable systems are
Derek Zahn wrote:
Richard Loosemore:
I am not sure I understand.
There is every reason to think that a currently-envisionable AGI would
be millions of times smarter than all of humanity put together.
Simply build a human-level AGI, then get it to bootstrap to a level of,
say, a
Samantha Atkins writes:
Beware the wish-granting genie conundrum.
Yeah, you put it better than I did; I'm not asking what wishes we'd ask a genie
to grant, I'm wondering specifically what we want from the machines that Ben
and Richard and Matt and so on are thinking about and building.
Richard Loosemore: I am only saying that I see no particular limitations,
given the things that I know about how to build an AGI. That is the best I can
do.
Sorry to flood everybody's mailbox today; I will make this my last message.
I'm not looking to impose a viewpoint on anybody; you have
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Just what do you want out of AGI? Something that thinks like a person or
something that does what you ask it to?
Either will do: your suggestion achieves neither.
If I ask your non-AGI the following question: How