@v2.listbox.com
Sent: Tuesday, April 24, 2007 2:48 AM
Subject: RE: [agi] How should an AGI ponder about mathematics
The more problematic issue is what happens if you non-destructively upload
your mind? What do you do with the original, which still considers itself you?
The upload also considers itself you and may suggest a bullet.
Matt Mahoney [EMAIL PROTECTED] wrote:
--- John G. Rose wrote:
A baby AGI
On Tue, Apr 24, 2007 at 07:09:22AM -0700, Eric B. Ramsay wrote:
The more problematic issue is what happens if you non-destructively
upload your mind? What do you do with the original which still
It's a theoretical problem for any of us on this list. Nondestructive
scans require medical
Your twin example is not a good choice. The upload will consider itself to have
a claim on the contents of your life - financial resources for example.
Eugen Leitl [EMAIL PROTECTED] wrote: On Tue, Apr 24, 2007 at 07:09:22AM
-0700, Eric B. Ramsay wrote:
The more problematic issue is what
--- Lukasz Stafiniak [EMAIL PROTECTED] wrote:
On 4/23/07, Matt Mahoney [EMAIL PROTECTED] wrote:
Ontic looks like an interesting and elegant formalism, but I don't see how it
would help an AGI learn mathematics. We are not yet at the point where we can
solve word problems like if I pay
Hi,
Adding some thoughts on AGI math - If the AGI or a sub processor of the AGI
is allotted time to sleep or idle process it could lazily postulate and
construct theorems with spare CPU cycles (cores are cheap nowadays), put
things together and use those theorems to further test the processing of
On Monday 23 April 2007 10:03, Matt Mahoney wrote:
... The brain is a billion times slower per step, has only about 7
words of short term memory, ...
For some appropriate meaning of "word" -- I'd suggest that "frame" might be
more useful in thinking about what's going on. One of Miller's magical
On 4/23/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
We really are pigs in space when it comes to discrete symbol manipulation such
as arithmetic or logic. It's actually harder (mentally) to do a
multiplication step such as 8*7=56 than to catch a Frisbee -- and I claim
I've learnt
--- Lukasz Stafiniak [EMAIL PROTECTED] wrote:
Perhaps CIC is simply too impractical.
Probably. Deriving multiplication from zero and S() is like computing m*n
using:

for (i = 0; i < m; ++i)
    for (j = 0; j < n; ++j)
        ++answer;
We don't expect children to derive arithmetic from axioms. We
On Monday 23 April 2007 15:40, Lukasz Stafiniak wrote:
... An AGI working with bigger numbers had better discover binary
numbers. Could an AGI do it? Could it discover rational numbers? (It
would initially believe that irrational numbers do not exist, as the early
Pythagoreans believed.)
A baby AGI has an immense advantage. It's starting (life?) after billions of
years of evolution and thousands of years of civilization. A 5-year-old child
can't float all languages, all science, all mathematics, all recorded
history, every encyclopedia, etc. in sub-millisecond RAM and be able to
John: Our brains are good - I mean, they are us - but aren't they just
biological blobs of goop that are half-assed excuses for intelligence? I
mean, why are AGIs coming about anyway? Is it because our brains are awesome
and fulfill all of our needs? No. We need to be
uploaded otherwise we
--- John G. Rose [EMAIL PROTECTED] wrote:
A baby AGI has an immense advantage. It's starting (life?) after billions of
years of evolution and thousands of years of civilization. A 5-year-old child
can't float all languages, all science, all mathematics, all recorded
history, every encyclopedia, etc.
He who refuses to do arithmetic is doomed to talk nonsense.
- John McCarthy
We're talking about relative numbers here. Suppose you had an AI algorithm
that was exactly as good as the one the human brain uses. In fact, let's
suppose you had one that was two orders of magnitude better,
Hmmm. Design a combinational logic circuit that has inputs a, b, and c, and
outputs not(a), not(b), and not(c) -- its function is just three parallel
inverters. But, while you may use as many AND and OR gates as you like, you
may only use at most two NOT gates.
Josh
On Monday 23 April 2007
On Monday 23 April 2007 19:45, Matt Mahoney wrote:
... How do you distinguish between consciousness (sense of self) and the
programmed belief in consciousness, free will, and fear of death that all
animals possess because it confers a survival advantage?
A distinction without a difference, I
1. They will probably create more problems than they fix... as usual. But
they should be able to assist man with his issues. Kind of like machines
did.
2. You would have to imagine an ideal pure intelligence and bridge the gap
somewhat.
1. What are your AGIs going to do with their
From biological conception to old age the mind changes quite a bit already.
Consciousness, sense of self, free will - all illusions. Fear of death - if
the mind agent lost it, perhaps it would choose to terminate unless something
else supported its intent to keep running
Look also at Ontic:
http://lambda-the-ultimate.org/classic/message6641.html
http://ttic.uchicago.edu/%7Edmcallester/ontic-spec.ps
http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/kr/systems/ontic/0.html
http://citeseer.ist.psu.edu/witty95ontic.html
Josh
On Saturday 21 April 2007
Ontic looks like an interesting and elegant formalism, but I don't see how it
would help an AGI learn mathematics. We are not yet at the point where we can
solve word problems like "if I pay for a $4.95 item with a $10 bill, how much
change should I get back?" Never mind the harder problem of
--- Lukasz Stafiniak [EMAIL PROTECTED] wrote:
How should an AGI think about formal mathematical ideas?
I think the hard problem is in learning how to apply it. For example, suppose
you say to an AGI, "Bob and Alice shared a $100 prize. How much did Bob
receive?" Mathematically, it is simple,
Well Matt, there's not only one hard problem!
NL understanding is hard, but theorem-proving is hard too, and
narrow-AI approaches have not succeeded at proving nontrivial theorems
except in very constrained domains...
I happen to think that both can be solved by the same sort of
architecture,
On 4/22/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Well Matt, there's not only one hard problem!
NL understanding is hard, but theorem-proving is hard too, and
narrow-AI approaches have not succeeded at proving nontrivial theorems
except in very constrained domains...
Verification of