On Sun, Aug 24, 2008 at 7:28 AM, Terren Suydam [EMAIL PROTECTED] wrote:
--- On Sat, 8/23/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
But you can have an AI that has a bootstrapping mechanism that tells it
where to look for goal content, tells it to absorb it and embrace it.
Yes, but in
--- On Sun, 8/24/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
What do you mean by "does not structure"? What do you mean by "fully"
or "not fully embodied"?
I've already discussed what I mean by embodiment in a previous post, the one
that immediately preceded the post you initially responded to.
Unfortunately, I didn't make a mistake last November after all...
However, I do believe that a solution is feasible, although it may
involve something like an advanced multiple-level
greatest-common-divisors theorem. The problem looks to me like it may
be related to the problem of a
On Sat, Aug 23, 2008 at 8:53 AM, Eric Burton [EMAIL PROTECTED] wrote:
Stupid fundamentalist troll garbage
Listbox has rules about using abusive language. And technically you
could be sued for that as well, although I doubt a case like that
could be won.
Jim Bromer
On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam [EMAIL PROTECTED] wrote:
Did you read CFAI? At least it dispels the mystique and ridicule of
provable Friendliness and shows what kind of things are relevant for
its implementation. You don't really want to fill the universe with
paperclips,
Just a very rough first thought: an essential requirement of an AGI is
surely that it must be able to play. So how would you design a play
machine - a machine that can play around as a child does?
You can rewrite the brief as you choose, but my first thought is that it
should be able to play
Intolerance of another person's ideas through intimidation or ridicule
is intellectual repression. You won't elevate a discussion by
promoting a program of anti-intellectual repression. Intolerance of a
person for his religious beliefs is a form of intellectual
intolerance. There are times when we
Valentina wrote:
Sorry if I'm commenting a little late to this; I just read the thread.
Here is a question. I assume we all agree that intelligence can be
defined as the ability to achieve goals. My question concerns the
establishment of those goals. As human beings we move in a world of
limitations
Eric Burton [EMAIL PROTECTED] wrote:
These have profound impacts on AGI design. First, AIXI is (provably)
not computable, which means there is no easy shortcut to AGI. Second,
universal intelligence is not computable because it requires testing
in an infinite number of environments. Since
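[For readers following the computability point: the universal
intelligence measure mentioned above is usually given in the
Legg-Hutter formulation. The notation below is the standard one from
their work, not something defined in this thread:

```latex
% Universal intelligence of an agent \pi: expected total reward
% across all computable environments \mu in the class E, weighted
% by 2^{-K(\mu)}, where K is Kolmogorov complexity.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Since K is not computable and E is infinite, the sum cannot be
evaluated exactly, which is the incomputability being cited.]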
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, August 24, 2008 2:46 PM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The
Necessity of Embodiment)
I have challenged this list as well as the singularity and SL4 lists