To: agi@v2.listbox.com
Subject: Re: [agi] Thought experiment on informationally limited systems
On 04/03/2008, Mike Tintner [EMAIL PROTECTED] wrote:
David: I was specifically referring to your comment ending in BY
ITSELF.
Jeez, Will, the point of Artificial General ...
From: Mike Tintner [mailto:[EMAIL PROTECTED]]
Sent: March-03-08 8:48 PM
Will: Is generalising a skill logically the first thing that you need to
make an AGI? Nope, the means and sufficient architecture ...
-----Original Message-----
From: Mike Tintner [mailto:[EMAIL PROTECTED]]
Sent: March-02-08 5:36 AM
Jeez, Will, the point of Artificial General Intelligence is that it can
start adapting to an unfamiliar situation, no?
Will: Is generalising a skill logically the first thing that you need to
make an AGI? Nope, the means and sufficient architecture to acquire
skills and competencies are more useful early on in AGI development.
Ah, you see, that's where I absolutely disagree, and a good part of why I'm ...
On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
On 2/28/08, William Pearson [EMAIL PROTECTED] wrote:
Note I want something different than computational universality. E.g.
Von Neumann architectures are generally programmable; Harvard
architectures aren't, as they can't be ...
On 28/02/2008, Mike Tintner [EMAIL PROTECTED] wrote:
You must first define its existing skills, then define the new challenge
with some degree of precision - then explain the principles by which it will
extend its skills. It's those principles of extension/generalization that
are the ...
Jeez, Will, the point of Artificial General Intelligence is that it can
start adapting to an unfamiliar situation and domain BY ITSELF. And your
FIRST and only response to the problem you set was to say: I'll get someone
to tell it what to do.
IOW you simply avoided the problem and thought ...
One thing I would expect from an AGI is that at least it would be able
to Google for something that might talk about how to do whatever it
needs, and to have available library references on the subject. Being
able to follow and interpret written instructions takes a lot of
intelligence in ...
Yes of course an AGI will make mistakes - and sometimes fail - in adapting.
I say that very explicitly.
But your other point also skirts the problem - which is that the AGI must
first identify what it needs to adapt to before it can start
googling/asking for advice.
I think we need better to ...
----- Original Message -----
From: William Pearson [EMAIL PROTECTED]
To: agi@v2.listbox.com
Subject: Re: [agi] Thought experiment on informationally limited systems
Date: Sun, 2 Mar 2008 23:04:27 +
On 02/03/2008, Mike Tintner [EMAIL PROTECTED] wrote:
Jeez, Will, the point of Artificial General ...
On 2/28/08, William Pearson [EMAIL PROTECTED] wrote:
I'm going to try and elucidate my approach to building an intelligent
system, in a roundabout fashion. This is the problem I am trying to
solve.
Imagine you are designing a computer system to solve an unknown
problem, and you have these ...
I guess the first thing you would need for an Unknown Problem Solver
would be some way to determine usefulness. To be able to achieve
some goal the system may need measures of usefulness which span
intermediate stages towards the goal, or which are stacked in a
series.
If the system has no idea ...
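One way to read "measures of usefulness which span intermediate stages towards the goal, or which are stacked in a series" is as a scoring function that gives partial credit before the goal is reached. A minimal Python sketch of that idea (my own illustration, not code from the thread; the milestone values and the equal-slice scoring are made-up assumptions):

```python
# Illustrative sketch: stack several "usefulness" measures so the system
# gets a graded signal on the way to a goal it cannot yet reach, instead
# of a single pass/fail judgement at the end.

def usefulness(state, goal, milestones):
    """Score a state: full credit at the goal, partial credit per milestone."""
    if state >= goal:
        return 1.0
    # each milestone reached contributes an equal slice of the score
    reached = sum(1 for m in milestones if state >= m)
    return reached / (len(milestones) + 1)

milestones = [25, 50, 75]   # intermediate stages toward goal = 100
print(usefulness(10, 100, milestones))   # -> 0.0  (no stage reached yet)
print(usefulness(60, 100, milestones))   # -> 0.5  (two of four slices)
print(usefulness(100, 100, milestones))  # -> 1.0  (goal achieved)
```

Each milestone reached moves the score up a fixed slice, so the system gets a gradient toward a distant goal rather than silence until success.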
On 2/28/08, William Pearson [EMAIL PROTECTED] wrote:
Note I want something different than computational universality. E.g.
Von Neumann architectures are generally programmable; Harvard
architectures aren't, as they can't be reprogrammed at run time.
It seems that you want to build the AGI from ...
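Pearson's Von Neumann/Harvard distinction can be made concrete with a toy interpreter whose program lives in the same writable memory it operates on, so one instruction can rewrite a later one at run time. A minimal Python sketch (my own illustration, not code from the thread; the opcode names are invented):

```python
# Illustrative sketch: a toy "Von Neumann" machine keeps its program in
# writable memory, so an instruction can overwrite a later instruction
# at run time. A "Harvard" machine would hold the program in separate,
# read-only storage, making the "rewrite" opcode impossible.

def run(memory, steps=10):
    """Execute a tiny instruction list stored in writable memory."""
    pc = 0
    output = []
    while pc < len(memory) and steps > 0:
        op, arg = memory[pc]
        if op == "emit":
            output.append(arg)
        elif op == "rewrite":          # self-modification: program edits itself
            target, new_instr = arg
            memory[target] = new_instr
        pc += 1
        steps -= 1
    return output

# The instruction at index 1 rewrites index 2 before it is reached.
program = [
    ("emit", "a"),
    ("rewrite", (2, ("emit", "patched"))),
    ("emit", "b"),                     # never runs as originally written
]
print(run(program))  # -> ['a', 'patched']
```

Under a Harvard reading, the instruction store could not be a target of writes, so the program the machine starts with is the program it keeps.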
WP: I'm going to try and elucidate my approach to building an intelligent
system, in a round about fashion. This is the problem I am trying to
solve.
Marks for at least trying to identify an AGI problem. I can't recall anyone
else doing so - which, to repeat, I think is appalling.
But I ...
On Thu, Feb 28, 2008 at 3:20 PM, William Pearson [EMAIL PROTECTED] wrote:
On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
Generally programmable, yes. But that's very broad. Many systems have
this property.
Note I want something different than computational ...