RE: [agi] Thought experiment on informationally limited systems

2008-03-04 Thread David Clark
On 04/03/2008, Mike Tintner [EMAIL PROTECTED] wrote: David: I was specifically referring to your comment ending in BY ITSELF. Jeez, Will, the point of Artificial General

RE: [agi] Thought experiment on informationally limited systems

2008-03-04 Thread David Clark
From: Mike Tintner [EMAIL PROTECTED]: Will: Is generalising a skill logically the first thing that you need to make an AGI? Nope, the means and sufficient architecture

RE: [agi] Thought experiment on informationally limited systems

2008-03-03 Thread David Clark
From: Mike Tintner [EMAIL PROTECTED]: Jeez, Will, the point of Artificial General Intelligence is that it can start adapting to an unfamiliar

Re: [agi] Thought experiment on informationally limited systems

2008-03-03 Thread Mike Tintner
, no? -Original Message- From: Mike Tintner [EMAIL PROTECTED]: Jeez, Will, the point of Artificial General Intelligence is that it can start adapting to an unfamiliar situation

Re: [agi] Thought experiment on informationally limited systems

2008-03-03 Thread Mike Tintner
Will: Is generalising a skill logically the first thing that you need to make an AGI? Nope, the means and sufficient architecture to acquire skills and competencies are more useful early on in an AGI development. Ah, you see, that's where I absolutely disagree, and a good part of why I'm

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread William Pearson
On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: On 2/28/08, William Pearson [EMAIL PROTECTED] wrote: Note I want something different from computational universality. E.g. Von Neumann architectures are generally programmable, Harvard architectures aren't, as they can't be

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread William Pearson
On 28/02/2008, Mike Tintner [EMAIL PROTECTED] wrote: You must first define its existing skills, then define the new challenge with some degree of precision - then explain the principles by which it will extend its skills. It's those principles of extension/generalization that are the

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread Mike Tintner
Jeez, Will, the point of Artificial General Intelligence is that it can start adapting to an unfamiliar situation and domain BY ITSELF. And your FIRST and only response to the problem you set was to say: I'll get someone to tell it what to do. IOW you simply avoided the problem and thought

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread wannabe
One thing I would expect from an AGI is that at least it would be able to Google for something that might talk about how to do whatever it needs, and to have available library references on the subject. Being able to follow and interpret written instructions takes a lot of intelligence in

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread Mike Tintner
Yes of course an AGI will make mistakes - and sometimes fail - in adapting. I say that v. explicitly. But your other point also skirts the problem - which is that the AGI must first identify what it needs to adapt to, before it can start googling/asking for advice. I think we need better to

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread William Pearson
On 02/03/2008, Mike Tintner [EMAIL PROTECTED] wrote: Jeez, Will, the point of Artificial General Intelligence is that it can start adapting to an unfamiliar situation and domain BY ITSELF. And your FIRST and only response to the problem you set was to say: I'll get someone to tell it what

Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread eldras
- Original Message - From: William Pearson [EMAIL PROTECTED] Date: Sun, 2 Mar 2008 23:04:27 + On 02/03/2008, Mike Tintner [EMAIL PROTECTED] wrote: Jeez, Will, the point of Artificial General

Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread YKY (Yan King Yin)
On 2/28/08, William Pearson [EMAIL PROTECTED] wrote: I'm going to try and elucidate my approach to building an intelligent system, in a roundabout fashion. This is the problem I am trying to solve. Imagine you are designing a computer system to solve an unknown problem, and you have these

Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread Bob Mottram
I guess the first thing you would need for an Unknown Problem Solver would be some way to determine usefulness. To be able to achieve some goal the system may need measures of usefulness which span intermediate stages towards the goal, or which are stacked in a series. If the system has no idea
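Mottram's point about usefulness measures that span intermediate stages towards the goal can be illustrated with a minimal sketch. The task, function names, and numeric "closeness" measure below are my own illustrative choices, not something from the thread: without an intermediate measure, a solver can only recognise the final goal; with one, it can rank partial states and climb toward the goal step by step.

```python
def solve(start, goal, steps, usefulness, max_iters=100):
    """Greedy search: repeatedly apply whichever step the intermediate
    usefulness measure scores highest, until the goal is reached."""
    state = start
    for _ in range(max_iters):
        if state == goal:
            return state
        # Score every successor state by the intermediate measure.
        state = max((step(state) for step in steps), key=usefulness)
    return state

# Toy task: reach 20 from 0 using three arithmetic moves.
steps = [lambda x: x + 3, lambda x: x - 1, lambda x: x * 2]
goal = 20
# The measure spans every stage of the search, not just the end state.
usefulness = lambda x: -abs(goal - x)
print(solve(0, goal, steps, usefulness))  # → 20
```

With only a goal test (usefulness defined as 1 at the goal and 0 elsewhere), the same search has no gradient to follow at intermediate states, which is the failure mode Mottram describes.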

Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread William Pearson
On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: On 2/28/08, William Pearson [EMAIL PROTECTED] wrote: I'm going to try and elucidate my approach to building an intelligent system, in a roundabout fashion. This is the problem I am trying to solve. Imagine you are designing

Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread YKY (Yan King Yin)
On 2/28/08, William Pearson [EMAIL PROTECTED] wrote: Note I want something different from computational universality. E.g. Von Neumann architectures are generally programmable, Harvard architectures aren't, as they can't be reprogrammed at run time. It seems that you want to build the AGI from
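Pearson's distinction can be sketched with a toy interpreter (the instruction names and flag below are hypothetical, purely for illustration): in a von Neumann-style machine, code and data share one memory, so a running program can rewrite its own instructions; in a Harvard-style machine, the instruction store is effectively read-only at run time.

```python
def run(program, writable_code=True):
    """Tiny interpreter over a shared instruction/data memory.
    Instructions: ("PRINT", text) and ("REWRITE", (index, new_instruction))."""
    memory = list(program)  # one store for code; REWRITE treats it as data
    out = []
    pc = 0
    while pc < len(memory):
        op, arg = memory[pc]
        if op == "PRINT":
            out.append(arg)
        elif op == "REWRITE":
            if not writable_code:
                # Harvard-style: instruction memory cannot be written.
                raise PermissionError("code memory is read-only")
            idx, new_instr = arg
            memory[idx] = new_instr  # self-modification at run time
        pc += 1
    return out

prog = [
    ("REWRITE", (1, ("PRINT", "patched"))),  # overwrite the next instruction
    ("PRINT", "original"),
]
print(run(prog))                    # von Neumann-style: ['patched']
# run(prog, writable_code=False)    # Harvard-style: raises PermissionError
```

Both machines are computationally capable of running the program, which is why Pearson says he wants something different from computational universality: the property at stake is whether the system can change its own program while running.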

Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread Mike Tintner
WP: I'm going to try and elucidate my approach to building an intelligent system, in a roundabout fashion. This is the problem I am trying to solve. Marks for at least trying to identify an AGI problem. I can't recall anyone else doing so - which, to repeat, I think is appalling. But I

Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread Vladimir Nesov
On Thu, Feb 28, 2008 at 3:20 PM, William Pearson [EMAIL PROTECTED] wrote: On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: Generally programmable, yes. But that's very broad. Many systems have this property. Note I want something different than computational

Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread Matt Mahoney
--- William Pearson [EMAIL PROTECTED] wrote: I'm going to try and elucidate my approach to building an intelligent system, in a roundabout fashion. This is the problem I am trying to solve. Imagine you are designing a computer system to solve an unknown problem, and you have these