RE: [agi] Thought experiment on informationally limited systems

2008-03-04 Thread David Clark
Is an AGI necessarily superhuman, or is the equivalent of a smart human
good enough?

I rarely find humans who adapt by themselves to unfamiliar situations.  Take
a random person and place them in some remote location without adequate
training, and how long would they last?  A week, maybe.  I wouldn't call that
adapting very well to an unfamiliar situation, would you?

Most humans (99.999% IMHO) never create anything absolutely new, in terms of
human knowledge, so why would this be a criterion for an AGI?

I agree that an AGI must be able to learn.  I agree an AGI must be able to
reason and solve problems without just resorting to a stored lookup table.
BUT I don't agree that an AGI has to create itself, when it is obvious that
humans can't either.  Even though Ben believes in emergent intelligence, he
has always said that training by humans, at least to some level, is
absolutely necessary.

Even though I agree that generalizing is a very desirable quality for an
AGI, is this property necessary for creating an AGI?  Most people don't
generalize all that well, in my opinion.

David Clark

 -Original Message-
 From: William Pearson [mailto:[EMAIL PROTECTED]
 Sent: March-03-08 8:21 PM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Thought experiment on informationally limited
 systems
 
 On 04/03/2008, Mike Tintner [EMAIL PROTECTED] wrote:
  David: I was specifically referring to your comment ending in BY ITSELF.

    Jeez, Will, the point of Artificial General Intelligence is that it
    can start adapting to an unfamiliar situation and domain BY ITSELF.

   I believe this statement is just plain incorrect.

  David,

  I find that extraordinary, but I accept your sincerity. The definition I
  gave is an essential part of an AGI - if it can't adapt by itself
  sometimes to unfamiliar situations (as humans can), and can only act on
  others' instructions, then it's a narrow AI. I wonder whether anyone else
  shares your view.
 
 There are a number of threads here that need disentangling. All these
 answers are my opinion only.

 Is a system that can adapt by itself to unfamiliar situations necessary
 for AGI? I would answer yes.

 Is it the only thing an AGI needs to be able to do? No. If I had a system
 that could build houses out of bricks, stones and straw, it would not be
 an AGI if it could not be taught, or learn, cryptography. General, for me,
 means the ability to learn many different skills, including working on
 its own.

 Is generalising a skill logically the first thing that you need to make
 an AGI? Nope; the means and sufficient architecture to acquire skills and
 competencies are more useful early on in AGI development. I see
 generalising, in the way you talk about it, as a skill that can be
 acquired and improved upon. We can certainly change our ability to do so
 throughout our lifetimes. If a skill can be changed and/or improved, then
 something in the system must be changed - either data, program or
 something else. It is the manner and nature of these changes, very
 low-level subconscious stuff (google neuroplasticity for what I am
 talking about in humans), that I think needs to be worked on first.
 Otherwise you are going to create a static and crystalline system.
 
  Will Pearson
 


RE: [agi] Thought experiment on informationally limited systems

2008-03-04 Thread David Clark
YOU said teaching an AGI was cheating.

YOU want others to talk straight to your points and issues, but you make it
look like I don't think much of generalizing in an AGI design.  The fact is
that generalizing is at the heart of my design, and it is a totally different
issue from whether training is cheating.

Ask the question simply:

Do the people on this list think that training is necessary for the creation
of an AGI and would they call training the AGI cheating?

You say that training means cheating, and that to create an AGI you can't do
that.  I disagree.

David Clark

 -Original Message-
 From: Mike Tintner [mailto:[EMAIL PROTECTED]
 Sent: March-03-08 8:48 PM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Thought experiment on informationally limited
 systems
 
 Will: Is generalising a skill logically the first thing that you need to
 make an AGI? Nope, the means and sufficient architecture to acquire
 skills and competencies are more useful early on in an agi development

 Ah, you see, that's where I absolutely disagree, and a good part of why
 I'm hammering on the way I am. I don't think many (anyone?) will agree
 with David, but many if not everyone will agree with you.

 Yes, the problem of generalising is the very first thing you tackle, and
 should shape everything you do - at least once you have moved beyond idle
 thought to serious engagement.

 If you're trying to develop a new electric battery, you look for that new
 chemical first (assuming that's what you reckon you'll need) - you don't
 start looking at the casing or other aspects of the battery. Anything
 peripheral you do first may be rendered totally irrelevant later on when
 you do discover that chemical and a total waste of time.

 And such, I'm sure, is the case with AGI. That central problem of
 generalising demands a total new mentality - a sea-change of approach.

 (You saw an example in my exchange with YKY. I think - in fact, I'm just
 about totally certain - that generalising demands a system of open-ended
 concepts like ours. Because he isn't directly concerned with the
 generalising problem, he wants a closed-ended, unambiguous language -
 which is in fact only suitable for narrow AI and, I would argue, a waste
 of time).

 P.S. It's a bit sad - you started this thread with a generalising
 problem, now you're backtracking on it.
 
 


RE: [agi] Thought experiment on informationally limited systems

2008-03-03 Thread David Clark
How intelligent would any human be if they couldn't be taught by other humans?

Could a human ever learn to speak by themselves?  The few times this has
happened in real life, the person was left permanently disabled and not
capable of becoming a normal human being.

If humans can't become human without the help of other humans, why should
this be a criterion for an AGI?

David Clark

PS: I am not suggesting that explicitly programming 100% of an AGI is either
doable or desirable, but some degree of detailed teaching must be a
requirement for everyone on this list who dreams of creating an AGI, no?

 -Original Message-
 From: Mike Tintner [mailto:[EMAIL PROTECTED]
 Sent: March-02-08 5:36 AM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Thought experiment on informationally limited
 systems
 
 Jeez, Will, the point of Artificial General Intelligence is that it can
 start adapting to an unfamiliar situation and domain BY ITSELF.  And
 your
 FIRST and only response to the problem you set was to say: I'll get
 someone
 to tell it what to do.
 
 IOW you simply avoided the problem and thought only of cheating. What a
 solution, or merest idea for a solution, must do is tell me how that
 intelligence will start adapting by itself  - will generalize from its
 existing skills to cross over domains.
 
 Then, as my answer indicated, it may well have to seek some
 instructions and
 advice - especially and almost certainly  if it wants to acquire a
 whole new
 major skill, as we do, by taking courses etc.
 
 But a general intelligence should be able to adapt to some unfamiliar
 situations entirely by itself - like perhaps your submersible
 situation. No
 guarantee that it will succeed in any given situation, (as there isn't
 with
 us), but you should be able to demonstrate its power to adapt
 sometimes.
 
 In a sense, you should be appalled with yourself that you didn't try to
 tackle the problem - to produce a cross-over idea. But since
 literally no
 one else in the field of AGI has the slightest cross-over idea - i.e.
 is
 actually tackling the problem of AGI, - and the whole culture is one of
 avoiding the problem, it's to be expected. (You disagree - show me one,
 just
 one, cross-over idea anywhere. Everyone will give you a v.
 detailed,impressive timetable for how long it'll take them to produce
 such
 an idea, they just will never produce one. Frankly, they're too
 scared).
 
 
 Mike Tintner [EMAIL PROTECTED] wrote:
 
   You must first define its existing skills, then define the new
 challenge
   with some degree of precision - then explain the principles by
 which it
  will
   extend its skills. It's those principles of
 extension/generalization
  that
   are the be-all and end-all, (and NOT btw, as you suggest, any
 helpful
  info
   that the robot will receive - that,sir, is cheating - it has to
 work
  these
   things out for itself - although perhaps it could *ask* for info).
 
 
  Why is that cheating? Would you never give instructions to a child
  about what to do? Taking instuctions is something that all
  intelligences need to be able to do, but it should be attempted to be
  minimised. I'm not saying it should take instructions unquestioningly
  either, ideally it should figure out whether the instructions you
 give
  are any use for it.
 
   Will Pearson
 
 
 
 


Re: [agi] Thought experiment on informationally limited systems

2008-03-03 Thread Mike Tintner

Yes, an AGI will have to be able to do narrow AI.

What you are doing here - and what everyone is doing over and over and over - 
is saying: Yes, I know there's a hard part to AGI, but can I please 
concentrate on the easy parts - the narrow AI parts - first?


If I give you a problem, I don't want to know whether you can take dictation 
and spell; I just want to know whether you can solve the problem - and not 
make excuses, or create distractions.


It's simple - do you have any ideas about the problem of AGI - ideas for 
generalizing skills (see below) -  cross-over ideas - or not?


David:


How intelligent would any human be if it couldn't be taught by other 
humans?


Could a human ever learn to speak by itself?  The few times this has
happened in real life, the person was permanently disabled and not capable
of becoming a normal human being.

If humans can't become human without the help of other humans, why should
this is a criteria for AGI?

David Clark

PS I am not suggesting that explicitly programming 100% of an AGI is 
either

doable or desirable but some degree of detailed teaching must be a
requirement for all on this list who dream of creating an AGI, no?


-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: March-02-08 5:36 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Thought experiment on informationally limited
systems

Jeez, Will, the point of Artificial General Intelligence is that it can
start adapting to an unfamiliar situation and domain BY ITSELF.  And
your
FIRST and only response to the problem you set was to say: I'll get
someone
to tell it what to do.

IOW you simply avoided the problem and thought only of cheating. What a
solution, or merest idea for a solution, must do is tell me how that
intelligence will start adapting by itself  - will generalize from its
existing skills to cross over domains.

Then, as my answer indicated, it may well have to seek some
instructions and
advice - especially and almost certainly  if it wants to acquire a
whole new
major skill, as we do, by taking courses etc.

But a general intelligence should be able to adapt to some unfamiliar
situations entirely by itself - like perhaps your submersible
situation. No
guarantee that it will succeed in any given situation, (as there isn't
with
us), but you should be able to demonstrate its power to adapt
sometimes.

In a sense, you should be appalled with yourself that you didn't try to
tackle the problem - to produce a cross-over idea. But since
literally no
one else in the field of AGI has the slightest cross-over idea - i.e.
is
actually tackling the problem of AGI, - and the whole culture is one of
avoiding the problem, it's to be expected. (You disagree - show me one,
just
one, cross-over idea anywhere. Everyone will give you a v.
detailed,impressive timetable for how long it'll take them to produce
such
an idea, they just will never produce one. Frankly, they're too
scared).


Mike Tintner [EMAIL PROTECTED] wrote:

  You must first define its existing skills, then define the new
challenge
  with some degree of precision - then explain the principles by
which it
 will
  extend its skills. It's those principles of
extension/generalization
 that
  are the be-all and end-all, (and NOT btw, as you suggest, any
helpful
 info
  that the robot will receive - that,sir, is cheating - it has to
work
 these
  things out for itself - although perhaps it could *ask* for info).


 Why is that cheating? Would you never give instructions to a child
 about what to do? Taking instuctions is something that all
 intelligences need to be able to do, but it should be attempted to be
 minimised. I'm not saying it should take instructions unquestioningly
 either, ideally it should figure out whether the instructions you
give
 are any use for it.

  Will Pearson






Re: [agi] Thought experiment on informationally limited systems

2008-03-03 Thread Mike Tintner

Will:Is generalising a skill logically the first thing that you need to
make an AGI? Nope, the means and sufficient architecture to acquire
skills and competencies are more useful early on in an agi
development

Ah, you see, that's where I absolutely disagree, and a good part of why I'm 
hammering on the way I am. I don't think many (anyone?) will agree with 
David, but many if not everyone will agree with you.


Yes, the problem of generalising is the very first thing you tackle, and 
should shape everything you do - at least once you have moved beyond idle 
thought to serious engagement.


If you're trying to develop a new electric battery, you look for that new 
chemical first (assuming that's what you reckon you'll need) - you don't 
start looking at the casing or other aspects of the battery. Anything 
peripheral you do first may be rendered totally irrelevant - and a total 
waste of time - later on, when you do discover that chemical.


And such, I'm sure, is the case with AGI. That central problem of 
generalising demands a totally new mentality - a sea-change of approach.


(You saw an example in my exchange with YKY. I think - in fact, I'm just 
about totally certain - that generalising demands a system of open-ended 
concepts like ours. Because he isn't directly concerned with the 
generalising problem, he wants a closed-ended, unambiguous language - which 
is in fact only suitable for narrow AI and, I would argue, a waste of time).


P.S. It's a bit sad - you started this thread with a generalising problem, 
now you're backtracking on it. 





Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread William Pearson
On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:


 On 2/28/08, William Pearson [EMAIL PROTECTED] wrote:
  Note I want something different than computational universality. E.g.
  Von Neumann architectures are generally programmable, Harvard
  architectures aren't. As they can't be reprogrammed at run time.

 It seems that you want to build the AGI from the programming level.
 This is in contrast to John MacCarthy's declarative paradigm.  Your
 approach offers more flexibility (perhaps maximum flexibility), but may not
 make AGI easier to build.  Learning, in your case, is a matter of
 algorithmic learning.  It may be harder / less efficient than logic-based
 learning.


Algorithmic learning is hard. But just because the system is based upon
programs as its lowest-level representation does not mean that all learning
is going to be algorithmic learning. It is possible to have programs that
learn in any fashion within the system. If it makes sense in the system, you
could have a logic-based learning program. It will just be in competition
with other learners to see which is the most useful for the system.

  Will Pearson
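
A minimal sketch, in Python, of the kind of competition between learners
described above: several learner programs (say, a rule-induction, "logic
based" learner alongside any other kind) are trained on the same data,
scored on held-out examples, and the most useful one is selected. The
interface, the scoring and the data format are illustrative assumptions,
not part of Pearson's design.

from typing import Callable, Dict, List, Tuple

Example = Tuple[Dict[str, int], str]           # (features, label)
# A learner takes training examples and returns a prediction function.
LearnerFn = Callable[[List[Example]], Callable[[Dict[str, int]], str]]

def select_most_useful(learners: Dict[str, LearnerFn],
                       data: List[Example]) -> Tuple[str, float]:
    """Train every learner, score each on held-out examples, return the best."""
    split = len(data) // 2
    train, held_out = data[:split], data[split:]
    scores = {}
    for name, fit in learners.items():
        predict = fit(train)                   # each learner learns in its own way
        hits = sum(predict(x) == y for x, y in held_out)
        scores[name] = hits / max(1, len(held_out))
    best = max(scores, key=scores.get)
    return best, scores[best]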



Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread William Pearson
On 28/02/2008, Mike Tintner [EMAIL PROTECTED] wrote:

  You must first define its existing skills, then define the new challenge
  with some degree of precision - then explain the principles by which it will
  extend its skills. It's those principles of extension/generalization that
  are the be-all and end-all, (and NOT btw, as you suggest, any helpful info
  that the robot will receive - that,sir, is cheating - it has to work these
  things out for itself - although perhaps it could *ask* for info).


 Why is that cheating? Would you never give instructions to a child
 about what to do? Taking instructions is something that all
 intelligences need to be able to do, but it should be minimised. I'm
 not saying it should take instructions unquestioningly either; ideally
 it should figure out whether the instructions you give are any use to it.

  Will Pearson



Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread Mike Tintner
Jeez, Will, the point of Artificial General Intelligence is that it can 
start adapting to an unfamiliar situation and domain BY ITSELF.  And your 
FIRST and only response to the problem you set was to say: I'll get someone 
to tell it what to do.


IOW you simply avoided the problem and thought only of cheating. What a 
solution, or the merest idea for a solution, must do is tell me how that 
intelligence will start adapting by itself - will generalize from its 
existing skills to cross over domains.


Then, as my answer indicated, it may well have to seek some instructions and 
advice - especially and almost certainly  if it wants to acquire a whole new 
major skill, as we do, by taking courses etc.


But a general intelligence should be able to adapt to some unfamiliar 
situations entirely by itself - like perhaps your submersible situation. No 
guarantee that it will succeed in any given situation, (as there isn't with 
us), but you should be able to demonstrate its power to adapt sometimes.


In a sense, you should be appalled with yourself that you didn't try to 
tackle the problem - to produce a cross-over idea. But since literally no 
one else in the field of AGI has the slightest cross-over idea - i.e. is 
actually tackling the problem of AGI - and the whole culture is one of 
avoiding the problem, it's to be expected. (You disagree - show me one, just 
one, cross-over idea anywhere. Everyone will give you a v. detailed, 
impressive timetable for how long it'll take them to produce such an idea; 
they just will never produce one. Frankly, they're too scared.)



Mike Tintner [EMAIL PROTECTED] wrote:



 You must first define its existing skills, then define the new challenge
 with some degree of precision - then explain the principles by which it 
will
 extend its skills. It's those principles of extension/generalization 
that
 are the be-all and end-all, (and NOT btw, as you suggest, any helpful 
info
 that the robot will receive - that,sir, is cheating - it has to work 
these

 things out for itself - although perhaps it could *ask* for info).



Why is that cheating? Would you never give instructions to a child
about what to do? Taking instuctions is something that all
intelligences need to be able to do, but it should be attempted to be
minimised. I'm not saying it should take instructions unquestioningly
either, ideally it should figure out whether the instructions you give
are any use for it.

 Will Pearson







Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread wannabe
One thing I would expect from an AGI is that at least it would be able
to Google for something that might talk about how to do whatever it
needs, and to have available library references on the subject.  Being
able to follow and interpret written instructions takes a lot of
intelligence in itself.  And a lot of times there are important
conventions about how to do certain things, and it is a bad idea to
just do things completely your own way.


Of course, we do expect an intelligent agent to often be able to  
figure things out on its own.  But you have to remember that when you  
allow for this, there will sometimes be mistakes.  It is a necessary  
consequence of trying something new that it won't always work.  And I  
am afraid that people have an unrealistic expectation that the AGI  
will be able to do something new without ever getting it wrong.  I'm  
expecting there will be a lot of pain and disappointment when it  
doesn't work this way.  Because it can't work that way.  An AGI  
working in unknown territory will have to make mistakes.

Andi


Quoting Mike Tintner [EMAIL PROTECTED]:


Jeez, Will, the point of Artificial General Intelligence is that it can
start adapting to an unfamiliar situation and domain BY ITSELF.  And
your FIRST and only response to the problem you set was to say: I'll
get someone to tell it what to do.

IOW you simply avoided the problem and thought only of cheating. What a
solution, or merest idea for a solution, must do is tell me how that
intelligence will start adapting by itself  - will generalize from its
existing skills to cross over domains.

Then, as my answer indicated, it may well have to seek some
instructions and advice - especially and almost certainly  if it wants
to acquire a whole new major skill, as we do, by taking courses etc.

But a general intelligence should be able to adapt to some unfamiliar
situations entirely by itself - like perhaps your submersible
situation. No guarantee that it will succeed in any given situation,
(as there isn't with us), but you should be able to demonstrate its
power to adapt sometimes.

In a sense, you should be appalled with yourself that you didn't try to
tackle the problem - to produce a cross-over idea. But since
literally no one else in the field of AGI has the slightest
cross-over idea - i.e. is actually tackling the problem of AGI, - and
the whole culture is one of avoiding the problem, it's to be expected.
(You disagree - show me one, just one, cross-over idea anywhere.
Everyone will give you a v. detailed,impressive timetable for how long
it'll take them to produce such an idea, they just will never produce
one. Frankly, they're too scared).


Mike Tintner [EMAIL PROTECTED] wrote:



You must first define its existing skills, then define the new challenge
with some degree of precision - then explain the principles by   
which it will

extend its skills. It's those principles of extension/generalization that
are the be-all and end-all, (and NOT btw, as you suggest, any helpful info
that the robot will receive - that,sir, is cheating - it has to work these
things out for itself - although perhaps it could *ask* for info).



Why is that cheating? Would you never give instructions to a child
about what to do? Taking instuctions is something that all
intelligences need to be able to do, but it should be attempted to be
minimised. I'm not saying it should take instructions unquestioningly
either, ideally it should figure out whether the instructions you give
are any use for it.

Will Pearson






Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread Mike Tintner
Yes of course an AGI will make mistakes - and sometimes fail - in adapting. 
I say that v. explicitly.


But your other point also skirts the problem - which is that the AGI must 
first identify what it needs to adapt to, before it can start 
googling/asking for advice.


I think we need to focus the problem better. I gave, I think, a good example. 
How is a system going to build a wall if the only materials it knows to 
build a wall with - bricks - are unavailable?


You might care to focus the submersible problem. Let's say it is that the 
submersible finds it cannot rise - it just won't go upwards. I'm not v. 
mechanical so you guys can perhaps flesh it out. Something is preventing it 
from rising, but all the obvious things are functioning OK - define what it 
knows, A-X,  define what the mysterious problem is, Z, and then you have a 
true AGI problem - how does it generalize from A-X or any other knowledge to 
Z (a v. different domain)?  Z, for example, might be a squid (unless it 
already knows about squids).


A good deal of imagination has to go into just defining AGI problems - you 
have to spend a good deal of time on it.


Andi:
One thing I would expect from an AGI is that it least it would be able
to Google for something that might talk about how to do whatever it
needs and to have available library references on the subject.  Being
able to follow and interpret written instructions takes a lot of
intelligence in itself.  And a lot of times there are important
conventions about how to do certain things and it is a bad idea to
just do things completely your own way.

Of course, we do expect an intelligent agent to often be able to
figure things out on its own.  But you have to remember that when you
allow for this, there will sometimes be mistakes.  It is a necessary
consequence of trying something new that it won't always work.  And I
am afraid that people have an unrealistic expectation that the AGI
will be able to do something new without ever getting it wrong.  I'm
expecting there will be a lot of pain and disappointment when it
doesn't work this way.  Because it can't work that way.  An AGI
working in unknown territory will have to make mistakes.
Andi


Quoting Mike Tintner [EMAIL PROTECTED]:


Jeez, Will, the point of Artificial General Intelligence is that it can
start adapting to an unfamiliar situation and domain BY ITSELF.  And
your FIRST and only response to the problem you set was to say: I'll
get someone to tell it what to do.

IOW you simply avoided the problem and thought only of cheating. What a
solution, or merest idea for a solution, must do is tell me how that
intelligence will start adapting by itself  - will generalize from its
existing skills to cross over domains.

Then, as my answer indicated, it may well have to seek some
instructions and advice - especially and almost certainly  if it wants
to acquire a whole new major skill, as we do, by taking courses etc.

But a general intelligence should be able to adapt to some unfamiliar
situations entirely by itself - like perhaps your submersible
situation. No guarantee that it will succeed in any given situation,
(as there isn't with us), but you should be able to demonstrate its
power to adapt sometimes.

In a sense, you should be appalled with yourself that you didn't try to
tackle the problem - to produce a cross-over idea. But since
literally no one else in the field of AGI has the slightest
cross-over idea - i.e. is actually tackling the problem of AGI, - and
the whole culture is one of avoiding the problem, it's to be expected.
(You disagree - show me one, just one, cross-over idea anywhere.
Everyone will give you a v. detailed,impressive timetable for how long
it'll take them to produce such an idea, they just will never produce
one. Frankly, they're too scared).


Mike Tintner [EMAIL PROTECTED] wrote:



You must first define its existing skills, then define the new challenge
with some degree of precision - then explain the principles by   which 
it will
extend its skills. It's those principles of extension/generalization 
that
are the be-all and end-all, (and NOT btw, as you suggest, any helpful 
info
that the robot will receive - that,sir, is cheating - it has to work 
these

things out for itself - although perhaps it could *ask* for info).



Why is that cheating? Would you never give instructions to a child
about what to do? Taking instuctions is something that all
intelligences need to be able to do, but it should be attempted to be
minimised. I'm not saying it should take instructions unquestioningly
either, ideally it should figure out whether the instructions you give
are any use for it.

Will Pearson





Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread William Pearson
On 02/03/2008, Mike Tintner [EMAIL PROTECTED] wrote:
 Jeez, Will, the point of Artificial General Intelligence is that it can
  start adapting to an unfamiliar situation and domain BY ITSELF.  And your
  FIRST and only response to the problem you set was to say: I'll get someone
  to tell it what to do.

Nothing we ever do is entirely by ourselves; we have a wealth of
examples to draw from that we have acquired from family, friends and
teachers. The situation I described was like throwing a baby into a
completely unfamiliar problem, without the wealth of experience we
have built up over the years, so some hand-holding is to be expected.
Also, I'm not planning to have a full AI made any time soon; I'm merely
laying the groundwork for many other people to build upon. I may get
animal-level adaptivity/intelligence myself; it depends how quickly I
can build the first layer and the tools I need for the next.

This is also why I concentrate on the most flexible system possible: I
do not wish to constrain the system to do any more than needs to be done
to achieve my current goal. That goal is to add a way of selecting
between the programs within a computer system, dependent upon what the
system needs to do.

It is more fundamental than your cross-over idea, in that it is a
lower-level phenomenon, but not in the sense that it is more important
for acting intelligently.

  IOW you simply avoided the problem and thought only of cheating. What a
  solution, or merest idea for a solution, must do is tell me how that
  intelligence will start adapting by itself  - will generalize from its
  existing skills to cross over domains.

I'm not building the solution, merely a framework which I think will
enable people to build the solution. I think this needs to be done
first; in essence I am trying to deal with the problem of developing
and acquiring skills.

  Will Pearson
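
A minimal, hypothetical sketch of the kind of runtime selection mechanism
described above: for each kind of task the system currently needs done, it
favours whichever registered program has earned the most reward so far, with
a little exploration. The class, the task names and the reward signal are
assumptions for illustration only, not Pearson's actual framework.

import random
from collections import defaultdict
from typing import Callable, Dict

class ProgramSelector:
    def __init__(self) -> None:
        # programs[task][name] -> callable; reward[(task, name)] -> running score
        self.programs: Dict[str, Dict[str, Callable]] = defaultdict(dict)
        self.reward: Dict[tuple, float] = defaultdict(float)

    def register(self, task: str, name: str, program: Callable) -> None:
        self.programs[task][name] = program

    def run(self, task: str, payload, epsilon: float = 0.1):
        """Mostly run the best-rewarded program for this task; sometimes explore."""
        candidates = self.programs[task]
        if random.random() < epsilon:
            name = random.choice(list(candidates))
        else:
            name = max(candidates, key=lambda n: self.reward[(task, n)])
        return name, candidates[name](payload)

    def feedback(self, task: str, name: str, reward: float) -> None:
        self.reward[(task, name)] += reward      # usefulness, judged from outside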



Re: [agi] Thought experiment on informationally limited systems

2008-03-02 Thread eldras
Although top-down approaches should continue to be researched and tried, the
complexity is still monumental.

We KNOW that bottom-up delivers AGI, and Turing's view was that heuristics
are enough to build it.

That is only doable at the massive speeds assumed possible in, e.g., quantum
computing.

eldras 
 - Original Message -
 From: William Pearson [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Subject: Re: [agi] Thought experiment on informationally limited systems
 Date: Sun, 2 Mar 2008 23:04:27 +
 
 
 On 02/03/2008, Mike Tintner [EMAIL PROTECTED] wrote:
  Jeez, Will, the point of Artificial General Intelligence is that it can
   start adapting to an unfamiliar situation and domain BY ITSELF.  And your
   FIRST and only response to the problem you set was to say: I'll get 
  someone
   to tell it what to do.
 
 Nothing we ever do is by ourselves, entirely, we have a wealth of
 examples to draw from that we have acquired from family/friends and
 teachers. The situation I described was like throwing a baby into a
 completely unfamiliar problem, without the wealth of experience we
 have built up over the years, so some hand holding is to be expected.
 Also I'm not planning to have a full AI made any time soon, I'm merely
 laying the ground work, for many other people to work upon. I may get
 animal level adaptivity/intelligence myself, it depends how quickly I
 can build the first layer and the tools I need for the next.
 
 This is also why I concentrate on the most flexible system possible, I
 do not wish to constrain the system to do any more than needs be done
 to achieve my current goal. This goal is to add a way of selecting
 between the programs within a computer system dependent upon what the
 system needs to do.
 
 It is more fundamental than your cross-over idea, in that it is a
 lower level phenomenon, but not in the sense it is more important for
 acting intelligently.
 
   IOW you simply avoided the problem and thought only of cheating. What a
   solution, or merest idea for a solution, must do is tell me how that
   intelligence will start adapting by itself  - will generalize from its
   existing skills to cross over domains.
 
 I'm not building the solution, merely a framework which I think will
 enable people to build the solution. I think this needs to be done
 first, in essence I am trying to deal with the develop and acquire
 skills problem.
 
Will Pearson
 


Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread YKY (Yan King Yin)
On 2/28/08, William Pearson [EMAIL PROTECTED] wrote:
 I'm going to try and elucidate my approach to building an intelligent
 system, in a round about fashion. This is the problem I am trying to
 solve.

 Imagine you are designing a computer system to solve an unknown
 problem, and you have these constraints

 A) Limited space to put general information about the world
 B) Communication with the system after it has been deployed. The less
 the better.
 C) We shall also assume limited processing ability etc

 The goal is to create a system that can solve the tasks as quickly as
 possible with the least interference from the outside.

 I'd like people to write a brief sketch of your solution to this sort
 of problem down. Is it different from your AGI designs, if so why?

Space/time-optimality is not my top concern.  I'm focused on building an AGI
that *works*, within reasonable space/time.  If you add these constraints,
you're making the AGI problem harder than it already is.  Ditto for the
amount of user interaction.  Why make it harder?

 System Sketch? - It would have to be generally programmable, I would
 want to be able to send it arbitrary programs after it had been
 created, so I could send it a program to decrypt things or control
 things. It would also need to be able to generate its own programming
 and select between the different programs in order to minimise my need
 to program it. It is not different to my AGI design, unsurprisingly.


Generally programmable, yes.  But that's very broad.  Many systems have this
property.  Even a system with only a declarative KB can re-program itself by
modifying the KB.

YKY



Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread Bob Mottram
I guess the first thing you would need for an Unknown Problem Solver
would be some way to determine usefulness.  To be able to achieve
some goal the system may need measures of usefulness which span
intermediate stages towards the goal, or which are stacked in a
series.

If the system has no idea of usefulness and no explicit goals then
probably the best it can do is become an error corrector - i.e. look
for regular patterns of activity, then find anomalies and try to take
actions which correct those anomalies and restore the expected
pattern.
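
A rough sketch of that error-correcting loop, as an illustration only (the
scalar signal, the running statistics and the correction rule are my
assumptions, not Mottram's specification): learn the regular pattern of a
signal as a running mean and variance, flag readings that stray too far, and
act to push the signal back toward the expected value.

class ErrorCorrector:
    def __init__(self, threshold_sigmas: float = 3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0    # Welford running statistics
        self.threshold = threshold_sigmas

    def observe(self, x: float) -> float:
        """Update the learned pattern; return a corrective action (0.0 if normal)."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        if self.n < 10:                              # still learning the regular pattern
            return 0.0
        std = (self.m2 / (self.n - 1)) ** 0.5
        if std > 0 and abs(x - self.mean) > self.threshold * std:
            return self.mean - x                     # act to restore the expected pattern
        return 0.0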



On 28/02/2008, William Pearson [EMAIL PROTECTED] wrote:
 I'm going to try and elucidate my approach to building an intelligent
  system, in a round about fashion. This is the problem I am trying to
  solve.



Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread William Pearson
On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:


 On 2/28/08, William Pearson [EMAIL PROTECTED] wrote:
  I'm going to try and elucidate my approach to building an intelligent
   system, in a round about fashion. This is the problem I am trying to
  solve.
 
  Imagine you are designing a computer system to solve an unknown
  problem, and you have these constraints
  
  A) Limited space to put general information about the world
  B) Communication with the system after it has been deployed. The less
  the better.
  C) We shall also assume limited processing ability etc
  
  The goal is to create a system that can solve the tasks as quickly as
  possible with the least interference from the outside.
 
  I'd like people to write a brief sketch of your solution to this sort
   of problem down. Is it different from your AGI designs, if so why?


 Space/time-optimality is not my top concern.  I'm focused on building an AGI
 that *works*, within reasonable space/time.  If you add these contraints, 
 you're  making the AGI problem harder than it already is.  Ditto for the 
 amount of user
 interaction.  Why make it harder?

I'm not looking for optimality, just that better is important. I don't
want to have to hold the hand of my system, teaching it laboriously, so
the less information I have to feed it the better. Why ignore the
problem and make the job of teaching it harder?

Also, we have limited space and time in the real world.

  System Sketch? - It would have to be generally programmable, I would
  want to be able to send it arbitrary programs after it had been
  created, so I could send it a program to decrypt things or control
   things. It would also need to be able to generate its own programming
  and select between the different programs in order to minimise my need
  to program it. It is not different to my AGI design, unsurprisingly.


 Generally programmable, yes.  But that's very broad.  Many systems have this
 property.

Note I want something different from computational universality. E.g.
Von Neumann architectures are generally programmable; Harvard
architectures aren't, as they can't be reprogrammed at run time.

http://en.wikipedia.org/wiki/Harvard_architecture


 Even system with only a declarative KB can re-program itself by modifying the
 KB.

So a program could get in and remove all the items from the KB? You
can have viruses etc inside the KB?

 Will Pearson



Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread YKY (Yan King Yin)
On 2/28/08, William Pearson [EMAIL PROTECTED] wrote:
 Note I want something different than computational universality. E.g.
 Von Neumann architectures are generally programmable, Harvard
 architectures aren't. As they can't be reprogrammed at run time.

It seems that you want to build the AGI from the programming level.  This
is in contrast to John McCarthy's declarative paradigm.  Your approach
offers more flexibility (perhaps maximum flexibility), but may not make AGI
easier to build.  Learning, in your case, is a matter of algorithmic
learning.  It may be harder / less efficient than logic-based learning.

YKY



Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread Mike Tintner


WP:  I'm going to try and elucidate my approach to building an intelligent

system, in a round about fashion. This is the problem I am trying to
solve.


Marks for at least trying to identify an AGI problem. I can't recall anyone 
else doing so - which, to repeat, I think is appalling.


But I don't think you're doing it adequately, with your example of a 
submersible. Essentially you're providing a variation on what I've already 
mentioned - the ICRA Robot Challenge.


http://icra.wustl.edu/

Their lunar mission camp situation - how would you program a robot to deal 
with any unexpected emergency that could arise in a limited camp area, any 
malfunctioning equipment, for example - does strike me as a good AGI 
problem. Your robot will have certain skills: how will it adapt those skills 
to meet new problems for which they are useful but, overall, inadequate?


You must first define its existing skills, then define the new challenge 
with some degree of precision - then explain the principles by which it will 
extend its skills. It's those principles of extension/generalization that 
are the be-all and end-all (and NOT, btw, as you suggest, any helpful info 
that the robot will receive - that, sir, is cheating - it has to work these 
things out for itself, although perhaps it could *ask* for info).


Anyway, nice to see someone talking about the central problem of AGI at 
last.





Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread Vladimir Nesov
On Thu, Feb 28, 2008 at 3:20 PM, William Pearson [EMAIL PROTECTED] wrote:
 On 28/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
  
   Generally programmable, yes.  But that's very broad.  Many systems have 
 this
   property.

  Note I want something different than computational universality. E.g.
  Von Neumann architectures are generally programmable, Harvard
  architectures aren't. As they can't be reprogrammed at run time.

  http://en.wikipedia.org/wiki/Harvard_architecture


I agree with YKY: it's not a very useful specification. A Turing machine
is not necessarily the way to go, either.

I think that the ability to learn structure-less production rules is
sufficient. These allow the system to implement finite state machines
internally, and these state machines operate on data streams
consisting of external I/O and the states of other state machines.

This organizational principle follows naturally from a blackboard-like
system where most of the facts on the blackboard are labels given to
statistical regularities detected in previous moments.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
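
An illustrative sketch only (the rule format, machine names and blackboard
keys are assumptions, not Nesov's specification): structure-less production
rules of the form (state, symbol) -> (next state, label) implement a finite
state machine whose output labels are posted to a blackboard, where other
such machines can read them as part of their own input stream.

from typing import Dict, Tuple

Rule = Dict[Tuple[str, str], Tuple[str, str]]    # (state, symbol) -> (state', label)

class RuleMachine:
    def __init__(self, name: str, rules: Rule, start: str = "s0"):
        self.name, self.rules, self.state = name, rules, start

    def step(self, symbol: str, blackboard: Dict[str, str]) -> None:
        key = (self.state, symbol)
        if key in self.rules:
            self.state, label = self.rules[key]
            blackboard[self.name] = label        # post the detected regularity

# One machine labels an alternating a/b pattern in the raw stream; a second
# machine reacts to that label rather than to the stream itself.
detector = RuleMachine("alt", {("s0", "a"): ("s1", "saw_a"),
                               ("s1", "b"): ("s0", "alternating")})
reactor = RuleMachine("react", {("s0", "alternating"): ("s0", "pattern_ok")})

blackboard: Dict[str, str] = {}
for ch in "abab":
    detector.step(ch, blackboard)
    reactor.step(blackboard.get("alt", ""), blackboard)
print(blackboard)    # {'alt': 'alternating', 'react': 'pattern_ok'}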



Re: [agi] Thought experiment on informationally limited systems

2008-02-28 Thread Matt Mahoney

--- William Pearson [EMAIL PROTECTED] wrote:

 I'm going to try and elucidate my approach to building an intelligent
 system, in a round about fashion. This is the problem I am trying to
 solve.
 
 Imagine you are designing a computer system to solve an unknown
 problem, and you have these constraints
 
 A) Limited space to put general information about the world
 B) Communication with the system after it has been deployed. The less
 the better.
 C) We shall also assume limited processing ability etc
 
 The goal is to create a system that can solve the tasks as quickly as
 possible with the least interference from the outside.
 
 I'd like people to write a brief sketch of your solution to this sort
 of problem down. Is it different from your AGI designs, if so why?

The general problem is not computable, like AIXI or compression.  So I have to
make a guess as to what unknown problems the AGI would be asked to solve.  My
guess would be problems that have economic value.  So I would look at the kind
of tasks that people are being paid to solve, and design the AGI to solve the
same kind of problems.  I would tell the AGI what I want it to accomplish, and
it would research the internet to find a good solution.

To do that, the AGI will first need natural language capability, followed by
vision, hearing, and mobility, depending on the range of tasks. It will need
vastly more computing power than the average PC in any case.



-- Matt Mahoney, [EMAIL PROTECTED]
