Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-23 Thread J Storrs Hall, PhD
On Tuesday 22 April 2008 01:22:14 pm, Richard Loosemore wrote:

 The solar system, for example, is not complex:  the planets move in 
 wonderfully predictable orbits.

http://space.newscientist.com/article/dn13757-solar-system-could-go-haywire-before-the-sun-dies.html?feedId=online-news_rss20

How will life on Earth end? The answer, of course, is unknown, but two new 
studies suggest a collision with Mercury or Mars could doom life long before 
the Sun swells into a red giant and bakes the planet to a crisp in about 5 
billion years.
The studies suggest that the solar system's planets will continue to orbit 
the Sun stably for at least 40 million years. But after that, they show there 
is a small but not insignificant chance that things could go terribly awry.



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-23 Thread Richard Loosemore

J Storrs Hall, PhD wrote:

On Tuesday 22 April 2008 01:22:14 pm, Richard Loosemore wrote:

The solar system, for example, is not complex:  the planets move in 
wonderfully predictable orbits.


http://space.newscientist.com/article/dn13757-solar-system-could-go-haywire-before-the-sun-dies.html?feedId=online-news_rss20

How will life on Earth end? The answer, of course, is unknown, but two new 
studies suggest a collision with Mercury or Mars could doom life long before 
the Sun swells into a red giant and bakes the planet to a crisp in about 5 
billion years.
The studies suggest that the solar system's planets will continue to orbit 
the Sun stably for at least 40 million years. But after that, they show there 
is a small but not insignificant chance that things could go terribly awry.


I am confused about the intended message.

If you take the above quote from me in its original context, your 
illustration perfectly supports what I said, but with that one paragraph 
taken out of context it looks as if you are trying to contradict it.



Richard Loosemore



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Richard Loosemore

Vladimir Nesov wrote:

On Tue, Apr 22, 2008 at 5:59 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Hmmm... I detect a parody..?

 That is not what I intended to say.



No, as horrible as it may sound, this is how I see the problem that
you are trying to address. If you can pinpoint some specific errors in
my description, without reiterating the whole description once again,
that would probably be helpful.



On a second reading, the description of my proposed paradigm is not that 
inaccurate, it just emphasizes some things and de-emphasizes others, 
thereby making the whole thing look weird.


I'll elaborate later.



Richard Loosemore



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Derek Zahn
J Andrew Rogers writes:

 Most arguments and disagreements over complexity are fundamentally
 about the strict definition of the term, or the complete absence
 thereof. The arguments tend to evaporate if everyone is forced to
 unambiguously define such terms, but where is the fun in that.

I agree with this to a point at least.  My attempt to rephrase Richard's 
argument falters because I have not yet understood his use of the term 
'complexity'.  I'd prefer a rigorous definition but will settle for a better 
general understanding of what he means.  Despite his several attempts to 
describe his meaning I have not yet been able to grasp exactly what counts 
as complex and what does not, and, for things in between, how to judge the 
degree of complexity.
 
 



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Richard Loosemore

J. Andrew Rogers wrote:


On Apr 21, 2008, at 6:53 PM, Richard Loosemore wrote:
I have been trying to understand the relationship between theoretical 
models of thought (both natural and artificial) since at least 1980, 
and one thing I have noticed is that people devise theoretical 
structures that are based on the assumption that intelligence is not 
complex  but then they use these structures in such a way that the 
resulting system is almost always complex.



This is easily explained by the obvious fact that the definition of 
complex varies considerably across relevant populations, exacerbated 
in the case of AGI -- where it is arguably a germane element -- because 
many (most?) researchers are using complex in a colloquial (read: 
meaningless) sense rather than one of its more rigorously defined 
senses, of which there are a few interesting ones.


Most arguments and disagreements over complexity are fundamentally 
about the strict definition of the term, or the complete absence 
thereof.  The arguments tend to evaporate if everyone is forced to 
unambiguously define such terms, but where is the fun in that.


It is correct to say that there is disagreement about what complexity 
means, but that is why I went to so much trouble to give a precise 
definition of it, and then use that precise definition consistently.


The last thing I want to do is engage in fruitless debates with other 
complex systems people about what exactly it means.


But then, going back to your first comment above, no, you cannot use 
other people's confusion about the meaning of the term complexity to 
explain why models of thinking start off being designed as if they were 
not complex, but then get used in ways that make the overall system 
complex.  That observation is pretty much independent of the definition 
you choose, and anyway it happens within my definition, so it still 
needs to be explained.


The explanation, of course, is that intelligent systems really are 
(partially) complex, but everyone is trying to kid themselves that they 
are not, to make their research easier.



Richard Loosemore



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Derek Zahn
Richard:  I get tripped up on your definition of complexity:
 
  A system contains a certain amount of complexity in it if it 
  has some regularities in its overall behavior that are governed 
  by mechanisms that are so tangled that, for all practical purposes, 
  we must assume that we will never be able to find a closed-form 
  explanation of how the global arises from the local.
 
on figuring out what counts as a regularity in overall behavior.  Consider 
a craps table.  The trajectories of the dice would seem to have global 
regularities (for which craps players and normal people have words and 
phrases, like bouncing off the back, flying off the table, or whatever).  
Our ability to create concepts around this activity would seem to imply the 
existence of global regularities (finding them is what we do when we make 
concepts).  Yet the behavior of those regularities is not just physical law 
but the specific configuration of the felt, the chips, the wind, and so 
forth, and all that data makes a closed-form explanation impractical.
 
Yet, I don't get the sense that this is what you mean by a complex system.  
If it is, your contention that they are rare is certainly not correct, since 
many such examples can easily be found.  This aspect of complexity illustrates 
the butterfly effect often used in discussions of complexity.
 
I'm not trying to be difficult; it's crucial for me to understand what you mean 
(versus my interpretation of what others have meant or my own internal 
definitions) if I am to follow your argument.
 



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Mark Waser

How confident are you that this only-complex-AI limitation applies in
reality? How much would you bet on it? I'm not convinced, and I think
that if you are convinced too much, you made wrong conclusions from
your data, unless you communicated too little of what formed your
intuition.


I am completely sure that it applies (although your phrasing makes me 
wonder if you have interpreted my exact worry accurately... I will have to 
come back to that).


I am also sure that it applies but don't believe that it is a huge problem 
unless you ignore it.  Remember, gravity with three bodies is a complex 
problem -- but it is relatively easy to characterize and solve to reasonable 
limits (just don't try to make plans too far in the future without making 
periodic readings to ensure that reality still matches your model). 
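
A minimal sketch of that point (illustrative only; made-up masses, positions 
and step sizes, not code from any of the systems discussed here): a three-body 
model whose initial state is very slightly wrong drifts away from the "real" 
trajectory, but periodic re-observation keeps the error bounded.

import numpy as np

G = 1.0  # gravitational constant in arbitrary units

def accelerations(pos, mass):
    """Newtonian gravitational acceleration on each body."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return acc

def step(pos, vel, mass, dt):
    """One velocity-Verlet (leapfrog) integration step."""
    acc = accelerations(pos, mass)
    vel_half = vel + 0.5 * dt * acc
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * accelerations(pos_new, mass)
    return pos_new, vel_new

mass = np.array([1.0, 1.0, 1.0])
pos_true = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vel_true = np.array([[0.0, 0.0], [0.0, 0.9], [-0.9, 0.0]])

# The "model" starts with a tiny error in one coordinate.
pos_model = pos_true.copy()
pos_model[1, 0] += 1e-6
vel_model = vel_true.copy()

dt, resync_every = 0.001, 2000   # re-observe "reality" every 2000 steps

for t in range(10000):
    pos_true, vel_true = step(pos_true, vel_true, mass, dt)
    pos_model, vel_model = step(pos_model, vel_model, mass, dt)
    if (t + 1) % resync_every == 0:
        drift = np.linalg.norm(pos_true - pos_model)
        print(f"step {t+1}: drift before re-observation = {drift:.2e}")
        # the "periodic reading": reset the model to what is actually observed
        pos_model, vel_model = pos_true.copy(), vel_true.copy()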





Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Richard Loosemore

Derek Zahn wrote:

Richard:  I get tripped up on your definition of complexity:
 
  A system contains a certain amount of complexity in it if it

  has some regularities in its overall behavior that are governed
  by mechanisms that are so tangled that, for all practical purposes,
  we must assume that we will never be able to find a closed-form
  explanation of how the global arises from the local.

on figuring out what counts as a regularity in overall behavior. 

Consider a craps table.  The trajectories of the dice would seem to have 
global regularities (for which craps players and normal people have 
words and phrases, like bouncing off the back, flying off the table, 
or whatever).  Our ability to create concepts around this activity would 
seem to imply the existence of global regularities (finding them is what 
we do when we make concepts).  Yet the behavior of those regularities is 
not just physical law but the specific configuration of the felt, the 
chips, the wind, and so forth, and all that data makes a closed-form 
explanation impractical.
 
Yet, I don't get the sense that this is what you mean by a complex 
system.  If it is, your contention that they are rare is certainly not 
correct, since many such examples can easily be found.  This aspect of 
complexity illustrates the butterfly effect often used in discussions 
of complexity.
 
I'm not trying to be difficult; it's crucial for me to understand what 
you mean (versus my interpretation of what others have meant or my own 
internal definitions) if I am to follow your argument.


Okay, I will respond to your questions on two fronts (!) - I just posted 
a reply to your comment on the blog, too.


In the above, you mention butterfly effects.  This is not a mainstream 
example of complexity, it is chaos, which is not exactly the same thing.
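
A minimal sketch of the distinction being drawn here (illustrative only; the 
parameter value and starting points are arbitrary): the butterfly effect is 
sensitive dependence on initial conditions, and it shows up even in a 
one-line closed-form rule like the logistic map, so chaos by itself is not 
the kind of complexity defined above.

r = 3.9                    # a value in the chaotic regime of the logistic map
x, y = 0.5, 0.5 + 1e-10    # two starting points differing by one part in 10^10

for step in range(60):
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.2e}")
    x = r * x * (1 - x)
    y = r * y * (1 - y)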


More generally, you cannot say that a system is complex by itself; it is 
complex with respect to a particular regularity in its behavior.


The solar system, for example, is not complex:  the planets move in 
wonderfully predictable orbits.


BUT... actually the solar system *is* complex, because Pluto's behavior 
is unstable, and every once in a while it comes in and messes with 
everyone else.


So if the solar system remains utterly predictable for a hundred million 
years, and then Pluto goes AWOL for a few years, what is it?  It is 
partially complex, with just a tiny degree of complexity superimposed on 
otherwise non-complex behavior.  We cannot give a black and white answer 
to the question "is it complex?".


Richard Loosemore



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Derek Zahn
One more bit of ranting on this topic, to try to clarify the sort of thing I'm 
trying to understand.
 
Some dude is telling my AGI program:  There's a piece called a 'knight'.  It 
moves by going two squares in one direction and then one in a perpendicular 
direction.  And here's something neat:  Except for one other obscure case I'll 
tell you about later, it's the only piece that moves by jumping through the air 
instead of moving a square at a time on its journey.
 
When I try to think about how an intelligence works, I wonder about specific 
cases like these (and thanks to William Pearson for inventing this one) -- the 
genesis of the knight concept from this specific purely verbal exchange.  How 
could this work?  What is it about the specific word sequences and/or the 
conversational context that creates this new thing -- the Knight?  It would 
have to be a hugely complicated language processing system... so where did that 
language processing system come from?  Did somebody hardcode a model of 
language and conversation and explicitly insert generate concept here 
actions?  That sounds like a big job.  If it was learned (much better), how was 
it learned?  What is the internal representation of the language processing 
model that leads to this particular concept formation, and how was it 
generated?  If I can see something specific like that in a system (say 
Novamente) I can start to really understand the theory of mind it expresses.
 



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Vladimir Nesov
On Mon, Apr 21, 2008 at 8:32 PM, Derek Zahn [EMAIL PROTECTED] wrote:

 One more bit of ranting on this topic, to try to clarify the sort of thing
 I'm trying to understand.

  Some dude is telling my AGI program:  There's a piece called a 'knight'.
 It moves by going two squares in one direction and then one in a
 perpendicular direction.  And here's something neat:  Except for one other
 obscure case I'll tell you about later, it's the only piece that moves by
 jumping through the air instead of moving a square at a time on its
 journey.

  When I try to think about how an intelligence works, I wonder about
 specific cases like these (and thanks to William Pearson for inventing this
 one) -- the genesis of the knight concept from this specific purely verbal
 exchange.  How could this work?  What is it about the specific word
 sequences and/or the conversational context that creates this new thing --
 the Knight?  It would have to be a hugely complicated language processing
 system... so where did that language processing system come from?  Did
 somebody hardcode a model of language and conversation and explicitly insert
 generate concept here actions?  That sounds like a big job.  If it was
 learned (much better), how was it learned?  What is the internal
 representation of the language processing model that leads to this
 particular concept formation, and how was it generated?  If I can see
 something specific like that in a system (say Novamente) I can start to
 really understand the theory of mind it expresses.


Generating concepts out of thin air is no big deal, albeit a
resource-hungry process. You can create a dozen for each episode, for
example. A more challenging task is to arrange things so that all these
concepts actually lead to correct decisions and influence other concepts
in the right way without destroying them. New concepts need to be
friendly to the inference system before modification.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Derek Zahn
Stephen Reed writes:

Hey Texai, let's program
[Texai] I don't know how to program, can you teach me by yourself?
Sure, first thing is that a program consists of statements that each does 
something
[Texai]  I assume by program you mean a sequence of instructions that a 
computer can interpret and execute, and that by statement you mean a line of 
code written as part of a computer program
Right.  One type of instruction is the assignment statement.
[Texai] OK, how is it interpreted?
It has two parts, one part identifies a variable that receives a copy of, or a 
reference to, an evaluated expression described by the other part of the 
instruction
[Texai] I assume by variable you mean a symbol (like x or y) that is used in 
mathematical or logical expressions to represent a variable quantity. What's an 
evaluated expression?  And under what circumstances does the situation in which 
the variable receives a copy of the evaluated expression occur, as contrasted 
with the situation in which the variable receives a reference to the evaluated 
expression?
Wow, if that turns out to be an actual transcript sent back through a time 
machine (I mean, if it works like you think), that's amazingly impressive.  
Every part of it, from knowing to ask you to teach it to do something, to 
connecting 'program' used as a verb to 'program' used as a noun, to knowing all 
about sequences of instructions, what computers are and how they work, what a 
line of code even means, and so on.  I assume these things were taught to it 
through previous teaching sessions, and I'm really eager to see that in action. 
 Of particular interest to me here is the conceptual leap from equality in a 
mathematical expression (which I guess the system already knows about) to the 
very different idea of assignment in a normal programming language.  The origin 
of a variable as a named thing that can hold a value was an interesting 
concept to communicate to undergraduate business majors back in the day when I 
taught introductory programming... you could just see them get it after 
trying analogies with mailboxes and diagrams of computer memory and whatnot.  
It had never occurred to some of them to put a number in a box for later use 
before but I clearly remember the instant of concept formation occurring in 
their fresh young minds :)
 
Now the aha moment behind learning the concept of recursion is even more 
interesting...
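 
The copy-versus-reference distinction Texai asks about, and a recursive 
definition of the kind behind that aha moment, can be shown in a few lines.  
This is only an illustrative Python sketch, not Texai output or anything 
from Stephen's system.

a = [1, 2, 3]
b = a          # assignment binds a reference: b and a name the same list
c = list(a)    # an explicit copy: c is a new list with the same contents

a.append(4)
print(b)       # [1, 2, 3, 4]  -- b refers to the same object, so it sees the change
print(c)       # [1, 2, 3]     -- c received a copy, so it does not

def factorial(n):
    """A definition stated in terms of a smaller instance of itself."""
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5))   # 120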
 
 



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Stephen Reed
Hi Derek,

Thanks for the encouragement.  Take a look at WordNet online here and you will see 
why an initial Texai goal is to fully understand the word sense definitions 
(e.g. program).

It's been so long that I cannot recall the year, or even the season, but I do 
recall to this day exactly where I was when the recursion aha moment occurred 
for me.  From the computer center, I was walking alone, traversing an empty 
quad in the twilight, back to my dorm at Stony Brook.  Of course it's a typical 
youngster's attitude to believe that simple elegant principles can solve great 
challenges, but still it was thrilling - thanks for provoking its recollection.
 
Cheers,
-Steve


Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Derek Zahn [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, April 21, 2008 12:43:37 PM
Subject: RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent 
input and responses




RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Ed Porter
 roughly
Novamente-like machines.  In 8 to 20 years I would be surprised if we do not
see machines that are at least at human levels in virtually all mental
skills it is desirable for machines to have. 

 

 

 

-Original Message-
From: Derek Zahn [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 21, 2008 12:33 PM
To: agi@v2.listbox.com
Subject: RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent
input and responses

 

One more bit of ranting on this topic, to try to clarify the sort of thing
I'm trying to understand.
 
Some dude is telling my AGI program:  There's a piece called a 'knight'.
It moves by going two squares in one direction and then one in a
perpendicular direction.  And here's something neat:  Except for one other
obscure case I'll tell you about later, it's the only piece that moves by
jumping through the air instead of moving a square at a time on its
journey.
 
When I try to think about how an intelligence works, I wonder about specific
cases like these (and thanks to William Pearson for inventing this one) --
the genesis of the knight concept from this specific purely verbal
exchange.  How could this work?  What is it about the specific word
sequences and/or the conversational context that creates this new thing --
the Knight?  It would have to be a hugely complicated language processing
system... so where did that language processing system come from?  Did
somebody hardcode a model of language and conversation and explicitly insert
generate concept here actions?  That sounds like a big job.  If it was
learned (much better), how was it learned?  What is the internal
representation of the language processing model that leads to this
particular concept formation, and how was it generated?  If I can see
something specific like that in a system (say Novamente) I can start to
really understand the theory of mind it expresses.

 

 



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Derek Zahn
Vladimir Nesov writes: Generating concepts out of thin air is no big deal, 
if only a resource-hungry process. You can create a dozen for each episode, 
for example.
 
If I am not certain of the appropriate mechanism and circumstances for 
generating one concept, it doesn't help to suggest that a dozen get generated 
instead... now I have twelve times as many things to explain.  If you are 
suggesting that concept formation is a (perhaps stochastic) generate-and-test 
procedure, that seems like an okay idea but the issues are then redescribed as: 
what is the generation procedure, what causes it to be invoked, what the test 
procedure is, and so on.
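 
For what it's worth, here is a toy sketch of that stochastic generate-and-test 
reading, purely as an illustration of the questions above and not as a 
description of Novamente or Texai.  The feature names, the fake "knight" 
episodes, and the acceptance threshold are all invented.

import random

FEATURES = ["moves_in_L", "jumps_over_pieces", "slides", "is_white", "on_board"]

def make_episode():
    """A fake labelled episode: a knight-like piece gets reward 1."""
    feats = {f: random.random() < 0.5 for f in FEATURES}
    reward = feats["moves_in_L"] and feats["jumps_over_pieces"]
    return feats, reward

def generate_candidate():
    """Generation step: a random conjunction of 1 to 3 observed features."""
    return tuple(sorted(random.sample(FEATURES, k=random.randint(1, 3))))

def matches(candidate, feats):
    return all(feats[f] for f in candidate)

def accuracy(candidate, episodes):
    """Test step: how often does 'candidate matches' agree with the reward?"""
    return sum(matches(candidate, f) == r for f, r in episodes) / len(episodes)

train = [make_episode() for _ in range(200)]
test = [make_episode() for _ in range(200)]

kept = set()
for _ in range(500):                       # generate many candidate concepts
    cand = generate_candidate()
    if accuracy(cand, train) > 0.9:        # keep only candidates that survive the test
        kept.add(cand)

for cand in sorted(kept):
    print(cand, "held-out accuracy = %.2f" % accuracy(cand, test))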
 
These questions cannot be answered outside the context of a particular system; 
they are just the things I'd like to understand: exactly how they would happen 
in Novamente or Texai or whatever, with all handwaving removed.
 
To get back to the original question of this thread, these are some of the many 
missing conceptual pieces TO ME because I cannot see the specific nuts and 
bolts solution for any proposed system.  It may in fact be that for any non-toy 
example the mechanisms and data are going to be too complicated for such 
analysis... that is, my brain is too puny and ineffective to understand (in a 
clear and relatively complete way) the inner workings of a general 
intelligence.  In that case, all I can do is hope for proof by performance.
 



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Vladimir Nesov
On Mon, Apr 21, 2008 at 11:45 PM, Derek Zahn [EMAIL PROTECTED] wrote:


  If I am not certain of the appropriate mechanism and circumstances for
 generating one concept, it doesn't help to suggest that a dozen get
 generated instead... now I have twelve times as many things to explain.  If
 you are suggesting that concept formation is a (perhaps stochastic)
 generate-and-test procedure, that seems like an okay idea but the issues are
 then redescribed as: what is the generation procedure, what causes it to be
 invoked, what the test procedure is, and so on.


I just wanted to emphasize the importance of how new concepts actively
influence the system (as opposed to being passively created). If new
concepts don't do anything, you don't need them. If they can be
observed and acted upon, they already change the way the system behaves.
This does look shallow without a specific framework in mind, though. In
my current model, there is only a 'test', no 'generate': new concepts
are not usually created at all (only to change resource quota);
instead, existing concepts are adapted, allowing themselves to be
influenced by other concepts. So, from my current point of view, it's
much more natural to look at what a new concept does to the existing system
than at how it originates.
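
A toy rendering of that "only a test, no generate" idea (one possible 
reading, not Vladimir's actual model): a fixed pool of concept prototypes is 
adapted toward the episodes that activate them, so the interesting question 
is how the change influences later matching, not where a concept came from.  
Vector sizes and rates are arbitrary.

import random

DIM, N_CONCEPTS, RATE = 4, 3, 0.2

# A fixed pool of concept prototypes; none are ever created or destroyed.
concepts = [[random.random() for _ in range(DIM)] for _ in range(N_CONCEPTS)]

def nearest(episode):
    """Which existing concept does this episode activate most strongly?"""
    dists = [sum((c - e) ** 2 for c, e in zip(proto, episode))
             for proto in concepts]
    return dists.index(min(dists))

def adapt(episode):
    """No new concept is minted; the activated one moves toward the episode."""
    k = nearest(episode)
    concepts[k] = [c + RATE * (e - c) for c, e in zip(concepts[k], episode)]
    return k

for _ in range(100):
    adapt([random.random() for _ in range(DIM)])

print("adapted concept prototypes:")
for proto in concepts:
    print(["%.2f" % v for v in proto])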

-- 
Vladimir Nesov
[EMAIL PROTECTED]



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Ed Porter
Zahn===

 If you are suggesting that concept formation is a (perhaps stochastic)
generate-and-test procedure, that seems like an okay idea but the issues are
then redescribed as: what is the generation procedure, what causes it to be
invoked, what the test procedure is, and so on.
 
These questions cannot be answered outside the context of a particular
system; they are just the things I'd like to understand exactly how they
would happen in Novamente or Texai or whatever, with all handwaving removed.
 
To get back to the original question of this thread, these are some of the
many missing conceptual pieces TO ME because I cannot see the specific nuts
and bolts solution for any proposed system.  It may in fact be that for any
non-toy example the mechanisms and data are going to be too complicated for
such analysis... that is, my brain is too puny and ineffective to understand
(in a clear and relatively complete way) the inner workings of a general
intelligence.  In that case, all I can do is hope for proof by performance.

 

Porter===

With regard to the generate part of generate and test --- there are multiple
ways to generate patterns and concepts.  I think a lot can be achieved just
by recording significant parts of the hierarchical memory activation states
(such as the most attended parts of the most attended state), and then
generalizing over such states, as described in my recent posts regarding
hierarchical memory.  But many feel this relatively direct recording of
experience at multiple generalizational and compositional levels is not good
enough.  Novamente uses an evolutionary learning process in addition to its
more standard record-and-generalize type of learning.  Wlodzislaw Duch at
AGI 2008 told me that one of the current theories of cortical columns in the
brain is that they receive input patterns and project them into a much
higher dimensional space (i.e., making from a given input pattern many
output patterns), making available for learning a larger number of
representations of patterns, some of which may be more helpful for finding
the valuable commonalities and valuable distinctions between patterns.  This
is somewhat like the way in which the kernel trick, in effect, projects data
from a lower dimensional space into a higher dimensional space so that support
vector machines can better distinguish between classes.
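
A toy illustration of that projection idea (not Duch's model or Novamente
code): a pattern class that no linear separator can handle in the original
2-D space becomes approximately linearly separable after a fixed random
nonlinear projection into many more dimensions, which is the same intuition
behind the kernel trick.  The data-generating rule and all sizes below are
arbitrary.

import numpy as np

rng = np.random.default_rng(0)

# XOR-like data: the label depends on the product of the two coordinates,
# so no single straight line separates the classes in the raw 2-D space.
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

def linear_fit_accuracy(features, y, steps=2000, lr=0.1):
    """Plain logistic regression by gradient descent; returns training accuracy."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
        grad = p - y
        w -= lr * features.T @ grad / len(y)
        b -= lr * grad.mean()
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
    return ((p > 0.5) == y).mean()

# A linear classifier on the raw 2-D input stays near chance.
print("raw 2-D accuracy:", linear_fit_accuracy(X, y))

# Project each input through 100 fixed random nonlinear features, then go linear.
W = rng.normal(size=(2, 100))
c = rng.normal(size=100)
H = np.tanh(3.0 * (X @ W) + c)
print("100-D random-feature accuracy:", linear_fit_accuracy(H, y))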

 

With regard to the test part of generate and test --- reinforcement learning
has proved to be a very powerful form of machine learning.  It provides a
good model for how a network representation can properly allocate scores
reflecting the value its various states and transition links have played
in obtaining some desired reward, by distributing value back from the
rewarded state to the transitions and states through which it was reached in
a particular experience.  A similar type of projecting of value back from
rewarded states into patterns that prove useful in achieving that state
could be used in a Novamente-type system.
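
A minimal sketch of that kind of credit assignment (plain tabular TD(0) on a
toy chain of states, not anything from the Novamente design): value is
distributed backwards from the rewarded state to the states through which it
was reached.  All parameters are illustrative.

N_STATES = 6            # states 0..5; reaching state 5 yields a reward of 1.0
ALPHA, GAMMA = 0.1, 0.9

V = [0.0] * N_STATES

for episode in range(200):
    s = 0
    while s < N_STATES - 1:
        s_next = s + 1                          # one fixed path, for clarity
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # TD(0) backup: move V[s] toward r + GAMMA * V[s_next]
        V[s] += ALPHA * (r + GAMMA * V[s_next] - V[s])
        s = s_next

print(["%.2f" % v for v in V])
# States closer to the rewarded end acquire higher value: credit has been
# distributed backwards along the experienced transitions.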

 

Your statement about the brain being too puny to totally understand a
powerful AGI system is true for all humans.  This is analogous to the fact
that one of the major things speeding brain science today is computer
simulation, which is allowing simulated neural network circuits to indicate
how various hypothesized circuits in the brain would work, at a level of
model complexity at which human minds would find it virtually impossible to
make accurate predictions.

 

If you spend enough time reading about AI and AGI architectures and brain
science you should be able to develop a feeling for how you might expect
one or more AGI systems to work.  But it is impossible to actually imagine
the full complexity of a roughly human-level AGI.  We can have feelings that
certain types of architectures should behave in certain ways, some based on
evidence of similar systems, and some based on intuition and partial
simulations in our own mind.  But for complex things --- like the meaning of
all the patterns in a human-level, automatically created memory hierarchy,
or like the most efficient behaviors and parameters for tuning
spreading activation and implication --- I think at this stage we will have
to build such systems to learn how to do this well.  Any simulation of them
is way beyond the human mind --- and would probably be as complex to do by
computer simulation as by the building of a real AGI computer system itself.

 

 

 

-Original Message-
From: Derek Zahn [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 21, 2008 3:46 PM
To: agi@v2.listbox.com
Subject: RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent
input and responses

 

Vladimir Nesov writes:

 Generating concepts out of thin air is no big deal, if only a
 resource-hungry process. You can create a dozen for each episode, for
 example.
 
If I am not certain of the appropriate mechanism and circumstances for
generating one concept, it doesn't help to suggest

Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread William Pearson
On 21/04/2008, Ed Porter [EMAIL PROTECTED] wrote:
  So when people are given a sentence such as the one you quoted about verbs,
  pronouns, and nouns, presuming they have some knowledge of most of the words
  in the sentence, they will understand the concept that verbs are doing
  words.   This is because of the groupings of words that tend to occur in
  certain syntactical linguistic contexts, the ones that would be most
  associated with the types of experiences the mind would associates with
  doing would be largely word senses that are verbs and that the mind's
  experience and learned patterns most often proceeds by nouns or pronouns.
  So all this stuff falls out of the magic of spreading activation in a
  Novamente-like hierarchical experiential memories (with the help of a
  considerable control structure such as that envisioned for Novamente).

  Declarative information learned by NL gets projected into the same type of
  activations in the hierarchical memory

How does this happen?  What happens when you try and project "This
sentence is false." into the activations of the hierarchical memory?
And consider that the whole of the English understanding is likely to
be in the hierarchical memory. That is, the projection must be learnt.

 as would actual experiences that
  teaches the same thing, but at least as episodes, and in some patterns
  generalized from episodes, such declarative information would remain linked
  to the experience of having been learned from reading or hearing from other
  humans.
  So in summary, a Novamete-like system should be able to handle this alleged
  problem, and at the moment it does not appear to provide an major unanswered
  conceptual problem. 

My conversation with Ben about a similar subject (words acting on the
knowledge of words) didn't get anywhere.

The conversation starting here -
http://www.mail-archive.com/agi@v2.listbox.com/msg09485.html

And I consider him the authority on Novamente-like systems, for now at least.

Will



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Richard Loosemore

Ed Porter wrote:
Richard, 


There is no evidence you are more justified in laughing at my position than
I am in saying your complexity issues do not appear to represent a major
unsolved conceptual issue.

Remember, I am not denying that complexity issues exist.  Instead I am
saying it is not clear they provide a major conceptual problem.  There are
many tools for controlling the dynamic complexity in a Novamente-like
system.  How long it will take to tune and refine them is an issue.  In
WebMind Ben said parameter tuning turned out to be a substantial problem,
particularly because of the slowness of the system on which they were
exploring the parameter space.  So I think it will present an engineering
challenge and require more thought to provide the dynamic control required
for things like efficient inferencing.  But it is not clear such control
issues will present a major conceptual problem.  


And remember I have admitted you might, in fact, be correct, and that
complexity may turn out to present a major conceptual problem --- although I
doubt it. 


So you are treating my viewpoint with much less respect than I am
treating yours, and it is far from clear your greater certitude is at all
justified by the evidence. Merely referring to Morton-Thiokol in no way
proves that you, like them, were right when others were wrong.


Ed,

Everything you have said in this and the last few posts about the 
'complex systems issue' has nothing whatsoever to do with the complex 
systems problem that I have described -  there is simply no connection 
between the CSP and your reflected-back version of it.


That is not lack of respect for your viewpoint, it is a simple statement 
of fact.


In my anecdote about Morton Thiokol, your contribution would be that of one of 
the people who presented a complete misunderstanding of what I was 
talking about.


I do not laugh at your misunderstanding, I laugh at the general 
complacency;  the attitude that a problem denied is a problem solved.  I 
laugh at the tragicomic waste of effort.




Richard Loosemore




RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Derek Zahn
Richard Loosemore: I do not laugh at your misunderstanding, I laugh at the 
general  complacency; the attitude that a problem denied is a problem solved. 
I  laugh at the tragicomedic waste of effort.
I'm not sure I have ever seen anybody successfully rephrase your complexity 
argument back at you; since nobody understands what you mean it's not 
surprising that people are complacent about it.
 
I was going to wait for some more blog posts to have a go at rephrasing it 
myself but my (probably wrong) effort would go like this:
 
1. Many things we want to build have desired properties that are described at a 
different level than the things we build them out of.  Flying is emergent in 
this sense from rivets and sheet metal, for example.  Thinking is emergent from 
neurons, for another example.
2. Some such things are complex in that the emergent properties cannot be 
predicted from the lower-level details.
3. Flying as above is not complex in this way.  In fact, all of engineering 
is the study of how to build things that are increasingly complicated but NOT 
complex.  We do not want airplanes to have complex behavior and the engineering 
methodology is expressly for squeezing complexity out.
4. Thinking must be complex.  [my understanding of why this must be true is 
lacking.  Something like: otherwise we'd be able to predict the behavior of an 
AGI which would make it useless?]
5. Therefore we have no methods for building thinking machines, since 
engineering discipline does not address how to build complex devices.  Building 
them as if they are not complex will result in poor behavior; squeezing out the 
complexity will squeeze out the thinking, and leaving it in makes traditional 
engineering impossible.
 
Not quite right I suppose, but I'll keep working at it.
 



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Vladimir Nesov
On Tue, Apr 22, 2008 at 2:07 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

  I do not laugh at your misunderstanding, I laugh at the general
 complacency;  the attitude that a problem denied is a problem solved.  I
 laugh at the tragicomedic waste of effort.


How confident are you that this only-complex-AI limitation applies in
reality? How much would you bet on it? I'm not convinced, and I think
that if you are convinced too much, you made wrong conclusions from
your data, unless you communicated too little of what formed your
intuition.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Vladimir Nesov
On Tue, Apr 22, 2008 at 2:28 AM, Derek Zahn [EMAIL PROTECTED] wrote:

  I'm not sure I have ever seen anybody successfully rephrase your complexity
 argument back at you; since nobody understands what you mean it's not
 surprising that people are complacent about it.


Derek,

I'll not paraphrase the argument itself, but the conclusion. Thinking
can't be designed from the ground up, little success after success, module
after module, elaboration and generalization. Instead, it can only be
built as an opaque mess, and in a clean laboratory it's not possible
for us miserable apes to invent it. But we have a working prototype,
brains, so by limiting the design by properties we know from studying
cognitive science, it's possible to leave few enough possibilities to
enumerate by (more or less) blind search. That is what Richard's
framework is supposed to do: you feed in the restrictions, and it
automatically tests a whole set of designs limited by such
restrictions. As a result, you experiment with restrictions and not
with individual designs. Within each restriction set, there are
designs that behave very differently, but the framework allows you to sort
out the weeds and luckily find some gemstones.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Ed Porter

William,

Re the Epimenides paradox, Eliezer Yudkowsky had some interesting comments
in Levels of Organization in General Intelligence, Section 2.7.1, From
Thoughts to Deliberation, which I quote below:

-In the universe of bad TV shows, speaking the Epimenides Paradox ("This
sentence is false") to an artificial mind causes that mind to scream in
horror and collapse into a heap of smoldering parts.  This is based on a
stereotype of thought processes that cannot divert, cannot halt, and possess
no bottom-up ability to notice regularities across an extended thought
sequence.  Given how deliberation emerges from the thought level, it is
possible to imagine a sufficiently sophisticated, sufficiently reflective AI
that could naturally surmount the Epimenides Paradox.  Encountering the
paradox "This sentence is false" would probably indeed lead to a looping
thought sequence at first, but this would not cause the AI to become
permanently stuck; it would instead lead to categorization across repeated
thoughts (like a human noticing the paradox after a few cycles), which
categorization would then become salient and could be pondered in its own
right by other sequiturs.  If the AI is sufficiently competent at deductive
reasoning and introspective generalization, it could generalize across the
specific instances of If the statement is true, it must be false and If
the statement is false, it must be true as two general classes of thoughts
produced by the paradox, and show that reasoning from a thought of one class
leads to a thought of the other class; if so the AI could deduce - not just
inductively notice, but deductively confirm - that the thought process is an
eternal loop.  Of course, we won't know whether it really works this way
until we try it. 
-The use of a blackboard sequitur model is not automatically sufficient for
deep reflectivity; an AI that possessed a limited repertoire of sequiturs,
no reflectivity, no ability to employ reflective categorization, and no
ability to notice when a train of thought hasn't yielded anything useful for
a while, might still loop eternally through the paradox as the emergent but
useless product of the sequitur repertoire.  Transcending the Epimenides
Paradox requires the ability to perform inductive generalization and
deductive reasoning on introspective experiences.  But it also requires
bottom-up organization in deliberation, so that a spontaneous introspective
generalization can capture the focus of attention.  Deliberation must emerge
from thoughts, not just use thoughts to implement rigid algorithms.
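
A minimal sketch of the behavior described in that passage (an invented
sequitur rule and thought strings, not an implementation of LOGI): a naive
inference step that would loop forever on the paradox, plus a bottom-up
check that notices the repetition and turns the loop itself into an object
of attention.

def sequitur(thought):
    """The naive inference step that produces the eternal loop."""
    if thought == "the sentence is true":
        return "then the sentence must be false"
    if thought == "then the sentence must be false":
        return "the sentence is true"
    return None

thought = "the sentence is true"
seen = {}
for step in range(100):
    print(step, thought)
    if thought in seen:
        # Categorization across repeated thoughts: the loop itself becomes
        # an object of attention rather than being re-executed forever.
        print("meta-thought: steps", seen[thought], "through", step,
              "form a cycle; this line of reasoning never terminates")
        break
    seen[thought] = step
    thought = sequitur(thought)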

-Original Message-
From: William Pearson [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 21, 2008 5:42 PM
To: agi@v2.listbox.com
Subject: Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent
input and responses

On 21/04/2008, Ed Porter [EMAIL PROTECTED] wrote:
  So when people are given a sentence such as the one you quoted about
verbs,
  pronouns, and nouns, presuming they have some knowledge of most of the
words
  in the sentence, they will understand the concept that verbs are doing
  words.   This is because of the groupings of words that tend to occur in
  certain syntactical linguistic contexts, the ones that would be most
  associated with the types of experiences the mind would associates with
  doing would be largely word senses that are verbs and that the mind's
  experience and learned patterns most often proceeds by nouns or pronouns.
  So all this stuff falls out of the magic of spreading activation in a
  Novamente-like hierarchical experiential memories (with the help of a
  considerable control structure such as that envisioned for Novamente).

  Declarative information learned by NL gets projected into the same type
of
  activations in the hierarchical memory

How does this happen?  What happens when you try and project, This
sentence is false. into the activations of the hierarchical memory?
And consider that the whole of the english understanding is likely to
be in the hierarchical memory. That is the projection must be learnt.

 as would actual experiences that
  teaches the same thing, but at least as episodes, and in some patterns
  generalized from episodes, such declarative information would remain
linked
  to the experience of having been learned from reading or hearing from
other
  humans.
  So in summary, a Novamete-like system should be able to handle this
alleged
  problem, and at the moment it does not appear to provide an major
unanswered
  conceptual problem. 

My conversation with Ben about similar subject (words acting on the
knowledge of words) didn't get anywhere.

The conversation starting here -
http://www.mail-archive.com/agi@v2.listbox.com/msg09485.html

And I consider him the authority on Novamente-like systems, for now at
least.

Will


RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Ed Porter
Richard,  

I read your Complex Systems, Artificial Intelligence and Theoretical
Psychology article, and I still don't know what you are talking about
other than the Game of Life.  I know you make a distinction between Richard
and non-Richard complexity.  I understand computational irreducibility.  And
I understand that how complex a program is, in terms of its number of lines
is not directly related to how varied and unpredictable its output will be.

I would appreciate it, Richard, if you could explain what you mean by 
Richard complexity vs. non-Richard complexity. 

Ed Porter

-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 21, 2008 6:08 PM
To: agi@v2.listbox.com
Subject: Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent
input and responses

Ed Porter wrote:
 Richard, 
 
 There is no evidence you are more justified in laughing at my position
than
 I am in saying your complexity issues do not appear to represent a major
 unsolved conceptual issues.
 
 Remember I am not denying complexity issues don't exist.  Instead I am
 saying it is not clear they provide a major conceptual problem.  There are
 many tools for controlling the dynamic complexity in a Novamente-like
 system.  How long it will take to tune and refine them is an issue.  In
 WebMind Ben said parameter tuning turned out to be a substantial problem,
 particularly because of the slowness of the system on which they were
 exploring the parameter space.  So I think it will present an engineering
 challenge and require more thought to provide the dynamic control required
 for things like efficient inferencing.  But it is not clear such control
 issues will present a major conceptual problem.  
 
 And remember I have admitted you might, in fact, be correct, and that
 complexity may turn out to present a major conceptual problem --- although
I
 doubt it. 
 
 So you are treating my viewpoint with much less respect than you are
 treating mine and it is far from clear your greater certitude is at all
 justified by the evidence. Merely referring to Morton-Thiokol in no way
 proves that you like them were right when others were wrong.

Ed,

Everything you have said in this and the last few posts about the 
'complex systems issue' has nothing whatsoever to do with the complex 
systems problem that I have described -  there is simply no connection 
between the CSP and your reflected-back version of it.

That is not lack of respect for your viewpoint, it is a simple statement 
of fact.

In my anecdote about Morton Thiolokol, your contribution would be one of 
the people who present a complete misunderstanding of what I was 
talking about.

I do not laugh at your misunderstanding, I laugh at the general 
complacency;  the attitude that a problem denied is a problem solved.  I 
laugh at the tragicomedic waste of effort.



Richard Loosemore




Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Richard Loosemore

Vladimir Nesov wrote:

On Tue, Apr 22, 2008 at 2:07 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

 I do not laugh at your misunderstanding, I laugh at the general
complacency;  the attitude that a problem denied is a problem solved.  I
laugh at the tragicomedic waste of effort.



How confident are you that this only-complex-AI limitation applies in
reality? How much would you bet on it? I'm not convinced, and I think
that if you are convinced too much, you made wrong conclusions from
your data, unless you communicated too little of what formed your
intuition.



I am completely sure that it applies (although your phrasing makes me 
wonder if you have interpreted my exact worry accurately... I will have 
to come back to that).


I am confident because of this.  I have been trying to understand the 
relationship between theoretical models of thought (both natural and 
artificial) since at least 1980, and one thing I have noticed is that 
people devise theoretical structures that are based on the assumption 
that intelligence is not complex ... but then they use these structures 
in such a way that the resulting system is almost always complex.




Richard Loosemore



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Richard Loosemore

Ed Porter wrote:
Richard,  


I read your Complex Systems, Artificial Intelligence and Theoretical
Psychology article, and I still don't know what you are talking about
other than the Game of Life.  I know you make a distinction between Richard
and non-Richard complexity.  I understand computational irreducibility.  And
I understand that how complex a program is, in terms of its number of lines
is not directly related to how varied and unpredictable its output will be.

I would appreciate it, Richard, if you could explain what you mean by 
Richard complexity vs. non-Richard complexity. 



Maybe you should get back to me offlist about this.  I don't quite know what 
that means.


Did you read the blog post on this topic?  It was supposed to be more 
accessible than the paper.


Blog is at susaro.com



Richard Loosemore



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread Richard Loosemore

Vladimir Nesov wrote:

On Tue, Apr 22, 2008 at 2:28 AM, Derek Zahn [EMAIL PROTECTED] wrote:

 I'm not sure I have ever seen anybody successfully rephrase your complexity
argument back at you; since nobody understands what you mean it's not
surprising that people are complacent about it.



Derek,

I'll not paraphrase the argument itself, but the conclusion. Thinking
can't be designed from the ground up, little success after success, module
after module, elaboration and generalization. Instead, it can only be
built as an opaque mess, and in a clean laboratory it's not possible
for us miserable apes to invent it. But we have a working prototype,
brains, so by limiting the design by properties we know from studying
cognitive science, it's possible to leave few enough possibilities to
enumerate by (more or less) blind search. That is what Richard's
framework is supposed to do: you feed in the restrictions, and it
automatically tests a whole set of designs limited by such
restrictions. As a result, you experiment with restrictions and not
with individual designs. Within each restriction set, there are
designs that behave very differently, but the framework allows you to sort
out the weeds and luckily find some gemstones.




Hmmm... I detect a parody..?

That is not what I intended to say.




Richard Loosemore



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-21 Thread J. Andrew Rogers


On Apr 21, 2008, at 6:53 PM, Richard Loosemore wrote:
I have been trying to understand the relationship between  
theoretical models of thought (both natural and artificial) since at  
least 1980, and one thing I have noticed is that people devise  
theoretical structures that are based on the assumption that  
intelligence is not complex  but then they use these structures  
in such a way that the resulting system is almost always complex.



This is easily explained by the obvious fact that the definition of  
complex varies considerably across relevant populations, exacerbated  
in the case of AGI -- where it is arguably a germane element --  
because many (most?) researchers are using complex in a colloquial  
(read: meaningless) sense rather than one of its more rigorously  
defined senses, of which there are a few interesting ones.


Most arguments and disagreements over complexity are fundamentally  
about the strict definition of the term, or the complete absence  
thereof.  The arguments tend to evaporate if everyone is forced to  
unambiguously define such terms, but where is the fun in that.


J. Andrew Rogers
