Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-22 Thread J Storrs Hall, PhD
Thank you!  This feeds back into the feedback discussion, in a way, at a high 
level. There's a significant difference between research programming and 
production programming. The production programmer is building something which 
is (nominally) understood and planned ahead of time. The researcher is 
putting together something new to see if it works. For the production 
programmer, all the knowledge flows from the programmer into the system. For 
the researcher, the important knowledge is supposed to flow from the system 
back to the researcher.

This is important because AGIers are researchers (if we have any sense). We 
have a lot to learn about generally intelligent systems. But even more to the 
point is the fact that our systems themselves must be research programmers. 
To learn about a new thing, they must program themselves to be able to 
recognize, predict, and/or imitate it. So it's worth our time to watch 
ourselves programming because that's one thing our systems will have to do 
too.

As for the theory, I said I think there is one, not that I necessarily know 
what it is :-) However, you can begin with the observation that if your 
architecture is a network of sigmas, it's clearly necessary to provide the 
full context and sensory information to each sigma for it to record the 
appropriate trajectory in its local memory.

(Anyone interested: sigmas are explained in somewhat more detail in Ch. 13 of 
Beyond AI)
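
As a toy illustration of that observation, here is a minimal sketch of a sigma as a node that records (context, sensory) trajectories in its local memory. All structure here is guessed from the paragraph above, not taken from Beyond AI:

```python
# Hypothetical sketch only: a "sigma" node that receives the full context
# plus the current sensory input and records the trajectory locally, so it
# can later recognize, predict, or imitate what it has seen.

class Sigma:
    def __init__(self, name):
        self.name = name
        self.memory = []  # local trajectory of (context, sensory) pairs

    def observe(self, context, sensory):
        # Record the full situation so the local trajectory can later be
        # matched against new contexts.
        self.memory.append((dict(context), sensory))

    def predict(self, context):
        # Naive recall: return the sensory value recorded in the most
        # similar past context (toy similarity: count of shared items).
        if not self.memory:
            return None
        best = max(self.memory,
                   key=lambda m: len(set(m[0].items()) & set(context.items())))
        return best[1]

# A network of sigmas: every node is given the same full context and
# sensory stream, as the post argues is necessary.
network = [Sigma(f"sigma{i}") for i in range(3)]
for node in network:
    node.observe({"task": "reach", "phase": 1}, sensory=0.7)
print(network[0].predict({"task": "reach", "phase": 1}))  # -> 0.7
```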

On Monday 21 April 2008 09:47:53 pm, Derek Zahn wrote:
 Josh writes: You see, I happen to think that there *is* a consistent, 
general, overall theory of the function of feedback throughout the 
architecture. And I think that once it's understood and widely applied, a 
lot of the architectures (repeat: a *lot* of the architectures) we have 
floating around here will suddenly start working a lot better.

 Want to share this theory? :)

 Oh, by the way, of the ones I read so far, I thought your Variac paper was 
the most interesting one from AGI-08.  I'm particularly interested to hear 
more about sigmas and your thoughts on transparent, composable, and robust 
programming languages.  I used to think about some slightly related topics 
and thought more in terms of evolvability and plasticity (and did not 
consider opaqueness at all) but I think your approach to thinking about 
things is quite exciting.
  
  
 
 




RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-22 Thread Ed Porter
Vlad, 

Re your comment below, I would argue rapid intuitive decision making is
fundamental, because the often largely subconscious ability to quickly
decide which of multiple alternatives to focus attention on, and to include
in your behavior, is an essential component of much of human thought
and behavior.

Ed Porter

-Original Message-
From: Vladimir Nesov [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 22, 2008 1:04 AM
To: agi@v2.listbox.com
Subject: Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

On Tue, Apr 22, 2008 at 5:20 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Vlad,

  It is my belief that humans can do intuitive cost/benefit analysis without
  deliberation, although many forms of cost/benefit analysis do require
  deliberation.

  For example, a basketball player often looks around him and in one or two
  seconds makes a decision about whom to throw to, whether to shoot, or
  whether to make a move with the ball, based on an intuitive cost/benefit
  analysis.  My model of the brain is one of massive parallelism, in which
  many multi-level patterns are being matched at one time.  Thus when a
  basketball player scans around him, the various things he sees might
  activate, to various degrees, patterns of success and patterns of failure
  and risk associated with various patterns of behavior; the patterns for
  various behaviors could receive varying scores, and the equivalent of the
  basal ganglia could select the pattern with the best score for increased
  attention and finally action commitment.

  All this type of intuitive decision making could be made without anything
  approaching what we normally think of as deliberation.
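
A minimal sketch of the massively parallel scoring picture Ed describes: patterns are matched in parallel, each option accumulates a score from activated success and risk patterns, and a basal-ganglia-like selector commits to the best one. All names and numbers here are illustrative assumptions, not Ed's actual model:

```python
# Hypothetical sketch: winner-take-all selection over options whose
# success/risk pattern activations were gathered in parallel.

def intuitive_select(options):
    def score(opt):
        # Net evidence: activated success patterns minus activated
        # failure/risk patterns.
        return opt["success_activation"] - opt["risk_activation"]
    # The max() stands in for the basal ganglia committing attention
    # (and finally action) to the best-scoring pattern.
    return max(options, key=score)

options = [
    {"action": "pass to teammate", "success_activation": 0.8, "risk_activation": 0.3},
    {"action": "shoot",            "success_activation": 0.6, "risk_activation": 0.2},
    {"action": "drive with ball",  "success_activation": 0.7, "risk_activation": 0.5},
]
print(intuitive_select(options)["action"])  # -> "pass to teammate" (0.8 - 0.3)
```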


Agreed, but still I wouldn't call such a process fundamental.

-- 
Vladimir Nesov
[EMAIL PROTECTED]




Re: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Richard Loosemore

Derek Zahn wrote:

Richard Loosemore:

  I'll try to tidy this up and put it on the blog tomorrow.
 
I'd like to pursue the discussion and will do so in that venue after 
your post.
 
I do think it is a very interesting issue.  Truthfully I'm more 
interested in your specific program for how to succeed than this 
argument about why everybody else will fail, but I understand that they 
are linked.


I understand your eagerness for more positive info.  The main reason, 
though, that I stress this background reasoning is that in my experience 
people tend to misunderstand the positive proposal unless they 
understand exactly how the background arguments serve to motivate it.


More later.



Richard Loosemore




Re: [agi] Random Thoughts on Thinking...

2008-04-22 Thread A. T. Murray
Steve Richfield wrote:

 The process that we call thinking is VERY 
 different in various people. [...]
[...]
 Any thoughts?

 Steve Richfield

The post above -- real food for thought -- was the most 
interesting post that I have ever read on the AGI list.

Arthur T. Murray
--
http://mentifex.virtualentity.com/Mind.html 
http://mentifex.virtualentity.com/userman.html 
http://mind.sourceforge.net/mind4th.html 
http://mind.sourceforge.net/m4thuser.html 



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Richard Loosemore

Vladimir Nesov wrote:

On Tue, Apr 22, 2008 at 5:59 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Hmmm... I detect a parody..?

 That is not what I intended to say.



No, as horrible as it may sound, this is how I see the problem that
you are trying to address. If you can pinpoint some specific errors in
my description, without reiterating the whole description once again,
that would probably be helpful.



On a second reading, the description of my proposed paradigm is not that 
inaccurate; it just emphasizes some things and de-emphasizes others, 
thereby making the whole thing look weird.


I'll elaborate later.



Richard Loosemore



RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Derek Zahn
J Andrew Rogers writes: Most arguments and disagreements over complexity are 
fundamentally about the strict definition of the term, or the complete 
absence thereof. The arguments tend to evaporate if everyone is forced to 
unambiguously define such terms, but where is the fun in that.

I agree with this to a point at least.  My attempt to rephrase Richard's 
argument falters because I have not yet understood his use of the term 
'complexity'.  I'd prefer a rigorous definition but will settle for a better 
general understanding of what he means.  Despite his several attempts to 
describe his meaning, I have not yet been able to grasp exactly what counts 
as complex and what does not, and, for things in between, how to judge the 
degree of complexity.
 
 



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Richard Loosemore

J. Andrew Rogers wrote:


On Apr 21, 2008, at 6:53 PM, Richard Loosemore wrote:
I have been trying to understand the relationship between theoretical 
models of thought (both natural and artificial) since at least 1980, 
and one thing I have noticed is that people devise theoretical 
structures that are based on the assumption that intelligence is not 
complex  but then they use these structures in such a way that the 
resulting system is almost always complex.



This is easily explained by the obvious fact that the definition of 
"complex" varies considerably across relevant populations, exacerbated 
in the case of AGI -- where it is arguably a germane element -- because 
many (most?) researchers are using "complex" in a colloquial (read: 
meaningless) sense rather than one of its more rigorously defined 
senses, of which there are a few interesting ones.


Most arguments and disagreements over complexity are fundamentally 
about the strict definition of the term, or the complete absence 
thereof.  The arguments tend to evaporate if everyone is forced to 
unambiguously define such terms, but where is the fun in that.


It is correct to say that there is disagreement about what complexity 
means, but that is why I went to so much trouble to give a precise 
definition of it, and then to use that precise definition consistently.


The last thing I want to do is engage in fruitless debates with other 
complex systems people about what exactly it means.


But then, going back to your first comment above, no, you cannot use 
other people's confusion about the meaning of the term complexity to 
explain why models of thinking start off being designed as if they were 
not complex, but then get used in ways that make the overall system 
complex.  That observation is pretty much independent of the definition 
you choose, and anyway it happens within my definition, so it still 
needs to be explained.


The explanation, of course, is that intelligent systems really are 
(partially) complex, but everyone is trying to kid themselves that they 
are not, to make their research easier.



Richard Loosemore



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-22 Thread Mark Waser

how I presume a Novamente system would work


I think that we all need to be more careful about our 
presumptions/assumptions.  I think that many important conceptual pieces are 
glossed over and lost this way.


Novamente currently has absolutely no sign of and/or detailed plans for 
*numerous* critical conceptual pieces.  They have an awesome low-level 
discovery architecture but currently have nothing that demonstrates the 
ability to modularize, scale, or even analogize.  I fully expect someone 
else to leapfrog Novamente, which seems to be perpetually stuck at the lowest 
levels.


  But until we actually try building systems like Novamente or larger 
versions of Joscha Bach's MicroPsi architecture, we won't know for sure 
exactly how complex getting the bottom-up, top-down, and lateral 
implications and constraints to all work together well will be.


Thank you.  My point precisely.

I'm hoping and expecting it will just be a quite complicated AI 
engineering task, made much easier by cheap hardware, which will make 
searching the space of possible solutions much cheaper and faster --- but it 
might become a full-blown major conceptual piece.


I'm not so sanguine.  I fully expect that this will be several major 
conceptual pieces that we don't even know how to START on yet.





RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Derek Zahn
Richard:  I get tripped up on your definition of complexity:
 
  A system contains a certain amount of complexity in it if it
  has some regularities in its overall behavior that are governed
  by mechanisms that are so tangled that, for all practical purposes,
  we must assume that we will never be able to find a closed-form
  explanation of how the global arises from the local.

on figuring out what counts as a regularity in overall behavior.

Consider a craps table.  The trajectories of the dice would seem to have 
global regularities (for which craps players and normal people have words 
and phrases, like bouncing off the back, flying off the table, or whatever).  
Our ability to create concepts around this activity would seem to imply the 
existence of global regularities (finding them is what we do when we make 
concepts).  Yet the behavior of those regularities is not just physical law 
but the specific configuration of the felt, the chips, the wind, and so 
forth, and all that data makes a closed-form explanation impractical.

Yet, I don't get the sense that this is what you mean by a complex system.  
If it is, your contention that they are rare is certainly not correct, since 
many such examples can easily be found.  This aspect of complexity 
illustrates the butterfly effect often used in discussions of complexity.
 
I'm not trying to be difficult; it's crucial for me to understand what you mean 
(versus my interpretation of what others have meant or my own internal 
definitions) if I am to follow your argument.
 



Re: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Mark Waser
I'm not sure I have ever seen anybody successfully rephrase your 
complexity argument back at you; since nobody understands what you mean 
it's not surprising that people are complacent about it.


Bit of an overgeneralization, methinks:  this list is disproportionately 
populated with people who satisfy the conjunctive property [do not 
understand it] and [do like to chat about AGI].  That is no criticism, but 
it makes it look like nobody understands it.


I understand what Richard means by his complexity argument and see his point 
though I believe that it can be worked around if you're aware of it -- the 
major problem being, as Richard points out, most AGI systems developers 
don't see it as necessary to work around.


As I have said before, I do get people contacting me offlist (and 
off-blog, now) who do understand it, but simply do not feel the need to 
engage in list-chat.


. . . . because many people on this list are more invested in being right 
than in being educated.  I think that this argument is a lost cause on this 
list and generally choose not to waste time on lost causes -- but I'm in an 
odd mood, so . . . .


If you just randomly slap together systems that have those kinds of 
mechanisms, there is a tendency for complex, emergent properties to be 
seen in the system as a whole.  Never mind trying to make the system 
intelligent, you can make emergent properties appear by generating random, 
garbage-like relationships between the elements of a system.


   Emergent is a bad word.  People do not understand it.  They think that 
emergent normally means complex, wonderful, and necessarily correct.  They 
are totally incorrect.


But now here is the interesting thing:  this observation (about getting 
complexity/emergence out if you set the system up with ugly, tangled 
mechanisms) is consistent with the reverse observation:  in nature, the 
science we have studied so far in the last three hundred years has been 
based on simple mechanisms that (almost always) do not involve ugly, 
tangled mechanisms.


   Nature likes simple.  Simple producing complex effects is what nature is 
all about.  Complex producing simple effects is human stupidity and prone 
to dramatic failure.


   Richard tends not to make the point but the most flagrant example of his 
complexity problem is Ben Goertzel's stories about trying to tune the 
numerous parameters for his various AI systems.  I think that Richard is 
entirely in the right here but have been unsuccessful in repeated attempts 
to convince Ben of this.  Yes, you *do* need tunable parameters in an AI 
system -- but they should not be set up in such a way that they can 
oscillate to chaotic failure.


To cut a long story short, it turns out that the Inference Control Engine 
is more important than the inference mechanism itself.


   Many people agree with this, but . . .

The actual behavior of the system is governed, not by the principles of 
perfectly reliable logic, but by a messy, arbitrary inference control 
engine, and the mechanisms that drive the latter are messy and tangled.


   This is where Richard and I part ways.  I think that inference is 
currently messy and arbitrary and tangled because we don't understand it 
well enough.  This may be a great answer to Ed Porter's question of what is 
conceptually missing from current AGI attempts.  I think that inference 
control will turn out to be relatively simple in design as well -- yet 
possess tremendously complex effects, just like everything else in nature.
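
As a sketch of what "relatively simple in design" inference control might mean, here is a toy agenda-based control engine: inference steps queued by a handful of simple heuristics, with the complex overall behavior left to emerge from many small scoring decisions. Everything here (names, scoring rule, tasks) is a hypothetical illustration, not Novamente's or anyone else's actual engine:

```python
import heapq

# Toy inference control engine: pick the next inference step by a simple
# fixed priority rule, within a computation budget.

def priority(task):
    # Expected value of the step, discounted by cost; negated because
    # heapq pops the smallest element first.
    return -(task["utility"] * task["confidence"] - 0.1 * task["cost"])

def run_control_engine(tasks, budget):
    agenda = [(priority(t), i, t) for i, t in enumerate(tasks)]
    heapq.heapify(agenda)
    executed = []
    while agenda and budget > 0:
        _, _, task = heapq.heappop(agenda)
        executed.append(task["name"])  # "execute" the inference step
        budget -= task["cost"]
    return executed

tasks = [
    {"name": "deduce A->C from A->B, B->C", "utility": 0.9, "confidence": 0.8, "cost": 1},
    {"name": "analogize X to Y",            "utility": 0.7, "confidence": 0.4, "cost": 3},
    {"name": "check K for contradictions",  "utility": 0.5, "confidence": 0.9, "cost": 1},
]
print(run_control_engine(tasks, budget=4))
```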


Now, wherever you go in AI, I can tell the same story.  A story in which 
the idealistic AI researchers start out wanting to build a thinking system 
in which there is not supposed to be any arbitrary mechanism that might 
give rise to complexity, but where, after a while, some ugly mechanisms 
start to creep in, until finally the whole thing is actually determined by 
complexity-inducing mechanisms.


   Actually, this is not just a complexity argument.  It's really an 
argument about how many AGI researchers want to start tabula rasa -- but 
then find that you can't do everything at once.  Some researchers then start 
throwing in assumptions and quick fixes until those things dominate the 
system while others are smart enough to just reduce the system size and 
scope.


5. Therefore we have no methods for building thinking machines, since 
engineering discipline does not address how to build complex devices. 
Building them as if they are not complex will result in poor behavior; 
squeezing out the complexity will squeeze out the thinking, and leaving 
it in makes traditional engineering impossible.


Not a bad summary, but a little oddly worded.


   Huh?  Why doesn't engineering discipline address building complex 
devices?  Engineering discipline can address everything (just like science) 
as long as you're willing to open up your eyes and address reality. 
Richard's arguments are only cogent if an AI researcher is trying to ignore 

Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Mark Waser

How confident are you that this only-complex-AI limitation applies in
reality? How much would you bet on it? I'm not convinced, and I think
that if you are convinced too much, you made wrong conclusions from
your data, unless you communicated too little of what formed your
intuition.


I am completely sure that it applies (although your phrasing makes me 
wonder if you have interpreted my exact worry accurately... I will have to 
come back to that).


I am also sure that it applies but don't believe that it is a huge problem 
unless you ignore it.  Remember, gravity with three bodies is a complex 
problem -- but it is relatively easy to characterize and solve to reasonable 
limits (just don't try to make plans too far in the future without making 
periodic readings to ensure that reality still matches your model). 
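
A toy sketch of that three-body point: an imperfect model of the system drifts away from "reality", and periodic readings pull it back within reasonable limits. The integrator, constants, and the 1% model error are all arbitrary assumptions for illustration:

```python
# Crude planar three-body simulation: "real" and "model" share the same
# physics, but the model integrates with a 1% timing error, standing in
# for any imperfect model of a chaotic system.

G = 1.0
mass = [1.0, 1.0, 1.0]

def accelerations(pos):
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * mass[j] * dx / r3
            acc[i][1] += G * mass[j] * dy / r3
    return acc

def step(pos, vel, dt):
    acc = accelerations(pos)
    for i in range(3):
        vel[i][0] += acc[i][0] * dt; vel[i][1] += acc[i][1] * dt
        pos[i][0] += vel[i][0] * dt; pos[i][1] += vel[i][1] * dt

def run(reading_interval):
    real = [[1.0, 0.0], [-1.0, 0.0], [0.0, 0.5]]
    vr = [[0.0, 0.3], [0.0, -0.3], [0.3, 0.0]]
    model = [p[:] for p in real]
    vm = [v[:] for v in vr]
    for t in range(2000):
        step(real, vr, 0.001)
        step(model, vm, 0.00101)  # imperfect model: 1% timing error
        if reading_interval and t % reading_interval == 0:
            # Periodic "reading": re-ground the model on reality.
            model = [p[:] for p in real]
            vm = [v[:] for v in vr]
    return sum(abs(real[i][k] - model[i][k]) for i in range(3) for k in (0, 1))

print(f"model error, never re-grounded:       {run(0):.4f}")
print(f"model error, reading every 200 steps: {run(200):.4f}")
```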





Re: [agi] Random Thoughts on Thinking...

2008-04-22 Thread Mark Waser
 Any thoughts?

My first thought is that you put way too much in a single post . . . . 

 The process that we call thinking is VERY different in various people.

Or even markedly different from one occasion to the next in the same person.  I 
am subject to a *very* strong Seasonal Affective Disorder effect (call it 
seasonal-cycle manic-depression, though not quite that extreme).  After many 
years, I recognize that I think *entirely* differently in the summer as opposed 
to the middle of winter.

 Once they adopted an erroneous model and stored some information based on 
 it, they were stuck with it and its failures for the remainder of their 
 lives. 

While true in many (and possibly the majority of cases), this is nowhere near 
universally true.  This is like saying that you can't unlearn old, bad habits.

 Superstitious learning is absolutely and theoretically unavoidable.

No.  You are conflating multiple things here.  Yes, we always start learning by 
combination -- but then we use science to weed things out.  The problem is -- 
most people aren't good scientists or cleaners.

 Certainly, no one has suggested ANY reason to believe that the great 
 ultimate AGI of the long distant future will be immune to it.

I believe that, with the ability to have its beliefs transparent and open to 
inspection by itself and others, the great ultimate AGI of the near future will 
be able to perform scientific clean-up *much* better than you can possibly 
imagine.

Mark

  - Original Message - 
  From: Steve Richfield 
  To: agi@v2.listbox.com 
  Sent: Monday, April 21, 2008 11:54 PM
  Subject: [agi] Random Thoughts on Thinking...


  The process that we call thinking is VERY different in various people. In 
my own case, I was mercury poisoned (which truncates neural tubes) as a baby, 
was fed a low/no fat diet (which impairs myelin growth), and then at the age of 
5, I had my metabolism trashed by general anesthesia (causing brain fog). I 
have since corrected my metabolic problems, I now eat LOTS of fat, and I 
flushed the mercury out of my system.

  However, the result of all of this was dramatic - I tested beyond genius in 
some ways (first tested at the age of 6), and below average in others. I could 
solve complex puzzles at lightning speed, but had the memory of an early 
Alzheimer's patient. However, one thing was quite clear - whatever it was that 
went on behind my eyeballs was VERY different from other people. No, I don't 
mean better or worse than others, but completely different. My horrible 
memory FORCED me to resort to understanding many things that other people 
simply remembered, as at least for me, those understandings lasted a lifetime, 
while my memory would probably be gone before the sun went down. This pushed me 
into a complex variable-model version of reality, from which I could see that 
nearly everyone operated from fixed models. Once they adopted an erroneous 
model and stored some information based on it, they were stuck with it and 
its failures for the remainder of their lives. This apparently underlies most 
religious belief, as children explain the unknown in terms of God, and are then 
stuck with this long after they realize that neither God nor Santa Claus can 
exist as conscious entities.

  Superstitious learning is absolutely and theoretically unavoidable. 
Certainly, no one has suggested ANY reason to believe that the great ultimate 
AGI of the long distant future will be immune to it. Add some trusted 
misinformation (that we all get) and you have the makings of a system that is 
little better than us, other than it will have vastly superior abilities to 
gain superstitious learning and spout well-supported but erroneous conclusions 
based on it.

  My efforts on Dr. Eliza were to create a system that was orthogonal to our 
biologically-based problem solving abilities. No, it usually did NOT solve 
problems in the traditional way of telling the user what is broken (except in 
some simplistic cases where this was indeed possible), but rather it focused on 
just what it was that the user apparently did NOT know to have such a problem. 
Inform the user of whatever it is that they did not know, and their problem 
will evaporate through obviation - something subtly different from being 
solved. Of course, some of that knowledge will be wrong, but hopefully 
users have the good sense to skip over Steve's snake oil will cure all 
illnesses and consider other facts.

  One job I had was as the in-house computer and numerical analysis consultant 
for the Physics and Astronomy departments of a major university. There it 
gradually soaked in that the symbol manipulation of Algebra and higher 
mathematics itself made some subtle mis-assumptions that often led people 
astray. For example, if you have a value with some uncertainty (as all values 
do) through a function with a discontinuity (as many interesting functions 
have); when the range of uncertainty includes the 
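
The point about a value's uncertainty range straddling a discontinuity can be illustrated with a small, entirely hypothetical sketch: the usual "nominal value plus error bar" symbol manipulation breaks down because the output splits into two distinct regimes rather than one value with a spread:

```python
import random

def f(x):
    return 0.0 if x < 1.0 else 10.0  # step discontinuity at x = 1

nominal = 0.98   # uncertainty range [0.93, 1.03] crosses the step
samples = [f(nominal + random.uniform(-0.05, 0.05)) for _ in range(10_000)]
low = sum(1 for s in samples if s == 0.0)
print(f"f(nominal) = {f(nominal)}, but the samples split: "
      f"{low} land below the step, {len(samples) - low} above it")
```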

FW: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? ---re Loosemore's complexity argument

2008-04-22 Thread Ed Porter
I am re-posting this because I first sent it out an hour ago and it is not
yet showing on my email

-Original Message-
RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? ---re Loosemore's
complexity argument

Richard,  

I read the article in your blog (http://susaro.com/) cited below entitled
The Complex Systems Problem (Part 1).  

I think it contains some important food for thought --- but it is so 
one-sided as to reduce its own credibility.

You don't mention that there are many relatively stable Richard-complex
systems that have proven themselves to function in a relatively reliable ---
although not always desirable --- way, over thousands of years.  

Take market economies.  They have shown surprising stability --- despite
having suffered many perturbations, such as wars and famines --- over
thousands of years in many very different settings --- varying from ancient
Rome or Han China --- to barter economies in primitive cultures --- to
modern financial markets --- to opinion markets --- to markets used in AI
systems for attention allocation.  

As people from the Santa Fe Institute have pointed out, economies show
amazing emergent effects --- dealing with complex issues like allocating
resources, producing chains of suppliers for ingredients and parts at
various stages along the production process, and determining who has which
job --- much better than any planner.  And they involve hundreds to billions
of independent actors, each with non-linear transactions --- such as
decisions to buy or sell --- with many other actors.

This does not mean markets are without disastrous instabilities --- just
that the damage of their instabilities is minor compared to the overall
benefit of their operation.  And now that we are beginning to learn how to
better control their instability, they are even less unstable than they have
generally been in the past. (Although currently the world markets are
cruising for a bruising because of things such as America's insane
borrowing, and the massive percentage of our equity that has gone into the
hands of speculative and manipulative hedge funds.)

Or take the brain itself.  It is a complex system and yet it remains
relatively stable within reasonable bounds over the vast majority of the
lifetimes of the billions of people who have lived. In large part it does so
because of mechanisms for damping its behavior, and because of something
equivalent --- in the basal ganglia and thalamus --- to markets in which
competing thoughts bid for the allocation of the resource of attention and
the potential for spreading activation.

You don't mention that multiple AI and brain simulation programs --- that
have many or all of the features you imply are almost certain to produce
chaos --- have been run without such chaos.  

You don't mention that you, yourself, agreed in a response to a previous
email from me months ago that Hofstadter's Copycat is, to a certain extent,
a Richard-complex program, and yet it has shown itself to be quite reliable
in producing analogies that appear in some way appropriate.

And, finally, I found it odd that you ended this article citing Ben Goertzel
as your major evidence that AGI systems such as the one he is designing are
almost certain to run into disastrous complexity gotchas, when he, himself,
does not agree --- and you failed to point that out in your article.

SO, SINCE YOUR ANALYSIS TOTALLY FAILS TO DISCUSS THE OTHER SIDE OF THE
ARGUMENT IT IS MAKING, IT HAS TO BE TAKEN AS A LESS THAN DEFINITIVE
DISCUSSION OF ITS SUBJECT.

Ed Porter

P.S. Unfortunately, because of work, this is the end of my posting to this
list for at least today, and perhaps multiple days.  But I hope the rest of
you carry on, and I will try to at least read all the posts in this thread.

 

-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 21, 2008 10:02 PM
To: agi@v2.listbox.com
Subject: Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent
input and responses

Ed Porter wrote:
 Richard,  
 
 I read your Complex Systems, Artificial Intelligence and Theoretical
 Psychology article, and I still don't know what you are talking about
 other than the game of life.  I know you make a distinction between
 Richard and non-Richard complexity.  I understand computational
 irreducibility.  And I understand that how complex a program is, in terms
 of its number of lines, is not directly related to how varied and
 unpredictable its output will be.
 
 I would appreciate it, Richard, if you could explain what you mean by 
 Richard complexity vs. non-Richard complexity. 

[?]

Maybe you should get to me offlist about this.  I don't quite know what 
that means.

Did you read the blog post on this topic?  It was supposed to be more 
accessible than the paper.

Blog is at susaro.com



Richard Loosemore


Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-22 Thread Richard Loosemore

Derek Zahn wrote:

Richard:  I get tripped up on your definition of complexity:
 
  A system contains a certain amount of complexity in it if it

  has some regularities in its overall behavior that are governed
  by mechanisms that are so tangled that, for all practical purposes,
  we must assume that we will never be able to find a closed-form
  explanation of how the global arises from the local.

on figuring out what counts as a regularity in overall behavior. 

Consider a craps table.  The trajectories of the dice would seem to have 
global regularities (for which craps players and normal people have 
words and phrases, like bouncing off the back, flying off the table, 
or whatever).  Our ability to create concepts around this activity would 
seem to imply the existence of global regularities (finding them is what 
we do when we make concepts).  Yet the behavior of those regularities is 
not just physical law but the specific configuration of the felt, the 
chips, the wind, and so forth, and all that data makes a closed-form 
explanation impractical.
 
Yet, I don't get the sense that this is what you mean by a complex 
system.  If it is, your contention that they are rare is certainly not 
correct, since many such examples can easily be found.  This aspect of 
complexity illustrates the butterfly effect often used in discussions 
of complexity.
 
I'm not trying to be difficult; it's crucial for me to understand what 
you mean (versus my interpretation of what others have meant or my own 
internal definitions) if I am to follow your argument.


Okay, I will respond to your questions on two fronts (!) - I just posted 
a reply to your comment on the blog, too.


In the above, you mention butterfly effects.  This is not a mainstream 
example of complexity, it is chaos, which is not exactly the same thing.


More generally, you cannot say that a system is complex by itself; it is 
complex with respect to a particular regularity in its behavior.


The solar system, for example, is not complex:  the planets move in 
wonderfully predictable orbits.


BUT... actually the solar system *is* complex, because Pluto's behavior 
is unstable, and every once in a while it comes in and messes with 
everyone else.


So if the solar system remains utterly predictable for a hundred million 
years, and then Pluto goes AWOL for a few years, what is it?  It is 
partially complex, with just a tiny degree of complexity superimposed on 
otherwise non-complex behavior.  We cannot give a black and white answer 
to the question is it complex?.


Richard Loosemore



Re: [agi] Re: Language learning

2008-04-22 Thread J. Andrew Rogers


On Apr 22, 2008, at 7:17 AM, Mark Waser wrote:
In my experience it is not so much that they sound the same but that 
we don't know how to say them (in terms of mouth mechanics) such that 
we can isolate the difference between sounds that would have been in 
the range of a single phoneme in English.


No.  We have a Thai exchange student this year.  There are words 
that she swears are different but that sound exactly the same to me 
(and the rest of the family).

Precisely my point.  They sound exactly the same until you understand  
the mechanics of the sound generation, at which point you have a frame  
of reference for recognizing the differences.  The differences are  
there, you are just not using them as a means of discernment because  
you have no knowledge of which differences are important for  
discernment.  This is why it is futile and silly to use sound examples  
to teach someone a difference that we have already established they  
cannot isolate.  On the other hand, the phoneme generation mechanics  
are relatively unambiguous.


I could never hear many sounds until I figured out what they were  
doing to create the sound that was different from how I created the  
sound.  Once I figured that out, it became relatively easy to hear the  
difference because I knew what to listen for.


Tai-Kadai languages (like Thai) tend to be particularly difficult 
for native English speakers because they tend to rely heavily on 
complex use of all the phonetic possibilities that English speakers do 
not.  However, having delved fairly deeply into one such language myself, 
it is easier than it seems at first, once you figure it out.


J. Andrew Rogers



RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Derek Zahn
Mark Waser: Huh? Why doesn't engineering discipline address building complex 
devices? 
 
Perhaps I'm wrong about that.  Can you give me some examples where engineering 
has produced complex devices (in the sense of complex that Richard means)? 
 
 



Re: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Mark Waser
Computers.  Anything that involves aerodynamics.
  - Original Message - 
  From: Derek Zahn 
  To: agi@v2.listbox.com 
  Sent: Tuesday, April 22, 2008 5:20 PM
  Subject: RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT 
ARE THE MISSING ...]


  Mark Waser:

   Huh? Why doesn't engineering discipline address building complex devices? 
   
  Perhaps I'm wrong about that.  Can you give me some examples where 
engineering has produced complex devices (in the sense of complex that Richard 
means)? 
   
   


Powered by Listbox: http://www.listbox.com


RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Derek Zahn
Me: Can you give me some examples where engineering 
 has produced complex devices (in the sense of complex 
 that Richard means)? 
Mark: Computers.  Anything that involves aerodynamics.
 
Richard, is this correct?  Are human-engineered airplanes complex in the sense 
you mean?
 



Re: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Mark Waser
I don't know what is going to be more complex than a variable-geometry-wing 
aircraft like an F-14 Tomcat.  Literally nothing can predict its aerodynamic 
behavior.  The avionics are purely reactive because its future behavior cannot 
be predicted with any certainty even at computer speeds -- yet its behavior 
envelope is small enough to be safe, provided you do have computer speeds 
(though no human can fly it unaided).
  - Original Message - 
  From: Derek Zahn 
  To: agi@v2.listbox.com 
  Sent: Tuesday, April 22, 2008 6:00 PM
  Subject: RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT 
ARE THE MISSING ...]


  Me:

   Can you give me some examples where engineering 
   has produced complex devices (in the sense of complex 
   that Richard means)? 

  Mark:

   Computers.  Anything that involves aerodynamics.
   
  Richard, is this correct?  Are human-engineered airplanes complex in the 
sense you mean?
   




Re: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Richard Loosemore

Derek Zahn wrote:

Me:

  Can you give me some examples where engineering
  has produced complex devices (in the sense of complex
  that Richard means)? 


Mark:

  Computers.  Anything that involves aerodynamics.
 
Richard, is this correct?  Are human-engineered airplanes complex in the 
sense you mean?


Generally speaking, no, not in a substantial enough way.

Which means that there is a certain amount of unpredictability in some 
details, and there are empirical factors that you need to use (tables of 
lift coefficients, etc.), but beyond these empirical factors there is 
little impact of the complexity.


The amount of complexity is almost trivial, compared with a system in 
which all the components are interacting with memory, development, 
nonlinearity, etc etc etc.


Don't forget that ALL systems are complex if you push them far enough, 
so it makes no sense to ask is system X complex?.  You can only ask 
how much complexity, and what role it plays in the system.





Richard Loosemore



RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Derek Zahn
Mark Waser:



 I don't know what is going to be more complex than a variable-geometry-wing 
 aircraft like an F-14 Tomcat.  Literally nothing can predict its aerodynamic 
 behavior.  The avionics are purely reactive because its future behavior 
 cannot be predicted with any certainty even at computer speeds -- yet its 
 behavior envelope is small enough to be safe, provided you do have computer 
 speeds (though no human can fly it unaided).
 
I agree that this is a very sensible way to think about being complex and it 
is certainly similar to the way I think about it myself.  My embryonic 
understanding of Richard's argument suggests to me that he means something 
else, though.  If not, traditional engineering methods are often pretty good at 
taming complexity as long as they take the range of possible system states into 
account (which is what you have been saying all along).
 
Since I'm trying (with limited success) to understand his point of view, I 
might suggest that (from the point of view of his argument), the global 
regularities of the aircraft (its flight characteristics) DO have a 
sufficiently-efficacious small theory in terms of the components (the aircraft 
body, including the moveable bits).  In fact, it is exactly that small theory 
which is embedded in the control program.  Since the global regularities 
(straight-line flight, turns, and so on) are sufficiently predictable from the 
local interactions of the control surfaces with the air, the aircraft is not 
complex *in the sense that Richard is talking about*.
 
Now I suppose I've pissed everybody off, but I'm really just trying to 
understand Richard's definitions so I can follow his argument.
 



Re: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Mark Waser
Richard, is this correct?  Are human-engineered airplanes complex in the 
sense you mean?


Generally speaking, no, not in a substantial enough way.

Which means that there is a certain amount of unpredictability in some 
details, and there are empirical factors that you need to use (tables of 
lift coefficients, etc.), but beyond these empirical factors there is 
little impact of the complexity.


Richard, you're obviously not familiar with high-speed aerodynamics.  There 
is not a certain amount of unpredictability.  It is out-and-out virtually 
unconstrained chaos.  There are *no* nice little tables of lift 
coefficients.  A human being cannot operate an F-14 by themselves.  A 
computer cannot operate an F-14 unless it is receiving sub-millisecond 
updates because the behavior is too chaotic to predict.  Yet, like 
everything else in nature, this seeming chaos is the result of a relatively 
small number of relatively simple rules (and a huge butterfly effect).  An 
F-14 in flight makes a system in which all the components are interacting 
with memory, development, nonlinearity, etc etc etc. look nearly trivial 
because virtually *anything* can affect it (temperature thermoclines, 
radiant heat differences because of changes in the land below, wind speed, 
clouds, even the passage of migratory birds) -- yet the behavior is entirely 
bounded enough for a fast reacting computer to manage it.


How is this not complex (according to your definition)?

The amount of complexity is almost trivial, compared with a system in 
which all the components are interacting with memory, development, 
nonlinearity, etc etc etc.


I believe that the pieces of intelligence can be uncoupled far more than 
you're ever going to be able to uncouple the factors hitting an aircraft at 
transonic speeds.


Don't forget that ALL systems are complex if you push them far enough, so 
it makes no sense to ask is system X complex?.  You can only ask how 
much complexity, and what role it plays in the system.


My point exactly. 





RE: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Derek Zahn
Richard Loosemore: it makes no sense to ask is system X complex?. You can 
only ask  how much complexity, and what role it plays in the system.
 
Yes, I apologize for my sloppy language.  When I say is system X complex? 
what I mean is whether the RL-complexity of the system is important in 
describing the behaviors of interest under the operating conditions being 
discussed -- in particular, whether the global behaviors have an effective small 
theory expressed in terms of local components and their interactions -- because 
my current understanding is that what you mean by complexity is the extent to 
which no such small theory is available.
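
One way to make that "small theory" reading precise, offered only as an informal formalization of my own wording and not as Richard's definition: measure the RL-complexity of a global regularity by how badly the best size-bounded local-to-global theory predicts it:

```latex
% Informal formalization (an assumption, not Richard's definition):
% a global regularity R of system S is RL-complex to the extent that
% every "small" theory T predicting R from S's local rules fails.
\[
  C_k(R \mid S) \;=\; \min_{T \,:\, |T| \le k}
      \mathrm{Err}\bigl(T(\text{local rules of } S),\, R\bigr)
\]
% If C_k stays large for every modest theory-size bound k, then no
% closed-form, local-to-global explanation exists for practical purposes.
```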
 
 



Re: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Richard Loosemore

Derek Zahn wrote:

Mark Waser:

  I don't know what is going to be more complex than a 
variable-geometry-wing aircraft like an F-14 Tomcat.  Literally nothing can 
predict its aerodynamic behavior.  The avionics are purely reactive because 
its future behavior cannot be predicted with any certainty even at computer 
speeds -- yet its behavior envelope is small enough to be safe, provided you 
do have computer speeds (though no human can fly it unaided).
 
I agree that this is a very sensible way to think about being complex 
and it is certainly similar to the way I think about it myself.  My 
embryonic understanding of Richard's argument suggests to me that he 
means something else, though.  If not, traditional engineering methods 
are often pretty good at taming complexity as long as they take the 
range of possible system states into account (which is what you have 
been saying all along).
 
Since I'm trying (with limited success) to understand his point of view, 
I might suggest that (from the point of view of his argument), the 
global regularities of the aircraft (its flight characteristics) DO have 
a sufficiently-efficacious small theory in terms of the components (the 
aircraft body, including the moveable bits).  In fact, it is exactly 
that small theory which is embedded in the control program.  Since the 
global regularities (straight-line flight, turns, and so on) are 
sufficiently predictable from the local interactions of the control 
surfaces with the air, the aircraft is not complex *in the sense that 
Richard is talking about*.
 
Now I suppose I've pissed everybody off, but I'm really just trying to 
understand Richard's definitions so I can follow his argument.


I read this after replying to Mark's later comment.

You have summarized exactly what I said there.

It is most important that, when answering these questions about whether 
or not system X is complex, we keep in mind that we have to choose our 
level of description and then stick to it.


So in this case the system as a whole is not complex.  A component of it 
is complex (though not in a very demanding way, compared with many 
complex systems), but if we accidentally slip from discussion of one to 
discussion of the other, things do get confused.


So Mark is right to see complexity, but that is one level down.



Richard Loosemore



Re: Thoughts on the Zahn take on Complex Systems [WAS Re: [agi] WHAT ARE THE MISSING ...]

2008-04-22 Thread Richard Loosemore

Mark Waser wrote:
Richard, is this correct?  Are human-engineered airplanes complex in 
the sense you mean?


Generally speaking, no, not in a substantial enough way.

Which means that there is a certain amount of unpredictability in some 
details, and there are empirical factors that you need to use (tables 
of lift coefficients, etc.), but beyond these empirical factors there 
is little impact of the complexity.


Richard, you're obviously not familiar with high-speed aerodynamics.  
There is not a certain amount of unpredictability.  It is out-and-out 
virtually unconstrained chaos.  There are *no* nice little tables of 
lift coefficients.  A human being cannot operate an F-14 by themselves.  
A computer cannot operate an F-14 unless it is receiving sub-millisecond 
updates because the behavior is too chaotic to predict.  Yet, like 
everything else in nature, this seeming chaos is the result of a 
relatively small number of relatively simple rules (and a huge butterfly 
effect).  An F-14 in flight makes a system in which all the components 
are interacting with memory, development, nonlinearity, etc etc etc. 
look nearly trivial because virtually *anything* can affect it 
(temperature thermoclines, radiant heat differences because of changes 
in the land below, wind speed, clouds, even the passage of migratory 
birds) -- yet the behavior is entirely bounded enough for a fast 
reacting computer to manage it.


How is this not complex (according to your definition)?


Remember that the strict definition of complexity asks whether a 
theory can be found to predict the overall behavior.


In this case, the engineers DO have a theory, because they were able to 
build a flight control computer to make sensible adaptations to overcome 
the instability of the system.  If they did not have such a theory, they 
would not have been able to write any flight control software at all.


The system does indeed have some complexity in it (all systems do, 
remember), but the engineers found enough predictability in the system 
that they were able to write the control software and treat the 
complexity as a noise signal that had to be compensated for.  So at the 
most important level of description, the system is not complex.


My point is that, to be able to make the plane fly straight, the 
engineers did not have to second-guess anything complex:  they did 
not have to make any predictions about whether a particular bit of the 
plane was going to exhibit [Behavior A]; they just had to wait to see 
which behavior was going to turn up, then make the appropriate reaction 
to it (and the engineers know what the appropriate reaction is, of 
course).  The engineers are not second-guessing the complexity, they are 
factoring it out.  They are making it irrelevant by simply compensating 
for it.  They are turning it into a noise signal.


So the plane's behavior does not depend on the complexity in any way, 
because the whole point of the flight control computer is to watch the 
complex behavior like crazy (several times a millisecond, as you say) 
and simply counteract it.
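
A toy sketch of that "watch and counteract" pattern: a high-rate reactive loop that never predicts the disturbance, it just measures the deviation and cancels it, turning the complexity into a noise signal. The dynamics and gain are invented for illustration, not taken from any real flight control system:

```python
import random

target = 0.0
state = 0.0
gain = 0.9  # proportional correction applied each tick

for tick in range(5000):  # imagine a sub-millisecond update rate
    disturbance = random.gauss(0.0, 0.05)  # stands in for chaotic aerodynamics
    state += disturbance
    error = state - target
    state -= gain * error  # purely reactive counteraction, no prediction

print(f"residual deviation under reactive control: {abs(state):.4f}")
```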


The fact that they were able to counteract the instability tells us that 
there was a lot about the plane's dynamics that was extremely 
predictable (or else no rational compensation software would have been 
possible).


And once the system has been built with [complex-behaving plane] PLUS 
[complexity-cancelling software], the result is an overall system that 
is not complex.


Is the math underlying the F-14 untouchable?  No:  there is enough 
regular math to enable the engineers to write that flight control 
software.  Looking at the math AT THAT LEVEL OF DESCRIPTION, we would 
never have predicted that this system was complex:  we would have 
predicted some instability caused by a complex component, but the rest 
of the math would have caused us to predict that the system would not be 
complex as a whole.  So, this system is consistent with my observation 
that untouchable math begets complexity, and that touchable math is 
consistent with non-complexity.


One last note:  remember that we have to look at the system as a whole. 
 We can always dip down into a system and find some complexity, but 
that would be to change the terms of reference.





Richard Loosemore



Stepping back to the intelligent systems context:  you cannot pull this 
trick of compensating for the complexity in an AGI.  There is simply no 
analogy between these two systems.  Build an intelligent system in which 
something cancels out all the annoying influence of the symbols, with 
their complex interactions, so that all of that symbol-stuff can be 
treated as noise and the system as a whole becomes non-complex?  Makes 
no sense.  The symbols and their interactions are the very core of the 
system's intelligence.  You cannot factor them out.














