Re: [agi] Complexity is in the system, not the rules themselves

2008-04-30 Thread Richard Loosemore

Vladimir Nesov wrote:

Richard,

These last two messages replying to Mark's questions make your
position clearer than much of your prior writing did (although I
didn't keep track of later discussions too closely). I think it's
important to show in the same example all the controversial aspects:
relatively simple rules, use cases where an aspect of global behavior
can be modeled by a simple theory (two-body problem, F-14, most of the
planets in short term, gliders in GoL), and use cases for the same
global system where there is no simple model (n-body problem, Pluto,
more general initial state in GoL).


Yes, I am coming to the view that this stuff needs to be explained with 
many examples, if the message has any hope of getting across.


More generally, I find it incredibly strange that these complex system 
ideas cause *so* much consternation.  Back in the early 90s I read all 
about the early history of complex systems research, and it was 
noticeable that these ideas provoked some extremely strong reactions. 
People didn't just disagree with the ideas; they were beside themselves 
with fury.  (I am not saying that Mark is doing that, btw, I'm talking 
about the broader reaction).


The funny thing is that I thought all of that was over, and that people 
now understood what the deal was with complex systems, but what I am 
finding is that I am fighting exactly the same battles as the earlier 
folks did, back in the 80s and 90s.




But all the same, the problems that you describe as complex are just
numerical calculation problems. In the case of symbol interaction, the
initial conditions (rules) are unknown and the results are discontinuous,
which requires a great deal of methodical enumeration to find the rules
that give the required global behavior; no clever tricks work.


I am not quite sure what you mean by this, but my general answer is that 
it really is not a matter of numerical calculation problems.


I think the basic idea that nobody gets (because everyone just dances 
around the issue) is that if you had a God's-eye perspective, you would 
be able to plot a distribution graph showing the amount of difficulty 
that humans have in understanding various kinds of systems (natural and 
artificial).


Looking at that distribution, you would see that most of nature's 
systems just happen to be clustered in a hump quite close to the origin 
(i.e. they are low-difficulty), whereas most of the artificial systems 
in the universe are way, way off up at the high end of the scale, in a 
second 'hump'.  What this means is that there are two qualitatively 
different types of systems in the universe:  low-difficulty ones, and a 
second group of extremely high-difficulty ones.


But the problem is that people assume that this graph does not have two 
humps but is instead a single smooth continuum, and that as time goes on 
our ability to understand systems further up the graph becomes greater.  According 
to that idea science is a relentless march into higher and higher 
regions of this difficulty-space, so if you came back in a hundred 
years' time you would find people routinely deciphering systems that 
today require superhuman intellect, and in a thousand years' time our 
elementary school kids would be learning String Theory (if it survives 
that long!), and so on.


This view of the relentless march of the human intellect is so strong 
that I think it comes as a shock to people to be told that things might 
be different, and that it might be trivially easy to create systems of a 
certain sort which have a difficulty-level that is so far off the scale 
that we do not know where to start analysing them, and we may *never* 
know how to analyze them.


But this is exactly what the complex systems idea is about.  It really 
is almost trivial to build an artificial system in which the overall, 
global behavior of the system is interesting and regular and lawful, 
but where we have no idea how to prove that this behavior should emerge 
from the local rules.
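To make that concrete with a sketch of my own (Python is my choice of illustration here, not anything from the thread): Conway's Game of Life is the canonical case. The complete local rules fit in a few lines, and a glider is a regular, lawful global structure that emerges from them, yet nobody reads the glider off from the rules by inspection.

```python
from collections import Counter

# Conway's Game of Life: the local rules in full.  A live cell survives
# with 2 or 3 live neighbours; a dead cell becomes live with exactly 3.
def step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: a coherent global object.  After 4 steps it reappears
# intact, shifted diagonally by (1, 1) -- behavior that nobody derives
# from the three-line rule above by mere inspection.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

Start instead from an arbitrary soup of live cells, and no comparably compact theory of the global behavior is known.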


In that context, it would be a complete misunderstanding to say that the 
problems that you describe as complex are just numerical calculation 
problems.  If all you mean is that we can simulate them if we want to 
understand them (the way we simulate the weather in order to predict 
it), then this is true, but in the context of the problem we have - the 
problem of building intelligent systems - this fact is of no practical use.






Richard Loosemore

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Complexity is in the system, not the rules themselves

2008-04-30 Thread Jim Bromer
- Original Message 
Vladimir Nesov [EMAIL PROTECTED] said:

In the case of symbol interaction, the
initial conditions (rules) are unknown and the results are discontinuous,
which requires a great deal of methodical enumeration to find the rules
that give the required global behavior; no clever tricks work.

Vladimir Nesov

This is interesting although I had to interpret your comments a little bit. 
Most symbolic interactions are not numerically commensurate or 'miscible' (so 
to speak) so they can require a great many methodical operations in order to 
understand their 'behaviors' at a relatively more global level.  However, I 
think this is a problem that can be dealt with.  For one thing, many problems 
can be generalized through various associative methods and that is a trick that 
does work up to a point.  I suspect that we will eventually discover more 
sophisticated ways to mix a variety of methods of generalization so that even 
when the combination of data references, reasons, correlations and knowledge of 
other relations does not produce an easily understandable object of reference, 
simplifications of the object can be formed using approximations such as 
approximate correlations.  But I think your insight is important: since 
interactive symbolic references are not necessarily 'continuous' in some way, 
they may require more elaborate methodologies to understand.
Jim Bromer


  




Re: [agi] Complexity is in the system, not the rules themselves

2008-04-30 Thread Vladimir Nesov
On Thu, May 1, 2008 at 12:04 AM, Jim Bromer [EMAIL PROTECTED] wrote:

 This is interesting although I had to interpret your comments a little bit.
 Most symbolic interactions are not numerically commensurate or 'miscible'
 (so to speak) so they can require a great many methodical operations in
 order to understand their 'behaviors' at a relatively more global level.
 However, I think this is a problem that can be dealt with.  For one thing,
 many problems can be generalized through various associative methods and
 that is a trick that does work up to a point.  I suspect that we will
 eventually discover more sophisticated ways to mix a variety of methods of
 generalization so that even when the combination of data references,
 reasons, correlations and knowledge of other relations does not produce an
 easily understandable object of reference, simplifications of the object can
 be formed using approximations such as approximate correlations.  But I
 think your insight is important: since interactive symbolic references are
 not necessarily 'continuous' in some way, they may require more elaborate
 methodologies to understand.  Jim Bromer


I should add a disclaimer that my comment was based on assumptions
that I personally don't agree with, but which I see as underlying
Richard's position, which he in turn doesn't really concede...

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Complexity is in the system, not the rules themselves

2008-04-30 Thread Richard Loosemore

Vladimir Nesov wrote:

On Thu, May 1, 2008 at 12:04 AM, Jim Bromer [EMAIL PROTECTED] wrote:

This is interesting although I had to interpret your comments a little bit.
Most symbolic interactions are not numerically commensurate or 'miscible'
(so to speak) so they can require a great many methodical operations in
order to understand their 'behaviors' at a relatively more global level.
However, I think this is a problem that can be dealt with.  For one thing,
many problems can be generalized through various associative methods and
that is a trick that does work up to a point.  I suspect that we will
eventually discover more sophisticated ways to mix a variety of methods of
generalization so that even when the combination of data references,
reasons, correlations and knowledge of other relations does not produce an
easily understandable object of reference, simplifications of the object can
be formed using approximations such as approximate correlations.  But I
think your insight is important: since interactive symbolic references are 
not necessarily 'continuous' in some way, they may require more elaborate 
methodologies to understand.  Jim Bromer



I should add a disclaimer that my comment was based on assumptions
that I personally don't agree with, but which I see as underlying
Richard's position, which he in turn doesn't really concede...



And, sadly, none of this really addresses anything I said ;-).

Oh well.




Richard Loosemore



[agi] Complexity is in the system, not the rules themselves

2008-04-29 Thread Richard Loosemore

Mark Waser wrote:

If I understand Richard correctly, he is assuming that it is
necessary to make symbols themselves complex and that each symbol
needs his four forces of doom: Memory, Development, Identity, and
Non-Linearity.

I have no problem with the first three but am not so sure that I
agree with the non-linearity.  Certainly, the interactions between
symbols are non-linear but I believe that they are reasonably bounded
-- particularly if you use some intelligent design principles (pun
intended).  For example, nature re-uses virtually everything -- I
have to believe that this applies to cognition as well.  Similarly,
look at software design patterns (as per Gamma, et al.).  I don't
believe at all that rules governing the behavior of inter-symbol
interactions are necessarily complex.  I believe that inter-symbol
interaction will eventually be soluble with a reasonable number of
rules (and rules generated from those rules).  Just like gravity, the
behavior generated by the rules WILL be complex but the rules will
not.  And just like gravity, there will be more than enough
regularity that we will be able to predict and control the stability
of inter-symbol interaction *as long as* we understand the rules well
enough.



More than once in your recent posts, you have said one particular thing 
that does not make any sense to me, so I need to focus on it.


What you said in the above case was: "I don't believe ... that rules 
governing the behavior of inter-symbol interactions are necessarily 
complex."


The problem with this statement is that strictly speaking one can never 
say that the RULES governing a system are complex.


Now, before you jump on me (because I have probably made the same 
mistake), I should say that we sometimes talk that way as a kind of 
shorthand, but right now we must tread very carefully, so I am going to 
be very precise:


The rules that govern a system are just rules - they are not, by 
themselves, complex.  The SYSTEM can be complex (meaning:  you cannot 
understand global behavior from local rules), but the rules themselves 
are not complex.


But then what can you say about the rules?  What you can say about them 
is whether or not they seem likely to generate complexity.  Certain 
kinds of simple, linear, elegant and separable rules tend not to 
generate complexity, but other kinds of ugly, tangled rules do tend to 
generate complexity in the system as a whole.


What do I mean by ugly, tangled rules?  Well, that was the whole point 
of me listing the so-called four forces of doom.  That list of rule 
characteristics:


  - Memory
  - Development
  - Identity
  - Nonlinearity

... is just the sort that tends to make the system as a whole complex. 
These rules are not complex by themselves; it is just that in our 
empirical studies of large numbers of experimental systems, putting 
THOSE kinds of rules in tends to make the system as a whole behave in a 
complex way.  Most often it makes the system just random, of course! 
But if complexity is going to happen, it is usually because the rules 
have one or more of those features.


So, to illustrate why this is a big deal, look at the quote above:  you 
say that



I have no problem with the first three but am not so sure that I
agree with the non-linearity.  Certainly, the interactions between
symbols are non-linear but I believe that they are reasonably bounded...


This is not something you can defend:  if you think that the rules that 
govern the behavior of symbols do tend to have three of the four 
characteristics, then you must expect that the system as a whole will be 
complex, because this is just an empirical fact.


In particular, you cannot say "... the interactions between symbols are 
non-linear but I believe that they are reasonably bounded". 
Reasonably bounded?  That does not buy you anything at all:  we can put 
the tiniest amount of nonlinearity into a system and leave out all the 
other characteristics, and the system still can be complex!
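A minimal illustration of my own (not from the thread): the logistic map has a single quadratic nonlinearity and none of the other three characteristics, yet at r = 4 it is chaotic, and a perturbation in the tenth decimal place destroys predictability within a few dozen steps.

```python
# Logistic map x -> r*x*(1-x): one small nonlinearity, nothing else.
def trajectory(x, r=4.0, steps=60):
    out = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)   # perturb the 10th decimal place
# The two runs decorrelate well before step 60; compare the tails.
gap = max(abs(u - v) for u, v in zip(a[40:], b[40:]))
```

Both trajectories stay bounded in [0, 1], so the divergence is not a numerical blow-up; it is the dynamics itself.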


Now, it is certainly true that we sometimes utter phrases like "the 
rules governing the system are complex", but that is sloppy, because 
what we mean is that the rules have enough of these characteristics that 
the system is complex.  I sometimes do this myself, even though I 
shouldn't, but it is generally harmless.


So when you say:

 Just like gravity, the
 behavior generated by the rules WILL be complex but the rules will
 not.

... I have to say that this is a meaningless statement on two counts. 
First of all, if the rules have some of those four complexity-generating 
characteristics, the system as a whole will almost always be either 
complex or random-and-boring.  We just do not know of any (many?) 
examples of a system that has those four in the elements, but where the 
system as a whole is easily predictable or analysable from its element 
rules!  For anyone to say that they believe that intelligent systems 
will be the exception is to fly in the face of all empirical evidence.

Re: [agi] Complexity is in the system, not the rules themselves

2008-04-29 Thread Mark Waser

I'm afraid that I'm losing track of your major point but . . . .

First off, you are violating your own definition of complexity . . . .

You said -- "A system is deemed complex if the smallest size of a theory 
that will explain that system is so large that, for today's human minds, the 
discovery of that theory is simply not practical." Notice that this 
definition does not imply that there are any such systems in the real world; it 
just says that *if* the theory size were ever to go off the scale *then* the 
system would (by definition) be complex.


By this definition, gravity is not complex.  Yet, below you are arguing that 
it is, at least, a little bit complex (which seems to be getting more and 
more analogous to a little bit pregnant :-).


Second, you keep whip-sawing between dismissing obviously complex systems 
like the adaptive aerodynamics of an F-14 as not complex (because whatever 
that complexity was, it was simple and predictable enough that the control 
software could actually be written and the complexity could be cancelled 
out.) and then saying that the least little bit of complexity will make an 
AI virtually impossible to design.


You can't have it both ways.  WHY is it that engineers can manage the 
complexity of high-speed adaptive aerodynamics yet you are absolutely 
positive that they can't do the same thing for intelligence?


I think that the shoe is really on the other foot . . . . what problems 
*haven't* been eventually solved once we learn enough?  True -- intelligence 
is the mother of all problems, but that doesn't mean that it's too difficult 
to engineer (like virtually anything else that humankind has put its mind 
to).




- Original Message -
From: Richard Loosemore [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, April 29, 2008 7:52 PM
Subject: [agi] Complexity is in the system, not the rules themselves



Mark Waser wrote:

If I understand Richard correctly, he is assuming that it is
necessary to make symbols themselves complex and that each symbol
needs his four forces of doom: Memory, Development, Identity, and
Non-Linearity.

I have no problem with the first three but am not so sure that I
agree with the non-linearity.  Certainly, the interactions between
symbols are non-linear but I believe that they are reasonably bounded
-- particularly if you use some intelligent design principles (pun
intended).  For example, nature re-uses virtually everything -- I
have to believe that this applies to cognition as well.  Similarly,
look at software design patterns (as per Gamma, et al.).  I don't
believe at all that rules governing the behavior of inter-symbol
interactions are necessarily complex.  I believe that inter-symbol
interaction will eventually be soluble with a reasonable number of
rules (and rules generated from those rules).  Just like gravity, the
behavior generated by the rules WILL be complex but the rules will
not.  And just like gravity, there will be more than enough
regularity that we will be able to predict and control the stability
of inter-symbol interaction *as long as* we understand the rules well
enough.



More than once in your recent posts, you have said one particular thing 
that does not make any sense to me, so I need to focus on it.


What you said in the above case was: "I don't believe ... that rules 
governing the behavior of inter-symbol interactions are necessarily 
complex."


The problem with this statement is that strictly speaking one can never 
say that the RULES governing a system are complex.


Now, before you jump on me (because I have probably made the same 
mistake), I should say that we sometimes talk that way as a kind of 
shorthand, but right now we must tread very carefully, so I am going to be 
very precise:


The rules that govern a system are just rules - they are not, by 
themselves, complex.  The SYSTEM can be complex (meaning:  you cannot 
understand global behavior from local rules), but the rules themselves are 
not complex.


But then what can you say about the rules?  What you can say about them is 
whether or not they seem likely to generate complexity.  Certain kinds of 
simple, linear, elegant and separable rules tend not to generate 
complexity, but other kinds of ugly, tangled rules do tend to generate 
complexity in the system as a whole.


What do I mean by ugly, tangled rules?  Well, that was the whole point 
of me listing the so-called four forces of doom.  That list of rule 
characteristics:


  - Memory
  - Development
  - Identity
  - Nonlinearity

... is just the sort that tends to make the system as a whole complex. 
These rules are not complex by themselves; it is just that in our 
empirical studies of large numbers of experimental systems, putting THOSE 
kinds of rules in tends to make the system as a whole behave in a complex 
way.  Most often it makes the system just random, of course! But if 
complexity is going to happen, it is usually because the rules have one or 
more of those features.

Re: [agi] Complexity is in the system, not the rules themselves

2008-04-29 Thread Richard Loosemore

Mark Waser wrote:

I'm afraid that I'm losing track of your major point but . . . .

First off, you are violating your own definition of complexity . . . .

You said -- "A system is deemed complex if the smallest size of a 
theory that will explain that system is so large that, for today's human 
minds, the discovery of that theory is simply not practical." Notice that 
this definition does not imply that there are any such systems in the real 
world; it just says that *if* the theory size were ever to go off the 
scale *then* the system would (by definition) be complex.


By this definition, gravity is not complex.  Yet, below you are arguing 
that it is, at least, a little bit complex (which seems to be getting 
more and more analogous to a little bit pregnant :-).


No, wait, this is not right.

When we talk of 'gravity' we often mean solar-system dynamics (remember, 
the SYSTEM is what we need to talk about, not the RULES  gravity 
itself is just the low level mechanism, i.e. the rules).  But solar 
systems are a special case of a gravitational system in which most of 
the behavior is analyzable (thanks to Newton).  As I said, if you 
consider the general case of an n-body system then it is fully and 
completely complex.


But when I say that gravity (by which I mean the solar system) is 
*partially* complex, I mean that when the orbits are as badly behaved as 
Pluto's is, the system is unstable.  In the specific case of our solar 
system the presence of Pluto means that the dynamics become grossly 
unpredictable once in a while.  That is the 'little bit of complexity'.
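To illustrate the contrast with a toy of my own (not anything from the thread): the very same gravity rule that is boringly predictable for a two-body orbit shows sensitive dependence the moment a third body is added. The units, masses, time step, and initial conditions below are arbitrary illustrative choices.

```python
# Toy planar gravitational integrator (G = 1), semi-implicit Euler.
def accelerations(pos, mass):
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i != j:
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy) ** 1.5
                acc[i][0] += mass[j] * dx / r3
                acc[i][1] += mass[j] * dy / r3
    return acc

def run(pos, vel, mass, dt=0.01, steps=2000):
    pos = [p[:] for p in pos]
    vel = [v[:] for v in vel]
    for _ in range(steps):
        acc = accelerations(pos, mass)
        for i in range(len(pos)):
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

# Three equal masses in a bound configuration: rerun with one coordinate
# nudged in the 9th decimal place and watch the endpoints separate.
mass = [1.0, 1.0, 1.0]
p_a = run([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]],
          [[0.0, 0.0], [0.0, 0.5], [-0.5, 0.0]], mass)
p_b = run([[0.0, 0.0], [1.0 + 1e-9, 0.0], [0.0, 1.0]],
          [[0.0, 0.0], [0.0, 0.5], [-0.5, 0.0]], mass)
gap = max(abs(p_a[i][k] - p_b[i][k]) for i in range(3) for k in range(2))
```

The rule (inverse-square attraction) never changes; only the number of interacting elements does, and with it the predictability of the system.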


That idea of partial complexity is not to be sniffed at.  This is not 
like partial pregnancy.  It just means that we can explain some fraction 
of the system's behavior, but not all of it.  Or, that we can explain it 
most of the time completely, but some of the time the explanation breaks 
down.


Hope that clears it up:  I think I have stayed consistent with the 
original definition I laid out.



Second, you keep whip-sawing between dismissing obviously complex 
systems like the adaptive aerodynamics of an F-14 as not complex 
(because whatever that complexity was, it was simple and predictable 
enough that the control software could actually be written and the 
complexity could be cancelled out.) and then saying that the least 
little bit of complexity will make an AI virtually impossible to design.


You can't have it both ways.  WHY is it that engineers can manage the 
complexity of high-speed adaptive aerodynamics yet you are absolutely 
positive that they can't do the same thing for intelligence?


The problem is that you cannot treat all complex systems as if 
they are all complex in the same way.  Each one must be examined for its 
own particular characteristics.


In the case of the F-14, it is not the case that there are large numbers 
of elements that each interact with the others in ways that give rise to 
the worst kinds of complexity.  The plane's designers treat the system 
as having only TWO components:  the plane's body and the environment, 
with the environment having an unpredictable effect on the plane (it is 
a noisy signal).  They simply [sic!] build a reactive system into the 
plane so that the plane is measuring the behavior of the environment and 
cancelling it out all the time.  These two system components do not 
interact with one another in a way that includes any of the elements 
that give rise to complexity:  the plane's control system just does one 
simple function, and that is to cancel out all fluctuations to make the 
plane fly straight.


In this case there is a clear situation in which the complexity and 
randomness is ignored  ... the *content* of that randomness and 
complexity is of no significance whatsoever, because the control system 
is designed to do only one thing, and that is to cancel it out.  At the 
level of the control system, the F-14 is diabolically simple; it is not 
complex.


The engineers do *not* manage the complexity; they ignore it 
completely, and pretend that it is just a random signal (in fact, it may 
be just a random signal with no structure, for all I know:  I have not 
investigated in detail).
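A sketch of my own of the kind of loop being described (not the actual F-14 software): the controller never models the disturbance; it only measures the deviation and cancels a fixed fraction of it, and the deviation stays bounded no matter what structure the disturbance has. The gain value is an arbitrary illustrative choice.

```python
import random

random.seed(0)

# Disturbance-rejection loop: the 'environment' pushes the state around,
# and the controller cancels a fixed fraction of whatever deviation it
# measures.  It never inspects or models the disturbance itself.
setpoint = 0.0
state = 0.0
gain = 0.8                                    # proportional gain (illustrative)
errors = []
for t in range(1000):
    state += random.uniform(-1.0, 1.0)        # unpredictable gust
    state += gain * (setpoint - state)        # control surface correction
    errors.append(abs(state - setpoint))

# The residual deviation stays small and bounded (here below 0.5) even
# though the disturbance never stops and is never 'understood'.
```

An AGI's symbol dynamics, by contrast, are the signal itself: cancel them as noise and nothing intelligent is left.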


By contrast (and as I did say before), in the case of intelligence the 
complexity is happening in and among the very things that cannot be 
cancelled out.  It is impossible to build an AGI by putting into one box 
all of the symbols and symbol-mechanisms that might possibly cause any 
complexity, then have an outside system treat that entire box as if it 
were just a noise source!  That outside system would just do its best to 
pretend that all the stuff going on with the symbols was meaningless 
noise, cancel it out, and deliver a final output from the system that 
was ... well, what?  Nothing.  The AGI would do nothing intelligent. 
All symbol activity would have to be cancelled by the compensating 
mechanism.


And within those symbols, there would be huge 

Re: [agi] Complexity is in the system, not the rules themselves

2008-04-29 Thread Vladimir Nesov
Richard,

These last two messages replying to Mark's questions make your
position clearer than much of your prior writing did (although I
didn't keep track of later discussions too closely). I think it's
important to show in the same example all the controversial aspects:
relatively simple rules, use cases where an aspect of global behavior
can be modeled by a simple theory (two-body problem, F-14, most of the
planets in short term, gliders in GoL), and use cases for the same
global system where there is no simple model (n-body problem, Pluto,
more general initial state in GoL).

But all the same, the problems that you describe as complex are just
numerical calculation problems. In the case of symbol interaction, the
initial conditions (rules) are unknown and the results are discontinuous,
which requires a great deal of methodical enumeration to find the rules
that give the required global behavior; no clever tricks work.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
