Re: [agi] Approximations of Knowledge

2008-07-03 Thread Russell Wallace
On Wed, Jul 2, 2008 at 5:31 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 Nevertheless, generalities among different instances of complex systems have 
 been identified, see for instance:

 http://en.wikipedia.org/wiki/Feigenbaum_constants

To be sure, but there are also plenty of complex systems where
Feigenbaum's constants don't arise. I'm not saying there aren't
theories that say things about more than one complex system - clearly
there are - only that there aren't any that say nontrivial things
about complex systems in general.




Re: [agi] Approximations of Knowledge

2008-07-03 Thread Terren Suydam

That may be true, but it misses the point I was making, which was a response to 
Richard's lament about the seeming lack of any generality from one complex 
system to the next. The fact that Feigenbaum's constants describe complex 
systems of different kinds is remarkable because it suggests an underlying 
order among systems that are described by different equations. It is not 
unreasonable to imagine that in the future we will develop a much more robust 
mathematics of complex systems.
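
To make that concrete, here is a small, self-contained sketch (plain Python, standard library only; the numerical details are just one convenient way to do it) that estimates Feigenbaum's delta straight from the logistic map x -> r*x*(1-x), by locating the "superstable" parameter values R_n at which x = 0.5 sits on a 2^n-cycle. The ratios of successive gaps between the R_n approach 4.6692..., and the same constant shows up for a whole family of otherwise unrelated-looking maps.

def superstable_r(n, r_guess):
    """Newton's method on g(r) = f_r^(2^n)(0.5) - 0.5 for the logistic map."""
    r = r_guess
    for _ in range(60):
        x, dxdr = 0.5, 0.0
        for _ in range(2 ** n):
            # iterate the map and its derivative with respect to r together
            dxdr = x * (1.0 - x) + r * (1.0 - 2.0 * x) * dxdr
            x = r * x * (1.0 - x)
        step = (x - 0.5) / dxdr
        r -= step
        if abs(step) < 1e-13:
            break
    return r

R = [superstable_r(0, 2.0), superstable_r(1, 3.23)]
for n in range(2, 11):
    guess = R[-1] + (R[-1] - R[-2]) / 4.7      # extrapolate the next bifurcation
    R.append(superstable_r(n, guess))
    delta = (R[-2] - R[-3]) / (R[-1] - R[-2])
    print("n =", n, " delta estimate =", round(delta, 6))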

--- On Thu, 7/3/08, Russell Wallace [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:

  Nevertheless, generalities among different instances of complex systems
  have been identified, see for instance:

  http://en.wikipedia.org/wiki/Feigenbaum_constants

 To be sure, but there are also plenty of complex systems where
 Feigenbaum's constants don't arise. I'm not saying there aren't
 theories that say things about more than one complex system - clearly
 there are - only that there aren't any that say nontrivial things
 about complex systems in general.
 
 


  




Re: [agi] Approximations of Knowledge

2008-07-01 Thread Russell Wallace
On Mon, Jun 30, 2008 at 8:10 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 My scepticism comes mostly from my personal observation that each complex
 systems scientist I come across tends to know about one breed of complex
 system, and have a great deal to say about that breed, but when I come to
 think about my preferred breed (AGI, cognitive systems) I cannot seem to
 relate their generalizations to my case.

That's not very surprising if you think about it. Suppose we postulate
the existence of a grand theory of complexity. That's a theory of
everything that is not simple (in the sense being discussed here) -
but a theory that says something about _every nontrivial thing in the
entire Tegmark multiverse_ is rather obviously not going to say very
much about any particular thing.




Re: [agi] Approximations of Knowledge

2008-07-01 Thread Terren Suydam

Nevertheless, generalities among different instances of complex systems have 
been identified, see for instance:

http://en.wikipedia.org/wiki/Feigenbaum_constants

Terren

--- On Tue, 7/1/08, Russell Wallace [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:
  My scepticism comes mostly from my personal observation that each complex
  systems scientist I come across tends to know about one breed of complex
  system, and have a great deal to say about that breed, but when I come to
  think about my preferred breed (AGI, cognitive systems) I cannot seem to
  relate their generalizations to my case.

 That's not very surprising if you think about it. Suppose we postulate
 the existence of a grand theory of complexity. That's a theory of
 everything that is not simple (in the sense being discussed here) -
 but a theory that says something about _every nontrivial thing in the
 entire Tegmark multiverse_ is rather obviously not going to say very
 much about any particular thing.
 
 


  




Re: [agi] Approximations of Knowledge

2008-06-30 Thread Brad Paulsen

Richard,

Thanks for your comments.  Very interesting.  I'm looking forward to reading the 
introductory book by Waldrop.  Thanks again!


Cheers,

Brad


Richard Loosemore wrote:

Brad Paulsen wrote:

Richard,

I think I'll get the older Waldrop book now because I want to learn 
more about the ideas surrounding complexity (and, in particular, its 
association with, and differentiation from, chaos theory) as soon as 
possible.  But, I will definitely put an entry in my Google calendar 
to keep a lookout for the new book in 2009.


Thanks very much for the information!

Cheers,

Brad


You're welcome.  I hope it is not a disappointment:  the subject is a 
peculiar one, so I believe that it is better to start off with the kind 
of journalistic overview that Waldrop gives.  Let me know what your 
reaction is.


Here is the bottom line.  At the core of the complex systems idea there 
is something very significant and very powerful, but a lot of people 
have wanted it to lead to a new science just like some of the old 
science.  In other words, they have wanted there to be a new, fabulously 
powerful 'general theory of complexity' coming down the road.


However, no such theory is in sight, and there is one view of complexity 
(mine, for example) that says that there will probably never be such a 
theory.  If this were one of the traditional sciences, the absence of 
that kind of progress toward unification would be a sign of trouble - a 
sign that this was not really a new science after all.  Or, even worse, 
a sign that the original idea was bogus.  But I believe that is the 
wrong interpretation to put on it.  The complexity idea is very 
significant, but it is not a science by itself.


Having said all of that, there are many people who so much want there to 
be a science of complexity (enough of a science that there could be an 
institute dedicated to it, where people have real jobs working on 
'complex systems'), that they are prepared to do a lot of work that 
makes it look like something is happening.  So, you can find many 
abstract papers about complex dynamical systems, with plenty of 
mathematics in them.  But as far as I can see, most of that stuff is 
kind of peripheral ... it is something to do to justify a research program.


At the end of the day, I think that the *core* complex systems idea will 
outlast all this other stuff, but it will become famous for its impact 
on other sciences, rather than for the specific theories of 'complexity' 
that it generates.



We will see.



Richard Loosemore







Richard Loosemore wrote:

Brad Paulsen wrote:

Or, maybe...

Complexity: Life at the Edge of Chaos
Roger Lewin, 2000 $10.88 (new, paperback) from Amazon (no used copies)
Complexity: Life at the Edge of Chaos by Roger Lewin (Paperback - 
Feb 15, 2000)


Nope, not that one either!

Darn.

I think it may have been Simplexity (Kluger), but I am not sure.

Interestingly enough, Melanie Mitchell has a book due out in 2009 
called The Core Ideas of the Sciences of Complexity.  Interesting 
title, given my thoughts in the last post.




Richard Loosemore


















Re: [agi] Approximations of Knowledge

2008-06-30 Thread Richard Loosemore

Terren Suydam wrote:

Hi Richard,

I'll de-lurk here to say that I find this email to be utterly reasonable, and 
that's with my crackpot detectors going off a lot lately, no offense to you of 
course.

I do disagree that complexity is not its own science. I'm not wedded to the idea, like the folks you profile in your email, but I think its contribution has been small because it's in its infancy. We've been developing reductionist tools for hundreds of years now. I think we're in the equivalent of the pre-calculus days when it comes to complexity science. 

And we haven't made much progress because the traditional scientific method depends on direct causal linkages. On the contrary, complex systems exhibit behavior at a global level that is not predictable from the local level... so there's a causal relationship only in the weakest sense. It's much more straightforward, I think, to say that the two levels, the global and the local, are causally orthogonal to one another. Both levels can be described by completely independent causal dynamics. 

It's a new science because it's a new method. Isolating variables to determine relationships doesn't lend itself well to massively parallel networks that are just lousy with feedback, because it's impossible to hold the other values still, and worse, the behavior is sensitive to experimental noise. You could write a book on the difference between the traditional scientific method and the methods for studying complexity. I'm sure it's been done, actually. 


The study of complexity will eventually fulfill its potential as a new science, 
because if we are ever to understand the brain and the mind and model them with 
any real precision, it will be due to complexity science *as much as* 
traditional reductionist science. We need the benefit of both to gain real 
understanding where traditional science has failed.

Our human minds are simply too limited to grasp the enormity of the scale of 
complexity within a single cell, much less a collection of a few trillion of 
them, also arranged in an unfathomably complex arrangement.

The idea that complexity science will *not* figure prominently into the study 
of the body, the brain, and the mind, is an absurd proposition to me. We will 
be going in the right direction when more and more of us are simulating 
something without any clue what the result will be.

That's all for now... thanks for your post Richard.



Thanks Terren

Yes, in my more optimistic moments I believe that a full science of 
complexity will come about.  It may redefine the meaning of 'science' 
though.


My scepticism comes mostly from my personal observation that each 
complex systems scientist I come across tends to know about one breed of 
complex system, and have a great deal to say about that breed, but when 
I come to think about my preferred breed (AGI, cognitive systems) I 
cannot seem to relate their generalizations to my case.


That is not to say that things will not converge, though.  I should be 
careful not to prejudge something so young.




Richard Loosemore





--- On Sun, 6/29/08, Richard Loosemore [EMAIL PROTECTED] wrote:


From: Richard Loosemore [EMAIL PROTECTED]
Subject: Re: [agi] Approximations of Knowledge
To: agi@v2.listbox.com
Date: Sunday, June 29, 2008, 9:23 PM
Brad Paulsen wrote:

 Richard,

 I think I'll get the older Waldrop book now because I want to learn more
 about the ideas surrounding complexity (and, in particular, its
 association with, and differentiation from, chaos theory) as soon as
 possible.  But, I will definitely put an entry in my Google calendar to
 keep a lookout for the new book in 2009.

 Thanks very much for the information!

 Cheers,

 Brad

You're welcome.  I hope it is not a disappointment:  the subject is a
peculiar one, so I believe that it is better to start off with the kind
of journalistic overview that Waldrop gives.  Let me know what your
reaction is.

Here is the bottom line.  At the core of the complex systems idea there
is something very significant and very powerful, but a lot of people
have wanted it to lead to a new science just like some of the old
science.  In other words, they have wanted there to be a new, fabulously
powerful 'general theory of complexity' coming down the road.

However, no such theory is in sight, and there is one view of complexity
(mine, for example) that says that there will probably never be such a
theory.  If this were one of the traditional sciences, the absence of
that kind of progress toward unification would be a sign of trouble - a
sign that this was not really a new science after all.  Or, even worse,
a sign that the original idea was bogus.  But I believe that is the
wrong interpretation to put on it.  The complexity idea is very
significant, but it is not a science by itself.

Having said all of that, there are many people who so much want there to
be a science of complexity (enough of a science that there could be an
institute

Re: [agi] Approximations of Knowledge

2008-06-29 Thread Richard Loosemore

Brad Paulsen wrote:

Richard,

I think I'll get the older Waldrop book now because I want to learn more 
about the ideas surrounding complexity (and, in particular, its 
association with, and differentiation from, chaos theory) as soon as 
possible.  But, I will definitely put an entry in my Google calendar to 
keep a lookout for the new book in 2009.


Thanks very much for the information!

Cheers,

Brad


You're welcome.  I hope it is not a disappointment:  the subject is a 
peculiar one, so I believe that it is better to start off with the kind 
of journalistic overview that Waldrop gives.  Let me know what your 
reaction is.


Here is the bottom line.  At the core of the complex systems idea there 
is something very significant and very powerful, but a lot of people 
have wanted it to lead to a new science just like some of the old 
science.  In other words, they have wanted there to be a new, fabulously 
powerful 'general theory of complexity' coming down the road.


However, no such theory is in sight, and there is one view of complexity 
(mine, for example) that says that there will probably never be such a 
theory.  If this were one of the traditional sciences, the absence of 
that kind of progress toward unification would be a sign of trouble - a 
sign that this was not really a new science after all.  Or, even worse, 
a sign that the original idea was bogus.  But I believe that is the 
wrong interpretation to put on it.  The complexity idea is very 
significant, but it is not a science by itself.


Having said all of that, there are many people who so much want there to 
be a science of complexity (enough of a science that there could be an 
institute dedicated to it, where people have real jobs working on 
'complex systems'), that they are prepared to do a lot of work that 
makes it look like something is happening.  So, you can find many 
abstract papers about complex dynamical systems, with plenty of 
mathematics in them.  But as far as I can see, most of that stuff is 
kind of peripheral ... it is something to do to justify a research program.


At the end of the day, I think that the *core* complex systems idea will 
outlast all this other stuff, but it will become famous for its impact 
on other sciences, rather than for the specific theories of 'complexity' 
that it generates.



We will see.



Richard Loosemore







Richard Loosemore wrote:

Brad Paulsen wrote:

Or, maybe...

Complexity: Life at the Edge of Chaos
Roger Lewin, 2000 $10.88 (new, paperback) from Amazon (no used copies)
Complexity: Life at the Edge of Chaos by Roger Lewin (Paperback - Feb 
15, 2000)


Nope, not that one either!

Darn.

I think it may have been Simplexity (Kluger), but I am not sure.

Interestingly enough, Melanie Mitchell has a book due out in 2009 
called The Core Ideas of the Sciences of Complexity.  Interesting 
title, given my thoughts in the last post.




Richard Loosemore














Re: [agi] Approximations of Knowledge

2008-06-29 Thread Terren Suydam

Hi Richard,

I'll de-lurk here to say that I find this email to be utterly reasonable, and 
that's with my crackpot detectors going off a lot lately, no offense to you of 
course.

I do disagree that complexity is not its own science. I'm not wedded to the 
idea, like the folks you profile in your email, but I think its contribution 
has been small because it's in its infancy. We've been developing reductionist 
tools for hundreds of years now. I think we're in the equivalent of the 
pre-calculus days when it comes to complexity science. 

And we haven't made much progress because the traditional scientific method 
depends on direct causal linkages. On the contrary, complex systems exhibit 
behavior at a global level that is not predictable from the local level... so 
there's a causal relationship only in the weakest sense. It's much more 
straightforward, I think, to say that the two levels, the global and the local, 
are causally orthogonal to one another. Both levels can be described by 
completely independent causal dynamics. 
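
To make the local/global split concrete, a toy sketch in plain Python (standard library only, purely illustrative): Conway's Game of Life. The local rule mentions nothing but a cell and its eight neighbours, yet a "glider" - a pattern that only exists at the global level of description - coherently translates across the grid.

from collections import Counter

def step(live):
    """One Life update; 'live' is a set of (x, y) cell coordinates."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = set(glider)
for _ in range(12):
    cells = step(cells)
# Every 4 steps the glider reappears intact, shifted diagonally by (1, 1);
# after 12 steps it has moved by (3, 3) - a regularity stated nowhere in the rule.
print({(x - 3, y - 3) for (x, y) in cells} == glider)   # True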

It's a new science because it's a new method. Isolating variables to determine 
relationships doesn't lend itself well to massively parallel networks that are 
just lousy with feedback, because it's impossible to hold the other values 
still, and worse, the behavior is sensitive to experimental noise. You could 
write a book on the difference between the traditional scientific method and 
the methods for studying complexity. I'm sure it's been done, actually. 

The study of complexity will eventually fulfill its potential as a new science, 
because if we are ever to understand the brain and the mind and model them with 
any real precision, it will be due to complexity science *as much as* 
traditional reductionist science. We need the benefit of both to gain real 
understanding where traditional science has failed.

Our human minds are simply too limited to grasp the enormity of the scale of 
complexity within a single cell, much less a collection of a few trillion of 
them, also arranged in an unfathomably complex arrangement.

The idea that complexity science will *not* figure prominently into the study 
of the body, the brain, and the mind, is an absurd proposition to me. We will 
be going in the right direction when more and more of us are simulating 
something without any clue what the result will be.

That's all for now... thanks for your post Richard.

Terren

--- On Sun, 6/29/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 From: Richard Loosemore [EMAIL PROTECTED]
 Subject: Re: [agi] Approximations of Knowledge
 To: agi@v2.listbox.com
 Date: Sunday, June 29, 2008, 9:23 PM
 Brad Paulsen wrote:
  Richard,
  
  I think I'll get the older Waldrop book now because I want to learn more
  about the ideas surrounding complexity (and, in particular, its
  association with, and differentiation from, chaos theory) as soon as
  possible.  But, I will definitely put an entry in my Google calendar to
  keep a lookout for the new book in 2009.
  
  Thanks very much for the information!
  
  Cheers,
  
  Brad
 
 You're welcome.  I hope it is not a disappointment:  the subject is a
 peculiar one, so I believe that it is better to start off with the kind
 of journalistic overview that Waldrop gives.  Let me know what your
 reaction is.
 
 Here is the bottom line.  At the core of the complex systems idea there
 is something very significant and very powerful, but a lot of people
 have wanted it to lead to a new science just like some of the old
 science.  In other words, they have wanted there to be a new, fabulously
 powerful 'general theory of complexity' coming down the road.
 
 However, no such theory is in sight, and there is one view of complexity
 (mine, for example) that says that there will probably never be such a
 theory.  If this were one of the traditional sciences, the absence of
 that kind of progress toward unification would be a sign of trouble - a
 sign that this was not really a new science after all.  Or, even worse,
 a sign that the original idea was bogus.  But I believe that is the
 wrong interpretation to put on it.  The complexity idea is very
 significant, but it is not a science by itself.
 
 Having said all of that, there are many people who so much want there to
 be a science of complexity (enough of a science that there could be an
 institute dedicated to it, where people have real jobs working on
 'complex systems'), that they are prepared to do a lot of work that
 makes it look like something is happening.  So, you can find many
 abstract papers about complex dynamical systems, with plenty of
 mathematics in them.  But as far as I can see, most of that stuff is
 kind of peripheral ... it is something to do to justify a research program.
 
 At the end of the day, I think that the *core* complex systems idea will
 outlast all this other stuff, but it will become famous for its impact
 on other sciences

Re: [agi] Approximations of Knowledge

2008-06-28 Thread Brad Paulsen

Richard,

I presume this is the Waldrop Complexity book to which you referred:

Complexity: The Emerging Science at the Edge of Order and Chaos
M. Mitchell Waldrop, 1992, $10.20 (new, paperback) from Amazon (used
copies also available)
http://www.amazon.com/Complexity-Emerging-Science-Order-Chaos/dp/0671872346/ref=pd_bbs_sr_1?ie=UTF8s=booksqid=1214641304sr=1-1

Is this the newer book you had in mind?

At Home in the Universe: The Search for the Laws of Self-Organization
and Complexity
Stuart Kauffman (The Santa Fe Institute), 1995, $18.95 (new, paperback) 
from Amazon (used copies

also available)
http://www.amazon.com/At-Home-Universe-Self-Organization-Complexity/dp/0195111303/ref=reg_hu-wl_mrai-recs

Cheers,

Brad

Richard Loosemore wrote:

Jim Bromer wrote:



From: Richard Loosemore

Jim,

I'm sorry:  I cannot make any sense of what you say here.

I don't think you are understanding the technicalities of the
argument I am presenting, because your very first sentence... But we
can invent a 'mathematics' or a program that can is just completely
false.  In a complex system it is not possible to use analytic
mathematics to predict the global behavior of the system given only
the rules that determine the local mechanisms.  That is the very
definition of a complex system (note:  this is a complex system in
the technical sense of that term, which does not mean a complicated
system in ordinary language).

Richard Loosemore


Well, let's forget about your theory for a second.  I think that an 
advanced AI program is going to have to be able to deal with 
complexity and that your analysis is certainly interesting and 
illuminating.


But I want to make sure that I understand what you mean here. First
 of all, your statement, it is not possible to use analytic 
mathematics to predict the global behavior of the system given only
 the rules that determine the local mechanisms. By analytic 
mathematics are you referring to numerical analysis, which the 
article in Wikipedia, 
http://en.wikipedia.org/wiki/Numerical_analysis describes as the 
study of algorithms for the problems of continuous mathematics (as
 distinguished from discrete mathematics).  Because if you are 
saying that the study of continuous mathematics -as distinguished 
from discrete mathematics- cannot be used to represent discrete 
system complexity, then that is kind of a non-starter. It's a 
cop-out by initial definition. I am primarily interested in 
discrete programming (I am, of course, also interested in 
continuous systems as well), but in this discussion I was 
expressing my interest in measures that can be taken to simplify 
computational complexity.


Again, Wikipedia gives a slightly more complex definition of 
complexity than you do.  http://en.wikipedia.org/wiki/Complexity I
 am not saying that your particular definition of complexity is 
wrong, I only want to make sure that I understand what it is that 
you are getting at.


The part of your sentence that read, ...given only the rules that
 determine the local mechanisms, sounds like it might well apply
to the kind of system that I think would be necessary for a better
AI program, but it is not necessarily true of all kinds of 
demonstrations of complexity (as I understand them).  For example,
 consider a program that demonstrates the emergence of complex 
behaviors from collections of objects that obey simple rules that 
govern their interactions.  One can use a variety of arbitrary 
settings for the initial state of the program to examine how 
different complex behaviors may emerge in different environments. 
(I am hoping to try something like this when I buy my next computer
 with a great graphics chip in it.)  This means that complexity 
does not have to be represented only in states that had been 
previously generated by the system, as can be obviously seen in the

 fact that initial states are a necessity of such systems.

I think I get what you are saying about complexity in AI and the 
problems of research into AI that could be caused if complexity is

 the reality of advanced AI programming.

But if you are throwing technical arguments at me, some of which 
are trivial from my perspective like the definition of, continuous
 mathematics (as distinguished from discrete mathematics), then 
all I can do is wonder why.


Jim,

With the greatest of respect, this is a topic that will require some
 extensive background reading on your part, because the 
misunderstandings in your above text are too deep for me to remedy in
 the scope of one or two list postings.  For example, my reference to
 analytic mathematics has nothing at all to do with the wikipedia 
entry you found, alas.  The word has many uses, and the one I am 
employing is meant to point up a distinction between classical 
mathematics that allows equations to be solved algebraically, and 
experimental mathematics that solves systems by simulation.  Analytic
 means by analysis in this context...but this is a very 

Re: [agi] Approximations of Knowledge

2008-06-28 Thread Brad Paulsen

Or, maybe...

Complexity: Life at the Edge of Chaos
Roger Lewin, 2000 $10.88 (new, paperback) from Amazon (no used copies)
Complexity: Life at the Edge of Chaos by Roger Lewin (Paperback - Feb 15, 2000)

Brad

Richard Loosemore wrote:

Jim Bromer wrote:



From: Richard Loosemore

Jim,

I'm sorry:  I cannot make any sense of what you say here.

I don't think you are understanding the technicalities of the 
argument I am presenting, because your very first sentence... But we 
can invent a 'mathematics' or a program that can is just completely 
false.  In a complex system it is not possible to use analytic 
mathematics to predict the global behavior of the system given only 
the rules that determine the local mechanisms.  That is the very 
definition of a complex system (note:  this is a complex system in 
the technical sense of that term, which does not mean a complicated 
system in ordinary language).

Richard Loosemore


Well, let's forget about your theory for a second.  I think that an 
advanced AI program is going to have to be able to deal with 
complexity and that your analysis is certainly interesting and 
illuminating.


But I want to make sure that I understand what you mean here.  First 
of all, your statement, it is not possible to use analytic 
mathematics to predict the global behavior of the system given only 
the rules that determine the local mechanisms.
By analytic mathematics are you referring to numerical analysis, which 
the article in Wikipedia, http://en.wikipedia.org/wiki/Numerical_analysis
describes as the study of algorithms for the problems of continuous 
mathematics (as distinguished from discrete mathematics).  Because if 
you are saying that the study of continuous mathematics -as 
distinguished from discrete mathematics- cannot be used to represent 
discrete system complexity, then that is kind of a non-starter. It's a 
cop-out by initial definition. I am primarily interested in discrete 
programming (I am, of course, also interested in continuous systems as 
well), but in this discussion I was expressing my interest in measures 
that can be taken to simplify computational complexity.


Again, Wikipedia gives a slightly more complex definition of 
complexity than you do.  http://en.wikipedia.org/wiki/Complexity
I am not saying that your particular definition of complexity is 
wrong, I only want to make sure that I understand what it is that you 
are getting at.


The part of your sentence that read, ...given only the rules that 
determine the local mechanisms, sounds like it might well apply to 
the kind of system that I think would be necessary for a better AI 
program, but it is not necessarily true of all kinds of demonstrations 
of complexity (as I understand them).  For example, consider a program 
that demonstrates the emergence of complex behaviors from collections 
of objects that obey simple rules that govern their interactions.  One 
can use a variety of arbitrary settings for the initial state of the 
program to examine how different complex behaviors may emerge in 
different environments.  (I am hoping to try something like this when 
I buy my next computer with a great graphics chip in it.)  This means 
that complexity does not have to be represented only in states that 
had been previously generated by the system, as can be obviously seen 
in the fact that initial states are a necessity of such systems.
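
(For concreteness, a minimal sketch of the kind of program being described - plain Python, standard library only, all parameter values arbitrary - a Vicsek-style swarm in which each agent merely aligns its heading with nearby agents plus noise.  Whether a shared direction of motion emerges depends on the initial settings and the noise level, even though nothing in the local rule mentions any global direction.)

import cmath, math, random

def run(noise, n=80, box=8.0, radius=1.0, speed=0.05, steps=300, seed=1):
    rng = random.Random(seed)
    xs = [rng.uniform(0, box) for _ in range(n)]
    ys = [rng.uniform(0, box) for _ in range(n)]
    th = [rng.uniform(-math.pi, math.pi) for _ in range(n)]
    for _ in range(steps):
        new_th = []
        for i in range(n):
            acc = 0j
            for j in range(n):
                # periodic (torus) distance to agent j
                dx = (xs[i] - xs[j] + box / 2) % box - box / 2
                dy = (ys[i] - ys[j] + box / 2) % box - box / 2
                if dx * dx + dy * dy <= radius * radius:
                    acc += cmath.exp(1j * th[j])
            # local rule: adopt the average heading of neighbours, plus noise
            new_th.append(cmath.phase(acc) + rng.uniform(-noise, noise))
        th = new_th
        for i in range(n):
            xs[i] = (xs[i] + speed * math.cos(th[i])) % box
            ys[i] = (ys[i] + speed * math.sin(th[i])) % box
    # polarization: near 1.0 if everyone ends up moving the same way, small if not
    return abs(sum(cmath.exp(1j * t) for t in th)) / n

print("low noise :", round(run(noise=0.2), 2))   # a common heading emerges
print("high noise:", round(run(noise=5.0), 2))   # global order fails to form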


I think I get what you are saying about complexity in AI and the 
problems of research into AI that could be caused if complexity is the 
reality of advanced AI programming.


But if you are throwing technical arguments at me, some of which are 
trivial from my perspective like the definition of, continuous 
mathematics (as distinguished from discrete mathematics), then all I 
can do is wonder why.


Jim,

With the greatest of respect, this is a topic that will require some 
extensive background reading on your part, because the misunderstandings 
in your above text are too deep for me to remedy in the scope of one or 
two list postings.  For example, my reference to analytic mathematics 
has nothing at all to do with the wikipedia entry you found, alas.  The 
word has many uses, and the one I am employing is meant to point up a 
distinction between classical mathematics that allows equations to be 
solved algebraically, and experimental mathematics that solves systems 
by simulation.  Analytic means by analysis in this context...but this 
is a very abstract sense of the word that I am talking about here, and 
it is very hard to convey.


This topic is all about 'complex systems' which is a technical term that 
does not mean systems that are complicated (in the everyday sense of 
'complicated').  To get up to speed on this, I recommend a popular 
science book called Complexity by Waldrop, although there was also a 
more recent book whose name I forget, which may be better.  You could 
also read Wolfram's A New Kind of Science, but that is huge and does 
not come 

Re: [agi] Approximations of Knowledge

2008-06-28 Thread Jim Bromer


Richard Loosemore said:
With the greatest of respect, this is a topic that will require some 
extensive background reading on your part, because the misunderstandings 
in your above text are too deep for me to remedy in the scope of one or 
two list postings.  For example, my reference to analytic mathematics 
has nothing at all to do with the wikipedia entry you found, alas.
---
But you did remedy the deep misunderstanding that you saw in my one 
question simply by answering it.

If you ever change your mind and decide someday in the future that you would 
like to discuss this with me please let me know.
Jim Bromer


- Original Message 
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, June 27, 2008 9:13:01 PM
Subject: Re: [agi] Approximations of Knowledge

Jim Bromer wrote:
 
 From: Richard Loosemore

 Jim,

 I'm sorry:  I cannot make any sense of what you say here.

 I don't think you are understanding the technicalities of the argument I 
 am presenting, because your very first sentence... But we can invent a 
 'mathematics' or a program that can is just completely false.  In a 
 complex system it is not possible to use analytic mathematics to 
 predict the global behavior of the system given only the rules that 
 determine the local mechanisms.  That is the very definition of a 
 complex system (note:  this is a complex system in the technical sense 
 of that term, which does not mean a complicated system in ordinary 
 language).
 Richard Loosemore
 
 Well, let's forget about your theory for a second.  I think that an advanced AI 
 program is going to have to be able to deal with complexity and that your 
 analysis is certainly interesting and illuminating.
 
 But I want to make sure that I understand what you mean here.  First of all, 
 your statement, it is not possible to use analytic mathematics to predict 
 the global behavior of the system given only the rules that determine the 
 local mechanisms.
 By analytic mathematics are you referring to numerical analysis, which the 
 article in Wikipedia, 
 http://en.wikipedia.org/wiki/Numerical_analysis
 describes as the study of algorithms for the problems of continuous 
 mathematics (as distinguished from discrete mathematics).  Because if you 
 are saying that the study of continuous mathematics -as distinguished from 
 discrete mathematics- cannot be used to represent discrete system complexity, 
 then that is kind of a non-starter. It's a cop-out by initial definition. I 
 am primarily interested in discrete programming (I am, of course, also 
 interested in continuous systems as well), but in this discussion I was 
 expressing my interest in measures that can be taken to simplify 
 computational complexity.
 
 Again, Wikipedia gives a slightly more complex definition of complexity than 
 you do.  http://en.wikipedia.org/wiki/Complexity
 I am not saying that your particular definition of complexity is wrong, I 
 only want to make sure that I understand what it is that you are getting at.
 
 The part of your sentence that read, ...given only the rules that determine 
 the local mechanisms, sounds like it might well apply to the kind of system 
 that I think would be necessary for a better AI program, but it is not 
 necessarily true of all kinds of demonstrations of complexity (as I 
 understand them).  For example, consider a program that demonstrates the 
 emergence of complex behaviors from collections of objects that obey simple 
 rules that govern their interactions.  One can use a variety of arbitrary 
 settings for the initial state of the program to examine how different 
 complex behaviors may emerge in different environments.  (I am hoping to try 
 something like this when I buy my next computer with a great graphics chip in 
 it.)  This means that complexity does not have to be represented only in 
 states that had been previously generated by the system, as can be obviously 
 seen in the fact that initial states are a necessity of such systems.
 
 I think I get what you are saying about complexity in AI and the problems of 
 research into AI that could be caused if complexity is the reality of 
 advanced AI programming.
 
 But if you are throwing technical arguments at me, some of which are trivial 
 from my perspective like the definition of, continuous mathematics (as 
 distinguished from discrete mathematics), then all I can do is wonder why.

Jim,

With the greatest of respect, this is a topic that will require some 
extensive background reading on your part, because the misunderstandings 
in your above text are too deep for me to remedy in the scope of one or 
two list postings.  For example, my reference to analytic mathematics 
has nothing at all to do with the wikipedia entry you found, alas.  The 
word has many uses, and the one I am employing is meant to point up a 
distinction between classical mathematics that allows equations

Re: [agi] Approximations of Knowledge

2008-06-28 Thread Mike Tintner


Brad:

I presume this is the Waldrop Complexity book to which you referred:

Complexity: The Emerging Science at the Edge of Order and Chaos
M. Mitchell Waldrop, 1992, $10.20 (new, paperback) from Amazon (used
copies also available)
http://www.amazon.com/Complexity-Emerging-Science-Order-Chaos/dp/0671872346/ref=pd_bbs_sr_1?ie=UTF8s=booksqid=1214641304sr=1-1

Is this the newer book you had in mind?

At Home in the Universe: The Search for the Laws of Self-Organization
and Complexity
Stuart Kauffman (The Santa Fe Institute), 1995, $18.95 (new, paperback) 
from Amazon (used copies

also available)
http://www.amazon.com/At-Home-Universe-Self-Organization-Complexity/dp/0195111303/ref=reg_hu-wl_mrai-recs



Speaking of Kauffman, here's a quote from him, illustrating the points I was 
making in the other thread, re how a totally algorithmic approach to AGI - 
including an algorithmic trial-and-error approach -  won't work  (I disagree 
with him though - the mind IS a machine, just much more sophisticated than 
our current conceptions of machines):


The second, predominant view among cognitive scientists is that 
consciousness arises when enough computational elements are networked 
together. In this view, a mind is a machine, and a complex set of buckets of 
water pouring water into one another would become conscious. I just cannot 
believe this. I cannot however disprove it, but I can offer arguments 
against it.
On this view, the mind is algorithmic. With Penrose, in The Emperor's New 
Mind, I believe that the mind is not algorithmic, although it can act 
algorithmically. If it is not algorithmic, then the mind is not a machine 
and consciousness may not arise in a classical - as opposed to possibly to a 
quantum - system. Penrose bases his argument on the claim that in seeking a 
proof a mathematician does not follow an algorithm himself. I think he is 
right, but the example is not felicitous, for the proof itself is patently 
an algorithm, and how do we know that the mathematician did not 
subconsciously follow that algorithm in finding the proof.
My arguments start from humbler conditions. Years ago my computer sat on my 
front table, plugged into a floor socket. I feared my family would bump into 
the cord and pull the computer off the table, breaking it. I now describe 
the table: 3 x 5 feet, three wooden boards on top, legs with certain 
carvings, chipped paint with the wood surface showing through with 
indefinitely many distances between points on the chipped flecks,  two 
cracks, one crack seven feet from the fireplace, eleven feet from the 
kitchen, 238,000 miles from the moon, a broken leaf on the mid board of the 
top...You get the idea that there is no finite description of the table - 
assuming for example continuous spacetime.
So I invented a solution. I jammed the cord into one of the cracks and 
pulled it tight so that my family would not be able to pull the computer off 
the table. Now it seems to me that there is no way to turn this Herculean 
mental performance into an algorithm. How would one bound the features of 
the situation finitely?  How would one even list the features of the table 
in a denumerably infinite list? One cannot.  Thus it seems to me that no 
algorithm was performed. As a broader case, we are all familiar with 
struggling to formulate a problem. Do you remotely think that your struggle 
is an effective mechanical or algorithmic procedure? I do not. I also do 
not know how to prove that a given performance is not algorithmic. What 
would count as such a proof?  So I must leave my conviction with you, 
unproven, but powerful I think. If true, then the mind is not a machine.
Stuart A. Kauffman , BEYOND REDUCTIONISM, Reinventing The Sacred, Edge, 
11.13.06, http://www.edge.org/3rd_culture/kauffman06/kauffman06_index.html








Re: [agi] Approximations of Knowledge

2008-06-28 Thread Richard Loosemore

Brad Paulsen wrote:

Richard,

I presume this is the Waldrop Complexity book to which you referred:

Complexity: The Emerging Science at the Edge of Order and Chaos
M. Mitchell Waldrop, 1992, $10.20 (new, paperback) from Amazon (used
copies also available)
http://www.amazon.com/Complexity-Emerging-Science-Order-Chaos/dp/0671872346/ref=pd_bbs_sr_1?ie=UTF8s=booksqid=1214641304sr=1-1 



Is this the newer book you had in mind?

At Home in the Universe: The Search for the Laws of Self-Organization
and Complexity
Stuart Kauffman (The Santa Fe Institute), 1995, $18.95 (new, paperback) 
from Amazon (used copies

also available)
http://www.amazon.com/At-Home-Universe-Self-Organization-Complexity/dp/0195111303/ref=reg_hu-wl_mrai-recs 


Uh, no:  Kauffman's book is also good, but that was not the one I am 
thinking of.  Trouble is, it had some title that (IIRC) did not directly 
reference the word complex, so after looking at it in the bookstore I 
forgot it.


I think one of the problems with complexity is that only a small chunk 
of it is necessary ... there is a lot of material that, to my mind, does 
not contribute much to the core idea.  And the core idea is not quite 
enough for an entire book.


But, having said that, the core idea is so subtle and so easily 
misunderstood that people trip over it without realizing its 
significance.  Hm.. maybe that means there really should be a 
book length treatment of it after all.




Richard Loosemore




Re: [agi] Approximations of Knowledge

2008-06-28 Thread Richard Loosemore

Jim Bromer wrote:


Richard Loosemore said:
With the greatest of respect, this is a topic that will require some 
extensive background reading on your part, because the misunderstandings 
in your above text are too deep for me to remedy in the scope of one or 
two list postings.  For example, my reference to analytic mathematics 
has nothing at all to do with the wikipedia entry you found, alas.

---
But you did remedy the deep misunderstanding that you saw in my one 
question simply by answering it.

If you ever change your mind and decide someday in the future that you would 
like to discuss this with me please let me know.
Jim Bromer


I am happy to discuss it at any time, but it would help if you read 
either the paper I wrote, or my blog posts on the topic, or Waldrop's book.



Richard Loosemore




Re: [agi] Approximations of Knowledge

2008-06-28 Thread Richard Loosemore

Brad Paulsen wrote:

Or, maybe...

Complexity: Life at the Edge of Chaos
Roger Lewin, 2000 $10.88 (new, paperback) from Amazon (no used copies)
Complexity: Life at the Edge of Chaos by Roger Lewin (Paperback - Feb 
15, 2000)


Nope, not that one either!

Darn.

I think it may have been Simplexity (Kluger), but I am not sure.

Interestingly enough, Melanie Mitchell has a book due out in 2009 called 
The Core Ideas of the Sciences of Complexity.  Interesting title, 
given my thoughts in the last post.




Richard Loosemore




Re: [agi] Approximations of Knowledge

2008-06-28 Thread Richard Loosemore

Mike Tintner wrote:


Brad:

I presume this is the Waldrop Complexity book to which you referred:

Complexity: The Emerging Science at the Edge of Order and Chaos
M. Mitchell Waldrop, 1992, $10.20 (new, paperback) from Amazon (used
copies also available)
http://www.amazon.com/Complexity-Emerging-Science-Order-Chaos/dp/0671872346/ref=pd_bbs_sr_1?ie=UTF8s=booksqid=1214641304sr=1-1 



Is this the newer book you had in mind?

At Home in the Universe: The Search for the Laws of Self-Organization
and Complexity
Stuart Kauffman (The Santa Fe Institute), 1995, $18.95 (new, 
paperback) from Amazon (used copies

also available)
http://www.amazon.com/At-Home-Universe-Self-Organization-Complexity/dp/0195111303/ref=reg_hu-wl_mrai-recs 





Speaking of Kauffman, here's a quote from him, illustrating the points I 
was making in the other thread, re how a totally algorithmic approach to 
AGI - including an algorithmic trial-and-error approach -  won't work  
(I disagree with him though - the mind IS a machine, just much more 
sophisticated than our current conceptions of machines):


The second, predominant view among cognitive scientists is that 
consciousness arises when enough computational elements are networked 
together. In this view, a mind is a machine, and a complex set of 
buckets of water pouring water into one another would become conscious. 
I just cannot believe this. I cannot however disprove it, but I can 
offer arguments against it.
On this view, the mind is algorithmic. With Penrose, in The Emperor's 
New Mind, I believe that the mind is not algorithmic, although it can 
act algorithmically. If it is not algorithmic, then the mind is not a 
machine and consciousness may not arise in a classical - as opposed to 
possibly to a quantum - system. Penrose bases his argument on the claim 
that in seeking a proof a mathematician does not follow an algorithm 
himself. I think he is right, but the example is not felicitous, for the 
proof itself is patently an algorithm, and how do we know that the 
mathematician did not subconsciously follow that algorithm in finding 
the proof.
My arguments start from humbler conditions. Years ago my computer sat on 
my front table, plugged into a floor socket. I feared my family would 
bump into the cord and pull the computer off the table, breaking it. I 
now describe the table: 3 x 5 feet, three wooden boards on top, legs 
with certain carvings, chipped paint with the wood surface showing 
through with indefinitely many distances between points on the chipped 
flecks,  two cracks, one crack seven feet from the fireplace, eleven 
feet from the kitchen, 238,000 miles from the moon, a broken leaf on the 
mid board of the top...You get the idea that there is no finite 
description of the table - assuming for example continuous spacetime.
So I invented a solution. I jammed the cord into one of the cracks and 
pulled it tight so that my family would not be able to pull the computer 
off the table. Now it seems to me that there is no way to turn this 
Herculean mental performance into an algorithm. How would one bound the 
features of the situation finitely?  How would one even list the 
features of the table in a denumerably infinite list? One cannot.  Thus 
it seems to me that no algorithm was performed. As a broader case, we 
are all familiar with struggling to formulate a problem. Do you remotely 
think that your struggle is an effective mechanical or algorithmic 
procedure? I do not. I also do not know how to prove that a given 
performance is not algorithmic. What would count as such a proof?  So I 
must leave my conviction with you, unproven, but powerful I think. If 
true, then the mind is not a machine.
Stuart A. Kauffman , BEYOND REDUCTIONISM, Reinventing The Sacred, Edge, 
11.13.06, http://www.edge.org/3rd_culture/kauffman06/kauffman06_index.html


What Kauffman is talking about here is the Frame Problem.  Anyone who 
has gone through a standard AI/Cognitive Science training should 
recognize that.


But now here is the trouble with this argument.  What does he mean by 
saying that the mind is not 'algorithmic'?  He uses the keyphrase 
'effective procedure' when trying to describe this, but that is a loaded 
technical term.


What he means by 'algorithm' in this context is what some of us would 
call the rigid manipulation of simple, hard-edged symbols, using methods 
that have explicit semantics.


BUT if you go outside that interpretation of 'algorithm' and include 
mechanisms that work by a process of dynamic, stochastic relaxation, it 
is easy in principle to see how this issue (the Frame Problem) could be 
solved.  Or rather, it becomes difficult to see that a problem actually 
exists at all.


The trouble is that many of us would say that dynamic relaxation is 
just as algorithmic as anything else.  It just does not involve symbols 
and mechanisms with closed-form, explicit semantics.  There is no big 
mystery here, no destruction of the Computational Paradigm.
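
For what it's worth, here is a minimal sketch of what 'dynamic, stochastic relaxation' can mean as a perfectly ordinary algorithm (plain Python, standard library only; the network, the stored pattern and the parameters are arbitrary choices): a tiny Hopfield-style network settles from a corrupted pattern back to a stored one through repeated noisy local updates, with no explicit symbolic rule that mentions the global pattern.

import math, random

def store(patterns, n):
    # Hebbian weights: accumulate pairwise correlations between units
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def relax(w, start, temperature=0.5, sweeps=40):
    n = len(start)
    s = list(start)
    for _ in range(sweeps):
        for i in random.sample(range(n), n):          # asynchronous, random order
            field = sum(w[i][j] * s[j] for j in range(n))
            z = max(-50.0, min(50.0, 2.0 * field / temperature))
            # Glauber rule: flip unit i probabilistically toward its local field
            s[i] = 1 if random.random() < 1.0 / (1.0 + math.exp(-z)) else -1
        temperature *= 0.9                            # anneal: gradually less noisy
    return s

random.seed(0)
n = 24
stored = [1 if i % 3 else -1 for i in range(n)]       # an arbitrary binary pattern
w = store([stored], n)
noisy = [-x if random.random() < 0.25 else x for x in stored]   # corrupt ~25%
result = relax(w, noisy)
print(sum(a == b for a, b in zip(result, stored)), "of", n, "units match the stored pattern")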

Re: [agi] Approximations of Knowledge

2008-06-28 Thread Brad Paulsen

Richard,

I think I'll get the older Waldrop book now because I want to learn more about 
the ideas surrounding complexity (and, in particular, its association with, and 
differentiation from, chaos theory) as soon as possible.  But, I will definitely 
put an entry in my Google calendar to keep a lookout for the new book in 2009.


Thanks very much for the information!

Cheers,

Brad


Richard Loosemore wrote:

Brad Paulsen wrote:

Or, maybe...

Complexity: Life at the Edge of Chaos
Roger Lewin, 2000 $10.88 (new, paperback) from Amazon (no used copies)
Complexity: Life at the Edge of Chaos by Roger Lewin (Paperback - Feb 
15, 2000)


Nope, not that one either!

Darn.

I think it may have been Simplexity (Kluger), but I am not sure.

Interestingly enough, Melanie Mitchell has a book due out in 2009 called 
The Core Ideas of the Sciences of Complexity.  Interesting title, 
given my thoughts in the last post.




Richard Loosemore








Re: [agi] Approximations of Knowledge

2008-06-28 Thread wannabe

Richard wrote:

Interestingly enough, Melanie Mitchell has a book due out in 2009
called The Core Ideas of the Sciences of Complexity.  Interesting
title, given my thoughts in the last post.


Thanks for the tip, Richard!  I like her book on CopyCat, and I'd  
heard she had been doing complexity stuff.  I will look for that.  I  
looked at the complexity stuff when it was first coming out.  As far  
as I can remember, not much has really come out of it, but it will be  
nice to hear what she has to say.


andi





Re: [agi] Approximations of Knowledge

2008-06-27 Thread Richard Loosemore

Abram Demski wrote:

Ah, so you do not accept AIXI either.


Goodness me, no ;-).  As far as I am concerned, AIXI is a mathematical 
formalism with loaded words like 'intelligence' attached to it, and then 
the formalism is taken as being about the real things in the world (i.e. 
intelligent systems) that those words normally signify.





Put this way, your complex system dilemma applies only to pure AGI,
and not to any narrow AI attempts, no matter how ambitious. But I
suppose other, totally different reasons (such as P != NP, if so) can
block those.

Is this the best way to understand your argument? Meaning, is the key
idea "intelligence is a complex global property, so we can't define
it"? If so, my original blog post is way off. My interpretation was
more like "intelligence is a complex global property, so we can't
predict its occurring based on local properties". These are two very
different arguments. Perhaps you are arguing both points?


My feeling is that it is a mixture of the two.  My main concern is not 
to *assert* that intelligence is a complex global property, but to ask 
Is there a risk that intelligence is a complex global property? and 
then to follow that with a second question, namely If it is complex, 
then what impact would this have on the methodology of AGI?.


The answers that I tried to bring out in that paper were that (1) there 
is a substantial risk that all intelligent systems must be at least 
partially complex (reason:  nobody seems to know how to build a complete 
intelligence without including a substantial dose of the kind of tangled 
mechanisms that almost always give rise to complexity), and (2) the 
impact on AGI methodology is potentially devastating, and (disturbingly) 
so subtle that it would be possible for a skeptic to deny it forever.


The impact would be devastating because the current approach to AI, if 
applied to a situation in which the target was a complex system, would 
just run around in circles forever, always building systems that were 
kind of smart, but which did not scale up to the real thing, or which 
could only work if we hand-craft every piece of knowledge that the 
system uses, and so on.  In fact, the predicted progress rate in AI 
research would show exactly the type of pattern that has existed for the 
last fifty years.  As I said in another response to someone recently, 
all of the progress that has been made is essentially a result of AI 
researchers implicitly using their own intuitions about how their minds 
work, while at the same time (mostly) denying that they are doing this.


So, going back to your question.  I do think that if intelligence is a 
(partially) complex global property, then it cannot be defined in a way 
that allows us to go from a definition to a prescription for a mechanism 
(i.e., we cannot simply set it up as an optimization problem).  That is 
not the direct purpose of my argument, but it is a corollary.  Your second 
point is closer to the goal of my argument, but I would rephrase it to 
say that getting a real intelligence (an AGI) to work probably will 
require at least part of the system to have a disconnected relationship 
between global and local, so in that sense we would not be able to 
'predict' the occurence of intelligence based on local properties.


Remember the bottom line.  My only goal is to ask how different 
methodologies would fare if intelligence is complex.





Richard Loosemore




Re: [agi] Approximations of Knowledge

2008-06-27 Thread Jim Bromer


 From: Richard Loosemore Jim,
 
 I'm sorry:  I cannot make any sense of what you say here.
 
 I don't think you are understanding the technicalities of the argument I 
 am presenting, because your very first sentence... But we can invent a 
 'mathematics' or a program that can is just completely false.  In a 
 complex system it is not possible to use analytic mathematics to 
 predict the global behavior of the system given only the rules that 
 determine the local mechanisms.  That is the very definition of a 
 complex system (note:  this is a complex system in the technical sense 
 of that term, which does not mean a complicated system in ordinary 
 language).
 Richard Loosemore

Well, let's forget about your theory for a second.  I think that an advanced AI 
program is going to have to be able to deal with complexity and that your 
analysis is certainly interesting and illuminating.

But I want to make sure that I understand what you mean here.  First of all, 
your statement, "it is not possible to use analytic mathematics to predict the 
global behavior of the system given only the rules that determine the local 
mechanisms."
By "analytic mathematics" are you referring to numerical analysis, which the 
article in Wikipedia, 
http://en.wikipedia.org/wiki/Numerical_analysis
describes as "the study of algorithms for the problems of continuous 
mathematics (as distinguished from discrete mathematics)"?  Because if you are 
saying that the study of continuous mathematics -as distinguished from discrete 
mathematics- cannot be used to represent discrete system complexity, then that 
is kind of a non-starter. It's a cop-out by initial definition. I am primarily 
interested in discrete programming (I am, of course, also interested in 
continuous systems as well), but in this discussion I was expressing my 
interest in measures that can be taken to simplify computational complexity.

Again, Wikipedia gives a slightly more complex definition of complexity than 
you do.  http://en.wikipedia.org/wiki/Complexity
I am not saying that your particular definition of complexity is wrong, I only 
want to make sure that I understand what it is that you are getting at.

The part of your sentence that read, ...given only the rules that determine 
the local mechanisms, sounds like it might well apply to the kind of system 
that I think would be necessary for a better AI program, but it is not 
necessarily true of all kinds of demonstrations of complexity (as I understand 
them).  For example, consider a program that demonstrates the emergence of 
complex behaviors from collections of objects that obey simple rules that 
govern their interactions.  One can use a variety of arbitrary settings for the 
initial state of the program to examine how different complex behaviors may 
emerge in different environments.  (I am hoping to try something like this when 
I buy my next computer with a great graphics chip in it.)  This means that 
complexity does not have to be represented only in states that had been 
previously generated by the system, as can be obviously seen in the fact that 
initial states are a necessity of such systems.
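
For concreteness, the kind of program I have in mind is something like the 
following bare-bones Python sketch (assuming Conway's Game of Life as the 
example; the function names and parameters are purely illustrative):

import random
from collections import Counter

def step(live):
    # One generation of Conway's Game of Life; 'live' is a set of (x, y) cells.
    neighbor_counts = Counter((x + dx, y + dy)
                              for (x, y) in live
                              for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                              if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has 3 live neighbors, or
    # 2 live neighbors and is alive now.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

def run(width=30, height=30, density=0.3, generations=100, seed=None):
    # Arbitrary random initial state; the simple local rules are fixed,
    # and the only way to see which complex behaviors emerge is to run it.
    rng = random.Random(seed)
    live = {(x, y) for x in range(width) for y in range(height)
            if rng.random() < density}
    for _ in range(generations):
        live = step(live)
    return live

Different seeds and densities give different emergent menageries, and the only 
way to find out which is to run the thing and watch.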

I think I get what you are saying about complexity in AI and the problems of 
research into AI that could be caused if complexity is the reality of advanced 
AI programming.

But if you are throwing technical arguments at me, some of which are trivial 
from my perspective, like the definition of "continuous mathematics (as 
distinguished from discrete mathematics)", then all I can do is wonder why.

Jim Bromer


  




Re: [agi] Approximations of Knowledge

2008-06-27 Thread Richard Loosemore

Jim Bromer wrote:



From: Richard Loosemore Jim,

I'm sorry:  I cannot make any sense of what you say here.

I don't think you are understanding the technicalities of the argument I 
am presenting, because your very first sentence... But we can invent a 
'mathematics' or a program that can is just completely false.  In a 
complex system it is not possible to use analytic mathematics to 
predict the global behavior of the system given only the rules that 
determine the local mechanisms.  That is the very definition of a 
complex system (note:  this is a complex system in the technical sense 
of that term, which does not mean a complicated system in ordinary 
language).

Richard Loosemore


Well, let's forget about your theory for a second.  I think that an advanced AI 
program is going to have to be able to deal with complexity and that your 
analysis is certainly interesting and illuminating.

But I want to make sure that I understand what you mean here.  First of all, your 
statement, it is not possible to use analytic mathematics to predict the global 
behavior of the system given only the rules that determine the local mechanisms.
By analytic mathematics are you referring to numerical analysis, which the article in Wikipedia, 
http://en.wikipedia.org/wiki/Numerical_analysis

describes as the study of algorithms for the problems of continuous mathematics (as 
distinguished from discrete mathematics).  Because if you are saying that the study 
of continuous mathematics -as distinguished from discrete mathematics- cannot be used to 
represent discrete system complexity, then that is kind of a non-starter. It's a cop-out 
by initial definition. I am primarily interested in discrete programming (I am, of 
course also interested in continuous systems as well), but in this discussion I was 
expressing my interest in measures that can be taken to simplify computational complexity.

Again, Wikipedia gives a slightly more complex definition of complexity than 
you do.  http://en.wikipedia.org/wiki/Complexity
I am not saying that your particular definition of complexity is wrong, I only 
want to make sure that I understand what it is that you are getting at.

The part of your sentence that read, ...given only the rules that determine the 
local mechanisms, sounds like it might well apply to the kind of system that I 
think would be necessary for a better AI program, but it is not necessarily true of all 
kinds of demonstrations of complexity (as I understand them).  For example, consider a 
program that demonstrates the emergence of complex behaviors from collections of objects 
that obey simple rules that govern their interactions.  One can use a variety of 
arbitrary settings for the initial state of the program to examine how different complex 
behaviors may emerge in different environments.  (I am hoping to try something like this 
when I buy my next computer with a great graphics chip in it.)  This means that 
complexity does not have to be represented only in states that had been previously 
generated by the system, as can be obviously seen in the fact that initial states are a 
necessity of such systems.

I think I get what you are saying about complexity in AI and the problems of 
research into AI that could be caused if complexity is the reality of advanced 
AI programming.

But if you are throwing technical arguments at me, some of which are trivial from my 
perspective like the definition of, continuous mathematics (as distinguished from 
discrete mathematics), then all I can do is wonder why.


Jim,

With the greatest of respect, this is a topic that will require some 
extensive background reading on your part, because the misunderstandings 
in your above text are too deep for me to remedy in the scope of one or 
two list postings.  For example, my reference to "analytic mathematics" 
has nothing at all to do with the Wikipedia entry you found, alas.  The 
word has many uses, and the one I am employing is meant to point up a 
distinction between classical mathematics that allows equations to be 
solved algebraically, and experimental mathematics that solves systems 
by simulation.  "Analytic" means "by analysis" in this context...but this 
is a very abstract sense of the word that I am talking about here, and 
it is very hard to convey.


This topic is all about 'complex systems' which is a technical term that 
does not mean systems that are complicated (in the everyday sense of 
'complicated').  To get up to speed on this, I recommend a popular 
science book called Complexity by Waldrop, although there was also a 
more recent book whose name I forget, which may be better.  You could 
also read Wolfram's A New Kind of Science, but that is huge and does 
not come to the simple point very easily.


I am happy to make an attempt to bridge the gap by answering questions, 
but you must begin with the understanding that this would be a dialog 
between someone who has been doing research in a field for over 25 

Re: [agi] Approximations of Knowledge

2008-06-26 Thread Richard Loosemore

Jim Bromer wrote:



- Original Message 
From: Richard Loosemore Jim,

I'm sorry:  I cannot make any sense of what you say here.

I don't think you are understanding the technicalities of the argument I 
am presenting, because your very first sentence... But we can invent a 
'mathematics' or a program that can is just completely false.  In a 
complex system it is not possible to use analytic mathematics to 
predict the global behavior of the system given only the rules that 
determine the local mechanisms.  That is the very definition of a 
complex system (note:  this is a complex system in the technical sense 
of that term, which does not mean a complicated system in ordinary 
language).

Richard Loosemore
--
I don't feel that you are seriously interested in discussing the subject with 
me.  Let me know if you ever change your mind.


No, I am seriously interested in discussing the subject with you:  I 
just explained a problem with the statement you made.  If I was not 
interested in discussing, I would not have gone to that trouble.


I suspect you are offended by my comment that I cannot make sense of 
what you say.  This is just my honest reaction to what you wrote.




Richard Loosemore




Re: [agi] Approximations of Knowledge

2008-06-26 Thread Abram Demski
Ah, so you do not accept AIXI either.

Put this way, your complex system dilemma applies only to pure AGI,
and not to any narrow AI attempts, no matter how ambitious. But I
suppose other, totally different reasons (such as P != NP, if so) can
block those.

Is this the best way to understand your argument? Meaning, is the key
idea "intelligence is a complex global property, so we can't define
it"? If so, my original blog post is way off. My interpretation was
more like "intelligence is a complex global property, so we can't
predict its occurring based on local properties". These are two very
different arguments. Perhaps you are arguing both points?

On Wed, Jun 25, 2008 at 6:20 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
[..]
 The confusion in our discussion has to do with the assumption you listed
 above:  ...I am implicitly assuming that we have some exact definition of
 intelligence, so that we know what we are looking for...

 This is precisely what we do not have, and which we will quite possibly
 never have.
[..]
 Richard Loosemore




Re: [agi] Approximations of Knowledge

2008-06-25 Thread Jim Bromer
Loosemore said:
But now ... suppose, ... that there do not exist ANY 3-sex cellular 
automata in which there are emergent patterns equivalent to the glider 
and glider gun.  ...Conway ... can search through the entire space of 
3-sex automata..., and he will never build a  system that satisfies his 
requirement.

This is the boxed-in corner that I am talking about.  We decide that 
intelligence must be built with some choice of logical formalism, plus 
heuristics, and we assume that we can always keep jiggling the 
heuristics until the system as a whole shows a significant degree of 
intelligence.  But there is nothing in the world that says that this is 
possible.

...mathematics cannot possibly tell you that this part of the space does 
not contain any solutions.  That is the whole point of complex systems, 
n'est pas?  No analysis will let you know what the global properties are 
without doing a brute force exploration of (simulations of) the system.


But we can invent a 'mathematics' or a program that can.  By understanding that 
a model is not perfect, and recognizing that references may not mesh perfectly, 
a program can imagine other possibilities and these possibilities can be based 
on complex interrelations built between feasible strands.  Approximations do 
not need to be limited to weighted expressions, general vagueness or something 
like that.  From this point it is just a matter of devising a 'mathematical' - 
a programmed - system to discover actual feasibilities.  The Game of Life did 
not solve the contemporary problem of AI because it was biased to create a 
chain of progression and it wasted the memory of those results that did not 
immediately result in a payoff but may have fit into other developments.  And 
it did not explore the relative reduction space.  The reconciliation between 
the study of possible splices of previously seen chains of products and 
empirical feasibility may be an open-ended process, but it could be governed 
by a program.  It may be AI-complete, but the subtasks needed to run a search 
from imaginative feasibility to empirical feasibility can be governed by logic 
(even though it would be an open-ended, AI-complete search).

I agree with what you are saying in the broader sense, but I do believe that 
the research problem could be governed by a logical system, although it would 
require a great many resources to search the Cantorian diagonal infinities 
space of possible arrangements of relative reductions.  Relative reduction 
means that in order to discover the nature of certain mathematical problems we 
may (usually) have to use reductionism to discover all of the salient features 
that would be necessary to create a mathematical algorithm to produce the range 
of desired outputs.  But the system of reductionist methods has to be relative 
to the features of the system; a set of elements cannot be taken for granted, 
you have to discover the pseudo-elements (or relative elements) of the system 
relative to the features of the problem.

Jim Bromer





- Original Message 
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, June 24, 2008 9:02:31 PM
Subject: Re: [agi] Approximations of Knowledge

Abram Demski wrote:
 I'm still not really satisfied, though, because I would personally
 stop at the stage when the heuristic started to get messy, and say,
 The problem is starting to become AI-complete, so at this point I
 should include a meta-level search to find a good heuristic for me,
 rather than trying to hard-code one...
 And at that point, your lab and my lab are essentially starting to do
 the same thing.  You need to start searching the space of possible
 heuristics in a systematic way, rather than just pick a hunch and go
 with it.

 The problem, though, is that you might already have gotten yourself into
 a You Can't Get There By Starting From Here situation.  Suppose your
 choice of basic logical formalism, and knowledge representation format
 (and the knowledge acquisition methods that MUST come along with that
 formalism) has boxed you into a corner in which there does not exist any
 choice of heuristic control mechanism that will get your system up into
 human-level intelligence territory?
 
 If the underlying search space was sufficiently general, we are OK,
 there is no way to get boxed in except by the heuristic.

Wait:  we are not talking about the same thing here.

Analogous situation.  Imagine that John Horton Conway is trying to 
invent a cellular automaton with particular characteristics - say, he 
has already decided that the basic rules MUST show the global 
characteristic of having a thing like a glider and a thing like a glider 
gun.  (This is equivalent to us saying that we want to build a system 
that has the particular characteristics that we colloquially call 
'intelligence', and we will do it with a system that is complex).

But now

Re: [agi] Approximations of Knowledge

2008-06-25 Thread Richard Loosemore

Jim Bromer wrote:

Loosemore said: But now ... suppose, ... that there do not exist ANY
3-sex cellular automata in which there are emergent patterns
equivalent to the glider and glider gun.  ...Conway ... can search
through the entire space of 3-sex automata..., and he will never
build a  system that satisfies his requirement.

This is the boxed-in corner that I am talking about.  We decide that
 intelligence must be built with some choice of logical formalism,
plus heuristics, and we assume that we can always keep jiggling the 
heuristics until the system as a whole shows a significant degree of

 intelligence.  But there is nothing in the world that says that this
is possible.

...mathematics cannot possibly tell you that this part of the space
does not contain any solutions.  That is the whole point of complex
systems, n'est-ce pas?  No analysis will let you know what the global
properties are without doing a brute force exploration of
(simulations of) the system.





But we can invent a 'mathematics' or a program that can. By
understanding that a model is not perfect, and recognizing that
references may not mesh perfectly, a program can imagine other
possibilities and these possibilities can be based on complex
interrelations built between feasible strands. Approximations do not
need to be limited to weighted expressions, general vagueness or
something like that. From this point it is just a matter of devising a
'mathematical' - a programmed - system to discover actual feasibilities.
The Game of Life did not solve the contemporary problem of AI because it
was biased to create a chain of progression and it wasted the memory of
those results that did not immediately result in a payoff but may have
fit into other developments. And it did not explore the relative
reduction space. The reconciliation between the study of possible
splices of previously seen chains of products and empirical feasibility
may be an open ended process but it could be governed by a program. It
may be AI-complete but the sub tasks to run a search from imaginative
feasibility to empirical feasibility can be governed by logic (even
though it would be open ended AI-complete search.) 



I agree with what you are saying in the broader sense, but I do believe
that the research problem could be governed by a logical system,
although it would require a great many resources to search the Cantorian
diagonal infinities space of possible arrangements of relative
reductions. Relative reduction means that in order to discover the
nature of certain mathematical problems we may (usually) have to use
reductionism to discover all of the salient features that would be
necessary to create a mathematical algorithm to produce the range of
desired outputs. But the system of reductionist methods has to be
relative to the features of the system; a set of elements cannot be
taken for granted, you have to discover the pseudo-elements (or relative
elements) of the system relative to the features of the problem.


Jim,

I'm sorry:  I cannot make any sense of what you say here.

I don't think you are understanding the technicalities of the argument I 
am presenting, because your very first sentence... But we can invent a 
'mathematics' or a program that can is just completely false.  In a 
complex system it is not possible to use analytic mathematics to 
predict the global behavior of the system given only the rules that 
determine the local mechanisms.  That is the very definition of a 
complex system (note:  this is a complex system in the technical sense 
of that term, which does not mean a complicated system in ordinary 
language).




Richard Loosemore









Re: [agi] Approximations of Knowledge

2008-06-25 Thread Abram Demski
It seems as if we are beginning to talk past each other. I think the
problem may be that we have different implicit conceptions of the sort
of AI being constructed. My implicit conception is that of an
optimization problem. The AI is given the challenge of formulating the
best response to its input that it can muster within real-world time
constraints. This is in some sense always a search problem; it just might
be all heuristic, so that it doesn't look much like a search. In
designing an AI, I am implicitly assuming that we have some exact
definition of intelligence, so that we know what we are looking for.
This makes the optimization problem well-defined: the search space is
that of all possible responses to the input, and the utility function
is our definition of intelligence. *Our* problem is to find (1)
efficient optimal search strategies, and where that fails, (2) good
heuristics.
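
A minimal sketch of that framing in Python (the utility function, standing in
for the exact definition of intelligence, and the heuristic are of course
placeholders of my own):

import time

def best_response(candidates, utility, heuristic, budget_seconds=1.0):
    # Search the space of possible responses for the highest-utility one,
    # visiting the candidates the heuristic ranks as most promising first,
    # and stopping when the real-world time budget runs out.
    deadline = time.monotonic() + budget_seconds
    best, best_utility = None, float("-inf")
    for response in sorted(candidates, key=heuristic, reverse=True):
        if time.monotonic() > deadline:
            break
        score = utility(response)
        if score > best_utility:
            best, best_utility = response, score
    return best

Everything interesting is hidden inside utility() and heuristic(); *our* job
as designers is exactly items (1) and (2) above.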

I'll admit that the general Conway analogy applies, because we are
looking for heuristics with the property of giving good answers most
of the time, and the math is sufficiently complicated as to be
intractable in most cases. But your more recent variation, where
Conway goes amiss, does not seem to be analogous?

On Tue, Jun 24, 2008 at 9:02 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Abram Demski wrote:

 I'm still not really satisfied, though, because I would personally
 stop at the stage when the heuristic started to get messy, and say,
 The problem is starting to become AI-complete, so at this point I
 should include a meta-level search to find a good heuristic for me,
 rather than trying to hard-code one...

 And at that point, your lab and my lab are essentially starting to do
 the same thing.  You need to start searching the space of possible
 heuristics in a systematic way, rather than just pick a hunch and go
 with it.

 The problem, though, is that you might already have gotten yourself into
 a You Can't Get There By Starting From Here situation.  Suppose your
 choice of basic logical formalism, and knowledge representation format
 (and the knowledge acquisition methods that MUST come along with that
 formalism) has boxed you into a corner in which there does not exist any
 choice of heuristic control mechanism that will get your system up into
 human-level intelligence territory?

 If the underlying search space was sufficiently general, we are OK,
 there is no way to get boxed in except by the heuristic.

 Wait:  we are not talking about the same thing here.

 Analogous situation.  Imagine that John Horton Conway is trying to invent a
 cellular automaton with particular characteristics - say, he has already
 decided that the basic rules MUST show the global characteristic of having a
 thing like a glider and a thing like a glider gun.  (This is equivalent to
 us saying that we want to build a system that has the particular
 characteristics that we colloquially call 'intelligence', and we will do it
 with a system that is complex).

 But now Conway boxes himself into a corner:  he decides, a priori, that the
 cellular automaton MUST have three sexes, instead of the two sexes that we
 are familiar with in Game of Life.  So three states for every cell.  But now
 (we will suppose, for the sake of the argument), it just happens to be the
 case that there do not exist ANY 3-sex cellular automata in which there are
 emergent patterns equivalent to the glider and glider gun.  Now, alas,
 Conway is up poop creek without an instrument of propulsion - he can search
 through the entire space of 3-sex automata until the end of the universe,
 and he will never build a system that satisfies his requirement.

 This is the boxed-in corner that I am talking about.  We decide that
 intelligence must be built with some choice of logical formalism, plus
 heuristics, and we assume that we can always keep jiggling the heuristics
 until the system as a whole shows a significant degree of intelligence.  But
 there is nothing in the world that says that this is possible.  We could be
 in exactly the same system as our hypothetical Conway, trying to find a
 solution in a part of the space of all possible systems in which there do
 not exist any solutions.

 The real killer is that, unlike the example you mention below, mathematics
 cannot possibly tell you that this part of the space does not contain any
 solutions.  That is the whole point of complex systems, n'est-ce pas?  No
 analysis will let you know what the global properties are without doing a
 brute force exploration of (simulations of) the system.


 Richard Loosemore



 This is what the mathematics is good for. An experiment, I think, will
 not tell you this, since a formalism can cover almost everything but
 not everything. For example, is a given notation for functions
 Turing-complete, or merely primitive recursive? Primitive recursion is
 amazingly expressive, so I think it would be easy to be fooled. But a
 proof of Turing-completeness will suffice.





 

RE: [agi] Approximations of Knowledge

2008-06-25 Thread Derek Zahn
Richard,
 
If I can make a guess at where Jim is coming from:
 
Clearly, intelligent systems CAN be produced.  Assuming we can define 
"intelligent system" well enough to recognize it, we can generate systems at 
random until one is found.  That is impractical, however.  So, we can look at 
the problem as one of search optimization.  Evolution produced intelligent 
systems through a biased search, for example, so it is at least possible to 
improve search over completely random generate and test.
 
What other ways can be used to speed up search?  Jim is suggesting some methods 
that he believes may help.  If I understand what you've said about your 
approach, you have some very different methods than what he is proposing to 
focus the search.  I do not understand exactly what Jim is proposing; 
presumably he is aiming to use his SAT solver to guide the search toward areas 
that contain partial solutions or promising partial models of some sort.
 
It seems to me very difficult to define the goal formally, very difficult to 
develop a meta system in which a sufficiently broad class of candidate systems 
can be expressed, and very difficult to describe the "splices" or "reductions" 
or partial models in such a way as to smooth the fitness landscape and thus speed 
up search.  So I don't know how practical such a plan is.
 
But (again assuming I understand Jim's approach) it avoids your complex system 
arguments because it is not making any effort to predict global behavior from 
the low-level system components, it's just searching through possibilities. 
 




Re: [agi] Approximations of Knowledge

2008-06-25 Thread Steve Richfield
Jim,

On 6/24/08, Jim Bromer [EMAIL PROTECTED] wrote:

  Although I do suffer from an assortment of biases, I would not get
 worried to see any black man walking behind me at night.  For example, if I
 saw Andrew Young or Bill Cosby walking behind me I don't think I would be
 too worried.


However, you would have to look very carefully to identify these people with
confidence. Why would you bother to look so carefully? Obviously, because of
some sense of alarm.

  Or, if I was walking out of a campus library and a young black man
 carrying some books was walking behind me,


Again, you would have to look carefully enough to verify age, and that the
books weren't bound together with a belt or rope so they could be used as a
weapon. Again, why would you bother to look so carefully? Obviously again,
because of some sense of alarm.

  I would not be too worried about that either.


OK, so you have eliminated ~1% of the cases. How about the other 99% of the
cases?

  Your statement was way over the line, and it showed some really bad
 judgment.


Apparently you don't follow the news very well. My statement was
an obvious paraphrase from a fairly recent statement made by Rev Jesse
Jackson, who says that HE gets worried when a black man is walking behind
him. Perhaps I should have attributed my statement for those who
don't follow the news. I think that if he gets worried, that the rest of us
should also pay some attention.

However, your comment broadly dismissing what I said (reason for possible
alarm) based on some narrow possible exceptions (which would only be
carefully verified *BECAUSE* of such alarm) does indeed show that your
thinking is quite clouded and wound around the axle of PC (Political
Correctness), and hence we shouldn't be expecting any new ideas from you
anytime soon.

The message here that you will probably still completely miss, but which
hopefully other readers here will get, is that even bright people like you
are UNABLE to program AGIs, or to state non-dangerous goals, or even to
recognize obvious dangers. The whole concept of human guidance is SO deeply
flawed that I see no hope of it ever working in any useful way. Not in this
century or the next.

Again, for the umpteenth time, has ANYONE here bothered yet to read the REST
of the Colossus trilogy that started with *The Forbin Project* movie? If we
are going to rehash issues that have already been written about, it would
sure be nice to fast-forward over past writings.

Steve Richfield
=

   - Original Message 
 From: Steve Richfield [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Monday, June 23, 2008 10:53:07 PM
 Subject: Re: [agi] Approximations of Knowledge

 Andy,

 This is a PERFECT post, because it so perfectly illustrates a particular
 point of detachment from reality that is common among AGIers. In the real
 world we do certain things to achieve a good result, but when we design
 politically correct AGIs, we banish the very logic that allows us to
 function. For example, if you see a black man walking behind you at night,
 you rightly worry, but if you include that in your AGI design, you would be
 dismissed as a racist.

 Effectively solving VERY VERY difficult problems, like why a particular
 corporation is failing after other experts have failed, is a multiple-step
 process that starts with narrowing down the vast field of possibilities. As
 others have already pointed out here, this is often done in a rather summary
 and non-probabilistic way. Perhaps all of the really successful programmers
 that you have known have had long hair, so if the programming is failing and
 the programmer has short hair, then maybe there is an attitude issue to look
 into. Of course this does NOT necessarily mean that there is any linkage at
 all - just another of many points to focus some attention to.

 Similarly, over the course of 100 projects I have developed a long list of
 rules that help me find the problems with a tractable amount of effort.
 No, I don't usually tell others my poorly-formed rules because they prove
 absolutely NOTHING, only focus further effort. I have a special assortment
 of rules to apply whenever God is mentioned. After all, not everyone thinks
 that God has the same motivations, so SOME approach is needed to paradigm
 shift one person's statements to be able to be understood by another
 person. The posting you responded to was expressing one such rule. That
 having been said...

 On 6/22/08, J. Andrew Rogers [EMAIL PROTECTED] wrote:


 Somewhere in the world, there is a PhD chemist and a born-again Christian
 on another mailing list ...the project had hit a serious snag, and so the
 investors brought in a consultant that would explain why the project was
 broken by defectively reasoning about dubious generalizations he pulled out
 of his ass...


 Of course I don't make any such (I freely admit to dubious) generalizations
 to investors. However, I immediately drill down to find out

Re: [agi] Approximations of Knowledge

2008-06-25 Thread Richard Loosemore

Abram Demski wrote:

It seems as if we are beginning to talk past each other. I think the
problem may be that we have different implicit conceptions of the sort
of AI being constructed. My implicit conception is that of an
optimization problem. The AI is given the challenge of formulating the
best response to its input that it can muster within real-world time
constraints. This is in some sense always a search problem; it just might
be all heuristic, so that it doesn't look much like a search. In
designing an AI, I am implicitly assuming that we have some exact
definition of intelligence, so that we know what we are looking for.
This makes the optimization problem well-defined: the search space is
that of all possible responses to the input, and the utility function
is our definition of intelligence. *Our* problem is to find (1)
efficient optimal search strategies, and where that fails, (2) good
heuristics.

I'll admit that the general Conway analogy applies, because we are
looking for heuristics with the property of giving good answers most
of the time, and the math is sufficiently complicated as to be
intractable in most cases. But your more recent variation, where
Conway goes amiss, does not seem to be analogous?


The confusion in our discussion has to do with the assumption you listed 
above:  ...I am implicitly assuming that we have some exact definition 
of intelligence, so that we know what we are looking for...


This is precisely what we do not have, and which we will quite possibly 
never have.


The reason?  If existing intelligent systems are complex systems, then 
when we look at one of them and say "That is my example of what is meant 
by 'intelligence'", we are pointing at a global property of a complex 
system.  If anyone thinks that the intelligence of existing intelligent 
systems is completely independent of all complex global properties of 
the system, the ball is in their court:  they must somehow show good 
reason for us to believe that this is the case - and so far in the 
history of philosophy, psychology and AI, nobody has ever come close to 
showing such a thing.  In other words, nobody can give a non-circular, 
practical definition that is demonstrably identical to the definition of 
intelligence in natural systems.  All the evidence (the tangled nature 
of the mechanisms that appear to be necessary to build an intelligence) 
points to the fact that intelligence is likely to be a complex global 
property.


Now, if intelligence *is* a global property of a complex system, it will 
not be possible to simply write down a clear definition of it and then 
optimize.  That is the point of the Conway analogy:  we would be in the 
same boat that he was.


So, in a way, when you wrote down that assumption, what you did was 
implicitly assert that human-level intelligence can definitely be 
achieved without needing to do it with a system that is complex.  That 
is an extremely strong assertion, and unfortunately there is no evidence 
(except the intuition of some people) that this is a valid assumption. 
Quite the contrary, all the evidence appears to point the other way.


So that one statement is really the crunch point.  All the rest is 
downhill from that point on.



Richard Loosemore





On Tue, Jun 24, 2008 at 9:02 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

Abram Demski wrote:

I'm still not really satisfied, though, because I would personally
stop at the stage when the heuristic started to get messy, and say,
The problem is starting to become AI-complete, so at this point I
should include a meta-level search to find a good heuristic for me,
rather than trying to hard-code one...

And at that point, your lab and my lab are essentially starting to do
the same thing.  You need to start searching the space of possible
heuristics in a systematic way, rather than just pick a hunch and go
with it.

The problem, though, is that you might already have gotten yourself into
a You Can't Get There By Starting From Here situation.  Suppose your
choice of basic logical formalism, and knowledge representation format
(and the knowledge acquisition methods that MUST come along with that
formalism) has boxed you into a corner in which there does not exist any
choice of heuristic control mechanism that will get your system up into
human-level intelligence territory?

If the underlying search space was sufficiently general, we are OK,
there is no way to get boxed in except by the heuristic.

Wait:  we are not talking about the same thing here.

Analogous situation.  Imagine that John Horton Conway is trying to invent a
cellular automaton with particular characteristics - say, he has already
decided that the basic rules MUST show the global characteristic of having a
thing like a glider and a thing like a glider gun.  (This is equivalent to
us saying that we want to build a system that has the particular
characteristics that we colloquially call 'intelligence', and we will do it
with a system that is 

Re: [agi] Approximations of Knowledge

2008-06-25 Thread Richard Loosemore

Derek Zahn wrote:

Richard,

If I can make a guess at where Jim is coming from:

Clearly, intelligent systems CAN be produced.  Assuming we can
define intelligent system well enough to recognize it, we can
generate systems at random until one is found.  That is impractical,
however.  So, we can look at the problem as one of search
optimization.  Evolution produced intelligent systems through a
biased search, for example, so it is at least possible to improve
search over completely random generate and test.

What other ways can be used to speed up search?  Jim is suggesting
some methods that he believes may help.  If I understand what you've
said about your approach, you have some very different methods than
what he is proposing to focus the search.  I do not understand
exactly what Jim is proposing; presumably he is aiming to use his SAT
solver to guide the search toward areas that contain partial
solutions or promising partial models of some sort.

It seems to me very difficult to define the goal formally, very
difficult to develop a meta system in which a sufficiently broad
class of candidate systems can be expressed, and very difficult to
describe the splices or reductions or partial models in such a
way to smooth the fitness landscape and thus speed up search.  So I
don't know how practical such a plan is.

But (again assuming I understand Jim's approach) it avoids your
complex system arguments because it is not making any effort to
predict global behavior from the low-level system components, it's
just searching through possibilities.


I hear what you say here, but the crucial issue is defining this thing 
called intelligence.  And, in the end, that is where the complex systems 
argument makes itself felt (so this is not really avoiding the complex 
systems problem, but just hiding it).


Let me explain these thoughts.  If we really could only define 
'intelligent system' well enough to recognize it then the generate and 
test you are talking about would be extremely blind ... we would not 
make any specific design decisions, but generate completely random 
systems and say "Is this one intelligent?" each time we built one.


Clearly, that would be ridiculously slow (as you point out).  Even the 
evolutionary biased search - in which you build simple systems and 
gradually elaborate them as you test them in combat - would still take a 
few billion years and a planet-sized computer.


But then you introduce the idea of speeding up the search in some way. 
Ahhh... now there's the rub.  To make the search more efficient, you 
have to have some idea of an error function:  you look at the 
intelligence of the current best try, and you feed that into a function 
that suggests what kind of changes in the low-level mechanisms will give 
rise to a *beneficial* change in the overall intelligence (an 
improvement, i.e.).  To do any better than random, you really must have 
an error function; this is almost the very definition of doing a search 
that is not random, no?  You have to have some idea of how a change in 
design will cause a change in high level behavior, and that is the error 
function.
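
To make the contrast concrete, here is a toy Python sketch (the bit-string 
'design' and the additive scoring function are my own stand-ins, chosen 
precisely because they are NOT complex).  Hill climbing only beats blind 
generate-and-test here because flipping one local bit gives a usable signal 
about the global score; if the global score were an emergent property with no 
such local-to-global link, the 'improve' step would be no better than chance.

import random

def global_score(design):
    # Stand-in for "overall intelligence".  Deliberately simple: the global
    # property is just the sum of the local parts, so an error function exists.
    return sum(design)

def random_search(n_bits, trials, rng):
    # Blind generate-and-test: keep the best of many random designs.
    best = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(trials):
        candidate = [rng.randint(0, 1) for _ in range(n_bits)]
        if global_score(candidate) > global_score(best):
            best = candidate
    return best

def hill_climb(n_bits, trials, rng):
    # Guided search: accept a local change whenever the error function says
    # the global score did not get worse.
    design = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(trials):
        i = rng.randrange(n_bits)
        tweaked = design[:]
        tweaked[i] ^= 1
        if global_score(tweaked) >= global_score(design):
            design = tweaked
    return design

rng = random.Random(0)
print(global_score(hill_climb(64, 500, rng)),
      global_score(random_search(64, 500, rng)))

With the same budget the hill climber typically ends far ahead, and that 
advantage is bought entirely by the local-to-global error signal.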


If the system you are talking about is not complex, then, no problem: 
an error function is findable, at least in principle.  But the very 
definition of a complex system is that such an error function cannot 
(absolutely cannot) be found.  You cannot say, "I need to improve the 
overall intelligence, *thus*, and THEREFORE I will make this change in 
the local mechanisms, because I have reason to believe that such a 
global change will be effected by this local change."  That is the one 
statement about a complex system that is verboten.


So it is that one quiet little statement about finding better ways to do 
the search that brings down the whole argument.  If intelligent systems 
can be built without making them complex, all well and good.  But if 
that is not possible (and the evidence indicates that it is not), then 
you must be very careful not to set up a research methodology in which 
you make the assumption that you are going to adjust the low level 
mechanisms in a way that will 'improve' the global performance in a 
desired way.  If anyone does include that implicit assumption in their 
methodology, they are unknowingly inserting an "And Then A Miracle 
Happens Here" step.


I should quickly add one comment about that last paragraph.  AI 
researchers clearly do do exactly what I have just said is impossible! 
They frequently look at the poor performance of an AI system and say "I 
think a change in this mechanism will improve things" ... and then, sure 
enough, they do get an improvement.  So does that mean my argument that 
there is a complex systems problem is just wrong?  No:  I have clearly said 
(though many people have missed this point, I think) that what AI 
researchers have been doing is implicitly using their understanding of 
human psychology (of their own minds, for the most part) to get ideas 
for how 

Re: [agi] Approximations of Knowledge

2008-06-25 Thread Jim Bromer



- Original Message 
From: Richard Loosemore Jim,

I'm sorry:  I cannot make any sense of what you say here.

I don't think you are understanding the technicalities of the argument I 
am presenting, because your very first sentence... But we can invent a 
'mathematics' or a program that can is just completely false.  In a 
complex system it is not possible to use analytic mathematics to 
predict the global behavior of the system given only the rules that 
determine the local mechanisms.  That is the very definition of a 
complex system (note:  this is a complex system in the technical sense 
of that term, which does not mean a complicated system in ordinary 
language).
Richard Loosemore
--
I don't feel that you are seriously interested in discussing the subject with 
me.  Let me know if you ever change your mind.
Jim Bromer











Re: [agi] Approximations of Knowledge

2008-06-24 Thread Steve Richfield
On 6/23/08, J. Andrew Rogers [EMAIL PROTECTED] wrote:


 Or it could simply mean that the vast majority of programmers and software
 monkeys are mediocre at best such that the handful of people you will meet
 with deep talent won't constitute a useful sample size.  Hell, even Brooks
 suggested as much and he was charitable. In all my years in software, I've
 only met a small number of people who were unambiguously wicked smart when
 it came to software, and while none of them could be confused with a
 completely mundane person they also did not have many other traits in common
 (though I will acknowledge they tend to be rational and self-analytical to a
 degree that is rare in most people though this is not a trait exclusive to
 these people). Of course, *my* sample size is also small and so it does not
 count for much.


I completely agree with all of the above, though it says nothing relevant to
the point that I was trying to make. That point was that we and presumably
our AGIs will use our experience to focus inquiry in complex situations.
That these focused efforts fail more often than they succeed is good,
compared with the disastrous alternative of failing 99.99% of the time
because our inquiries are NOT focused.

Again, as you apparently missed it on my previous email - what would you
suggest as an alternative?

 Similarly, over the course of 100 projects...


 Eh? Over 100 projects?  These were either very small projects or you are
 older than Methuselah.


Both are correct. Also, I had many fewer employers, as I had a LOT of repeat
business. These would sometimes bring me in for a couple of weeks of shock
treatment when they felt it was needed.

Steve Richfield





Re: [agi] Approximations of Knowledge

2008-06-24 Thread Jim Bromer
Although I do suffer from an assortment of biases, I would not get worried to 
see any black man walking behind me at night.  For example, if I saw Andrew 
Young or Bill Cosby walking behind me I don't think I would be too worried. Or, 
if I was walking out of a campus library and a young black man carrying some 
books was walking behind me, I would not be too worried about that either. Your 
statement was way over the line, and it showed some really bad judgment.
Jim Bromer


- Original Message 
From: Steve Richfield [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, June 23, 2008 10:53:07 PM
Subject: Re: [agi] Approximations of Knowledge


Andy,
 
This is a PERFECT post, because it so perfectly illustrates a particular point 
of detachment from reality that is common among AGIers. In the real world we do 
certain things to achieve a good result, but when we design politically correct 
AGIs, we banish the very logic that allows us to function. For example, if you 
see a black man walking behind you at night, you rightly worry, but if you 
include that in your AGI design, you would be dismissed as a racist.
 
Effectively solving VERY VERY difficult problems, like why a particular 
corporation is failing after other experts have failed, is a multiple-step 
process that starts with narrowing down the vast field of possibilities. As 
others have already pointed out here, this is often done in a rather summary 
and non-probabilistic way. Perhaps all of the really successful programmers 
that you have known have had long hair, so if the programming is failing and 
the programmer has short hair, then maybe there is an attitude issue to look 
into. Of course this does NOT necessarily mean that there is any linkage at all 
- just another of many points to focus some attention to.
 
Similarly, over the course of 100 projects I have developed a long list of 
rules that help me find the problems with a tractable amount of effort. No, I 
don't usually tell others my poorly-formed rules because they prove absolutely 
NOTHING, only focus further effort. I have a special assortment of rules to 
apply whenever God is mentioned. After all, not everyone thinks that God has 
the same motivations, so SOME approach is needed to paradigm shift one 
person's statements to be able to be understood by another person. The posting 
you responded to was expressing one such rule. That having been said...
 
On 6/22/08, J. Andrew Rogers [EMAIL PROTECTED] wrote: 

Somewhere in the world, there is a PhD chemist and a born-again Christian on 
another mailing list ...the project had hit a serious snag, and so the 
investors brought in a consultant that would explain why the project was broken 
by defectively reasoning about dubious generalizations he pulled out of his 
ass...
 
Of course I don't make any such (I freely admit to dubious) generalizations to 
investors. However, I immediately drill down to find out exactly why THEY SAY 
that they didn't stop and reconsider their direction when it should have been 
obvious that things had gone off track. When I hear about how God just couldn't 
have led them astray, I quote what they said in my report and suggest that 
perhaps the problem is that God isn't also underwriting the investment with 
limitless funds.
 
How would YOU (or your AGI) handle such situations? Would you (or your AGI) 
ignore past empirical evidence because of lack of proof or political 
incorrectness?
 
Steve Richfield
 


 


  




Re: [agi] Approximations of Knowledge

2008-06-24 Thread Abram Demski
 And Abram said,
 A revised version of my argument would run something like this. As the
 approximation problem gets more demanding, it gets more difficult to
 devise logical heuristics. Increasingly, we must rely on intuitions
 tested by experiments. There then comes a point when making the
 distinction between the heuristic and the underlying search becomes
 unimportant; the method is all heuristic, so to speak. At this point
 we are simply using messy methods,

 I wondered if Abram was talking about the way an AI program should work or
 the way research into AI should work, or the way AI programs and research
 into AI should work?
 Jim Bromer

The passage quoted above was intended to reflect a necessary
progression as we design AIs for more and more demanding tasks, as if
some hypothetical researcher started with a narrow AI and was
attempting to generalize it. Of course, people on this list will be
more prone to try starting at the AGI end of the spectrum without
going through the progression.




Re: [agi] Approximations of Knowledge

2008-06-24 Thread Abram Demski
 I'm still not really satisfied, though, because I would personally
 stop at the stage when the heuristic started to get messy, and say,
 The problem is starting to become AI-complete, so at this point I
 should include a meta-level search to find a good heuristic for me,
 rather than trying to hard-code one...

 And at that point, your lab and my lab are essentially starting to do
 the same thing.  You need to start searching the space of possible
 heuristics in a systematic way, rather than just pick a hunch and go
 with it.

 The problem, though, is that you might already have gotten yourself into
 a You Can't Get There By Starting From Here situation.  Suppose your
 choice of basic logical formalism, and knowledge representation format
 (and the knowledge acquisition methods that MUST come along with that
 formalism) has boxed you into a corner in which there does not exist any
 choice of heuristic control mechanism that will get your system up into
 human-level intelligence territory?

If the underlying search space was sufficiently general, we are OK,
there is no way to get boxed in except by the heuristic.

This is what the mathematics is good for. An experiment, I think, will
not tell you this, since a formalism can cover almost everything but
not everything. For example, is a given notation for functions
Turing-complete, or merely primitive recursive? Primitive recursion is
amazingly expressive, so I think it would be easy to be fooled. But a
proof of Turing-completeness will suffice.
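
A toy illustration of that distinction, in Python (my own example, not from
the thread): programs written only with for-loops whose bounds are fixed
before the loop runs correspond to primitive recursion, and they are
expressive enough to fool you; Ackermann's function is the classic total
function that escapes them and forces general, unbounded recursion.

def add(x, y):
    # Primitive-recursive style: only loops whose bounds are known in advance,
    # so every program written in this style halts.
    for _ in range(y):
        x += 1
    return x

def mul(x, y):
    acc = 0
    for _ in range(y):
        acc = add(acc, x)
    return acc

def power(x, y):
    acc = 1
    for _ in range(y):
        acc = mul(acc, x)
    return acc

def ackermann(m, n):
    # Total and computable, but NOT primitive recursive: no pre-bounded loop
    # structure suffices; it needs general (nested, unbounded) recursion.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

Whether a notation really is Turing-complete, rather than merely this
expressive, is exactly the kind of thing experiment will not tell you; a proof
will.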




Re: [agi] Approximations of Knowledge

2008-06-24 Thread Richard Loosemore

Abram Demski wrote:

I'm still not really satisfied, though, because I would personally
stop at the stage when the heuristic started to get messy, and say,
The problem is starting to become AI-complete, so at this point I
should include a meta-level search to find a good heuristic for me,
rather than trying to hard-code one...

And at that point, your lab and my lab are essentially starting to do
the same thing.  You need to start searching the space of possible
heuristics in a systematic way, rather than just pick a hunch and go
with it.

The problem, though, is that you might already have gotten yourself into
a You Can't Get There By Starting From Here situation.  Suppose your
choice of basic logical formalism, and knowledge representation format
(and the knowledge acquisition methods that MUST come along with that
formalism) has boxed you into a corner in which there does not exist any
choice of heuristic control mechanism that will get your system up into
human-level intelligence territory?


If the underlying search space was sufficiently general, we are OK,
there is no way to get boxed in except by the heuristic.


Wait:  we are not talking about the same thing here.

Analogous situation.  Imagine that John Horton Conway is trying to 
invent a cellular automaton with particular characteristics - say, he 
has already decided that the basic rules MUST show the global 
characteristic of having a thing like a glider and a thing like a glider 
gun.  (This is equivalent to us saying that we want to build a system 
that has the particular characteristics that we colloquially call 
'intelligence', and we will do it with a system that is complex).


But now Conway boxes himself into a corner:  he decides, a priori, that 
the cellular automaton MUST have three sexes, instead of the two sexes 
that we are familiar with in Game of Life.  So three states for every 
cell.  But now (we will suppose, for the sake of the argument), it just 
happens to be the case that there do not exist ANY 3-sex cellular 
automata in which there are emergent patterns equivalent to the glider 
and glider gun.  Now, alas, Conway is up poop creek without an 
instrument of propulsion - he can search through the entire space of 
3-sex automata until the end of the universe, and he will never build a 
system that satisfies his requirement.


This is the boxed-in corner that I am talking about.  We decide that 
intelligence must be built with some choice of logical formalism, plus 
heuristics, and we assume that we can always keep jiggling the 
heuristics until the system as a whole shows a significant degree of 
intelligence.  But there is nothing in the world that says that this is 
possible.  We could be in exactly the same situation as our hypothetical 
Conway, trying to find a solution in a part of the space of all possible 
systems in which there do not exist any solutions.


The real killer is that, unlike the example you mention below, 
mathematics cannot possibly tell you that this part of the space does 
not contain any solutions.  That is the whole point of complex systems, 
n'est-ce pas?  No analysis will let you know what the global properties are 
without doing a brute force exploration of (simulations of) the system.
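To make the "brute force exploration of (simulations of) the system" point
concrete, here is a minimal Python sketch (my illustration only): the way you
confirm that a given rule set supports a glider is to run the rules and watch
the pattern reproduce itself in a shifted position, not to deduce it from the
rules.

    from collections import Counter

    def life_step(cells):
        # One step of Conway's Game of Life on a set of live (x, y) cells.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in cells
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in cells)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    state = glider
    for _ in range(4):                      # a glider repeats, shifted, every 4 steps
        state = life_step(state)

    shifted = {(x + 1, y + 1) for (x, y) in glider}
    print(state == shifted)                 # True: a global property, found only by simulating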



Richard Loosemore




This is what the mathematics is good for. An experiment, I think, will
not tell you this, since a formalism can cover almost everything but
not everything. For example, is a given notation for functions
Turing-complete, or merely primitive recursive? Primitive recursion is
amazingly expressive, so I think it would be easy to be fooled. But a
proof of Turing-completeness will suffice.








Re: [agi] Approximations of Knowledge

2008-06-23 Thread Richard Loosemore

Abram Demski wrote:

To be honest, I am not completely satisfied with my conclusion on the
post you refer to. I'm not so sure now that the fundamental split
between logical/messy methods should occur at the line between perfect
and approximate methods. This is one type of messiness, but one only. I
think you are referring to a related but different messiness: not
knowing what kind of environment your AI is dealing with. Since we
don't know which kinds of models will fit best with the world, we
should (1) trust our intuitions to some extent, and (2) try things and
see how well they work. This is as Loosemore suggests.

On the other hand, I do not want to agree with Loosemore too strongly.
Mathematics and mathematical proof is a very important tool, and I
feel like he wants to reject it. His image of an AGI seems to be a
system built up out of totally dumb pieces, with intelligence emerging
unexpectedly. Mine is a system built out of somewhat smart pieces,
cooperating to build somewhat smarter pieces, and so on. Each piece
has provable smarts.


Okay, let me try to make some kind of reply to your comments here and in 
your original blog post.


It is very important to understand that the paper I wrote was about the 
methodology of AGI research, not about specific theories/models/systems 
within AGI.  It is about the way that we come up with ideas for systems 
and the way that we explore those systems, not about the content of 
anyone's particular ideas.


So, in the above text you refer to a split between logical and messy 
methods - now, it may well be that my paper would lead someone to 
embrace 'messy' methods and reject 'logical' ones, but that is a side 
effect of the argument, not the argument itself.  It does happen to be 
the case that I believe that logic-based methods are mistaken, but I 
could be wrong about that, and it could turn out that the best way to 
build an AGI is with a completely logic-based AGI, along with just one 
small mechanism that was Complex.  That would be perfectly consistent 
with my argument (though a little surprising, for other reasons).


Similarly, you suggest that I have an image of an AGI that is built out 
of totally dumb pieces, with intelligence emerging unexpectedly.  Some 
people have suggested that that is my view of AGI, but whether or not 
those people are correct in saying that [aside:  they are not!], that 
does not relate to the argument I presented, because it is all about 
specific AGI design preferences, whereas the thing that I have called 
the Complex Systems Problem is fairly neutral on most design decisions.


In your original blog post, also, you mention the way that AGI planning 
mechanisms can be built in such a way that they contain a logical 
substrate, but with heuristics that force the systems to make 
'sub-optimal' choices.  This is a specific instance of a more general 
design pattern:  logical engines that have 'inference control 
mechanisms' riding on their backs, preventing them from deducing 
everything in the universe whilst trying to come to a simple decision. 
The problem is that you have portrayed the distinction between 'pure' 
logical mechanisms and 'messy' systems that have heuristics riding on 
their backs, as equivalent to a distinction that you thought I was 
making between non-complex and complex AGI systems.  I hope you can see 
now that this is not what I was trying to argue.  My target would be the 
methodologies that people use to decide such questions as which 
heuristics to use in a planning mechanism, whether the representation 
used by the planning mechanism can co-exist with the learning 
mechanisms, and so on.


Now, having said all of that, what does the argument actually say, and 
does it make *any* claims at all about what sort of content to put in an 
AGI design?


The argument says that IF intelligent systems belong to the 'complex 
systems' class, THEN a it would be a dreadful mistake to use a certain 
type of scientific or engineering approach to build intelligent systems. 
 I tried to capture this with an analogy at one point:  if you were John 
Horton Conway, sitting down on Day 1 of your project to find a cellular 
automaton with certain global properties, you would not be able to use 
any standard scientific, engineering or mathematical tools to discover 
the rules that should go into your system - you would, in fact, have no 
option but to try rules at random until you found rules that gave the 
global behavior that you desired.


My point was that a modified form of that same problem (that inability 
to use our scientific intuitions to just go from a desired global 
behavior to the mechanisms that will generate that global behavior) 
could apply to the question of building an AGI.  I do not suggest that 
the problem will manifest itself in exactly the same way (it is not that 
we would make zero progress with current techniques, and have to use 
completely random trial and error, like Conway had to), but 

Re: [agi] Approximations of Knowledge

2008-06-23 Thread Abram Demski
Since combinatorial search problems are so common to artificial
intelligence, it has obvious applications. If such an algorithm can be
made, it seems like it could be used *everywhere* inside an AGI:
deduction (solve for cases consistent with constraints), induction
(search for the best model), planning... Particularly if there is a
generalization to soft constraint problems.
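For what it is worth, here is a toy Python sketch of the "use it everywhere"
idea (my illustration; the brute-force solver below is only a stand-in for
whatever fast satisfiability algorithm is being hoped for): a deduction step
becomes a constraint query, since a conclusion follows from some premises
exactly when the premises plus the negated conclusion are unsatisfiable.

    from itertools import product

    def satisfiable(clauses, variables):
        # Brute-force CNF check; clauses are lists of (variable, polarity) literals.
        for values in product([False, True], repeat=len(variables)):
            assignment = dict(zip(variables, values))
            if all(any(assignment[v] == polarity for v, polarity in clause)
                   for clause in clauses):
                return assignment
        return None

    # Premises: A, and (A -> B) written as (not A or B).  Query: does B follow?
    premises = [[("A", True)], [("A", False), ("B", True)]]
    negated_conclusion = [[("B", False)]]
    print(satisfiable(premises + negated_conclusion, ["A", "B"]))  # None, so B is entailed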

On 6/22/08, Jim Bromer [EMAIL PROTECTED] wrote:
 Abram,
 I did not group you with probability buffs.  One of the errors I feel that
 writers make when their field is controversial is that they begin
 representing their own opinions from the vantage of countering critics.
 Unfortunately, I am one of those writers, (or perhaps I am just projecting).
  But my comment about the probability buffs wasn't directed toward you, I
 was just using it as an exemplar (of something or another).

 Your comments seem to make sense to me although I don't know where you are
 heading.  You said:
 what should be hoped for is convergence to (nearly) correct models of
 (small parts of) the universe. So I suppose that rather than asking for
 meaning in a fuzzy logic, I should be asking for clear accounts of
 convergence properties...

 When you have to find a way to tie components of knowledge together
 you typically have to achieve another kind of convergence.  Even if these
 'components' of knowledge are reliable, they cannot usually be converged
 easily due to the complexity that their interrelations with other kinds of
 knowledge (other 'components' of knowledge) will cause.

 To follow up on what I previously said, if my logic program works it will
 mean that I can combine and test logical formulas of up to a few hundred
 distinct variables and find satisfiable values for these combinations in a
 relatively short period of time.  I think this will be an important method
 to test whether AI can be advanced by advancements in handling complexity
 even though some people do not feel that logical methods are appropriate to
 use on multiple source complexity.  As you seem to appreciate, logic can
  still be brought to the field even though it is not a purely logical game
 that is to be played.

 When I begin to develop some simple theories about a subject matter, I will
 typically create hundreds of minor variations concerning those theories over
 a period of time.  I cannot hold all those variations of the conjecture in
 consciousness at any one moment, but I do feel that they can come to mind in
 response to a set of conditions for which that particular set of variations
  was created.  So while a simple logical theory (about some subject) may
 be expressible with only a few terms, when you examine all of the possible
 variations that can be brought into conscious consideration in response to a
 particular set of stimuli, I think you may find that the theories could be
 more accurately expressed using hundreds of distinct logical values.

 If this conjecture of mine turns out to be true, and if I can actually get
 my new logical methods to work, then I believe that this new range of
 logical methods may show whether advancements in complexity can make a
 difference to AI even if its application does not immediately result in
 human level of intelligence.

 Jim Bromer


 - Original Message 
 From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Sunday, June 22, 2008 4:38:02 PM
 Subject: Re: [agi] Approximations of Knowledge

 Well, since you found my blog, you probably are grouping me somewhat
 with the probability buffs. I have stated that I will not be
 interested in any other fuzzy logic unless it is accompanied by a
 careful account of the meaning of the numbers.

 You have stated that it is unrealistic to expect a logical model to
 reflect the world perfectly. The intuition behind this seems clear.
 Instead, what should be hoped for is convergence to (nearly) correct
 models of (small parts of) the universe. So I suppose that rather than
 asking for meaning in a fuzzy logic, I should be asking for clear
 accounts of convergence properties... but my intuition says that from
 clear meaning, everything else follows.











Re: [agi] Approximations of Knowledge

2008-06-23 Thread Abram Demski
Thanks for the comments. My replies:



 It does happen to be the case that I
 believe that logic-based methods are mistaken, but I could be wrong about
 that, and it could turn out that the best way to build an AGI is with a
 completely logic-based AGI, along with just one small mechanism that was
 Complex.

Logical methods are quite Complex. This was part of my point. Logical
deduction in any sufficiently complicated formalism satisfies both
types of global-local disconnect that I mentioned (undecidability, and
computational irreducibility). If this were not the case, it seems
your argument would be much less appealing. (In particular, there
would be one less argument for the mind being complex; we could not
say logic has some subset of the mind's capabilities; a brute-force
theorem prover is a complex system; therefore the mind is probably a
complex system.)

 Similarly, you suggest that I have an image of an AGI that is built out of
 totally dumb pieces, with intelligence emerging unexpectedly.  Some people
 have suggested that that is my view of AGI, but whether or not those people
 are correct in saying that [aside:  they are not!]

Apologies. But your arguments do appear to point in that direction.

 In your original blog post, also, you mention the way that AGI planning
 The problem is that you have portrayed the
 distinction between 'pure' logical mechanisms and 'messy' systems that have
 heuristics riding on their backs, as equivalent to a distinction that you
 thought I was making between non-complex and complex AGI systems.  I hope
 you can see now that this is not what I was trying to argue.

You are right, this characterization is quite bad. I think that is
part of what was making me uneasy about my conclusion. My intention
was not that approximation should always equal a logical search with
messy heuristics stacked upon it. In fact, I had two conflicting
images in mind:

-A logical search with logical heuristics (such as greedy methods for
NP-complete problems, which are guaranteed to be fairly near optimal; a
small sketch follows just after this list)

-A messy method (such as a neural net or swarm) that somehow gives
you an answer without precise logic
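As a concrete instance of the first image, here is a small Python sketch (my
example, not Abram's): a greedy vertex-cover routine whose output is provably
within a factor of two of optimal, because it takes both endpoints of a maximal
matching.

    def greedy_vertex_cover(edges):
        # Take both endpoints of every edge not yet covered; the result is a
        # vertex cover of size at most twice the optimum (the classic guarantee).
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
    print(greedy_vertex_cover(edges))   # {1, 2, 3, 4}; the optimum here is {2, 4}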

A revised version of my argument would run something like this. As the
approximation problem gets more demanding, it gets more difficult to
devise logical heuristics. Increasingly, we must rely on intuitions
tested by experiments. There then comes a point when making the
distinction between the heuristic and the underlying search becomes
unimportant; the method is all heuristic, so to speak. At this point
we are simply using messy methods.

I'm still not really satisfied, though, because I would personally
stop at the stage when the heuristic started to get messy, and say,
The problem is starting to become AI-complete, so at this point I
should include a meta-level search to find a good heuristic for me,
rather than trying to hard-code one...
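A minimal Python sketch of that meta-level step (my illustration; the toy
knapsack problem and the three candidate heuristics are placeholder choices):
instead of hard-coding one heuristic, score several candidates on sample
problems and keep the best performer.

    import random

    def greedy_knapsack(items, capacity, key):
        # items: (value, weight) pairs; fill greedily in the order given by `key`.
        total_value = total_weight = 0
        for value, weight in sorted(items, key=key):
            if total_weight + weight <= capacity:
                total_value += value
                total_weight += weight
        return total_value

    candidates = {
        "highest value first": lambda item: -item[0],
        "lightest first": lambda item: item[1],
        "best value per weight": lambda item: -item[0] / item[1],
    }

    random.seed(0)
    trials = [[(random.randint(1, 20), random.randint(1, 10)) for _ in range(15)]
              for _ in range(50)]
    scores = {name: sum(greedy_knapsack(items, 25, key) for items in trials)
              for name, key in candidates.items()}
    print(max(scores, key=scores.get))   # the meta-level choice of heuristic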

 Finally, I should mention one general misunderstanding about mathematics.
  This argument has a superficial similarity to Godel's theorem, but you
 should not be deceived by that.  Godel was talking about formal deductive
 systems, and the fact that there are unreachable truths within such systems.
  My argument is about the feasibility of scientific discovery, when applied
 to systems of different sorts.  These are two very different domains.

I think it is fair to say that I accounted for this. In particular, I
said: "It's this second kind of irreducibility, computational
irreducibility, that I see as more relevant to AI." (Actually, I do
see Godel's theorem as relevant to AI; I should have been more
specific and said "relevant to AI's global-local disconnect.")




Re: [agi] Approximations of Knowledge

2008-06-23 Thread Jim Bromer
Loosemore said,
It is very important to understand that the paper I wrote was about the 
methodology of AGI research, not about specific theories/models/systems 
within AGI.  It is about the way that we come up with ideas for systems 
and the way that we explore those systems, not about the content of 
anyone's particular ideas.

And Abram said,
A revised version of my argument would run something like this. As the
approximation problem gets more demanding, it gets more difficult to
devise logical heuristics. Increasingly, we must rely on intuitions
tested by experiments. There then comes a point when making the
distinction between the heuristic and the underlying search becomes
unimportant; the method is all heuristic, so to speak. At this point
we are simply using messy methods,

I wondered whether Abram was talking about the way an AI program should work, the 
way research into AI should work, or both.
Jim Bromer


- Original Message 
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, June 23, 2008 3:11:16 PM
Subject: Re: [agi] Approximations of Knowledge

Thanks for the comments. My replies:



 It does happen to be the case that I
 believe that logic-based methods are mistaken, but I could be wrong about
 that, and it could turn out that the best way to build an AGI is with a
 completely logic-based AGI, along with just one small mechanism that was
 Complex.

Logical methods are quite Complex. This was part of my point. Logical
deduction in any sufficiently complicated formalism satisfies both
types of global-local disconnect that I mentioned (undecidability, and
computational irreducibility). If this were not the case, it seems
your argument would be much less appealing. (In particular, there
would be one less argument for the mind being complex; we could not
say logic has some subset of the mind's capabilities; a brute-force
theorem prover is a complex system; therefore the mind is probably a
complex system.)

 Similarly, you suggest that I have an image of an AGI that is built out of
 totally dumb pieces, with intelligence emerging unexpectedly.  Some people
 have suggested that that is my view of AGI, but whether or not those people
 are correct in saying that [aside:  they are not!]

Apologies. But your arguments do appear to point in that direction.

 In your original blog post, also, you mention the way that AGI planning
 The problem is that you have portrayed the
 distinction between 'pure' logical mechanisms and 'messy' systems that have
 heuristics riding on their backs, as equivalent to a distinction that you
 thought I was making between non-complex and complex AGI systems.  I hope
 you can see now that this is not what I was trying to argue.

You are right, this characterization is quite bad. I think that is
part of what was making me uneasy about my conclusion. My intention
was not that approximation should always equal a logical search with
messy heuristics stacked upon it. In fact, I had two conflicting
images in mind:

-A logical search with logical heuristics (such as greedy methods for
NP-complete problems, which are guaranteed to be fairly near optimal)

-A messy method (such as a neural net or swarm) that somehow gives
you an answer without precise logic

A revised version of my argument would run something like this. As the
approximation problem gets more demanding, it gets more difficult to
devise logical heuristics. Increasingly, we must rely on intuitions
tested by experiments. There then comes a point when making the
distinction between the heuristic and the underlying search becomes
unimportant; the method is all heuristic, so to speak. At this point
we are simply using messy methods.

I'm still not really satisfied, though, because I would personally
stop at the stage when the heuristic started to get messy, and say,
The problem is starting to become AI-complete, so at this point I
should include a meta-level search to find a good heuristic for me,
rather than trying to hard-code one...

 Finally, I should mention one general misunderstanding about mathematics.
  This argument has a superficial similarity to Godel's theorem, but you
 should not be deceived by that.  Godel was talking about formal deductive
 systems, and the fact that there are unreachable truths within such systems.
  My argument is about the feasibility of scientific discovery, when applied
 to systems of different sorts.  These are two very different domains.

I think it is fair to say that I accounted for this. In particular, I
said: It's this second kind of irreducibility, computational
irreducibility, that I see as more relevant to AI. (Actually, I do
see Godel's theorem as relevant to AI; I should have been more
specific and said relevant to AI's global-local disconnect.)



Re: [agi] Approximations of Knowledge

2008-06-23 Thread Richard Loosemore

Abram Demski wrote:

Thanks for the comments. My replies:




It does happen to be the case that I
believe that logic-based methods are mistaken, but I could be wrong about
that, and it could turn out that the best way to build an AGI is with a
completely logic-based AGI, along with just one small mechanism that was
Complex.


Logical methods are quite Complex. This was part of my point. Logical
deduction in any sufficiently complicated formalism satisfies both
types of global-local disconnect that I mentioned (undecidability, and
computational irreducibility). If this were not the case, it seems
your argument would be much less appealing. (In particular, there
would be one less argument for the mind being complex; we could not
say logic has some subset of the mind's capabilities; a brute-force
theorem prover is a complex system; therefore the mind is probably a
complex system.)


Okay, I made a mistake in my choice of words (I knew it when I wrote 
them, but neglected to go back and correct!).


I did not mean to imply that I *require* some complexity in an AGI 
formalism, and that finding some complexity would be a good thing, end 
of story, problem solved, etc.  So for example, you are correct to point 
out that most 'logical' systems do exhibit complexity, provided they do 
something realistically approximating intelligence.


Instead, what I meant to say was that we are not setting up our research 
procedures to cope with the complexity.  So, it might turn out that a 
good, robust AGI can be built with something like a regular logic-based 
formalism, BUT with just a few small aspects that are complex ... but 
unfortunately we are currently not able to discover what those complex 
parts should be like, because our current methodology is to use blind 
hunch and intuition (i.e. heuristics that look as though they will 
work).  Going back to your planning system example, it might be the case 
that only one choice of heuristic control mechanism will actually make a 
given logical formalism converge on fully intelligent behavior, but 
there might be 10^100 choices of possible control mechanism, and our 
current method for searching through the possibilities is to use 
intuition to pick likely candidates.


The point here is that a small amount of the factors that give rise to 
complexity can actually have a massive effect on the behavior of the 
system, but people are today acting as if a small amount of 
complexity-inducing characteristics means a small amount of 
unpredictability in the behavior.  This is simply not the case.








Similarly, you suggest that I have an image of an AGI that is built out of
totally dumb pieces, with intelligence emerging unexpectedly.  Some people
have suggested that that is my view of AGI, but whether or not those people
are correct in saying that [aside:  they are not!]


Apologies. But your arguments do appear to point in that direction.


In your original blog post, also, you mention the way that AGI planning
The problem is that you have portrayed the
distinction between 'pure' logical mechanisms and 'messy' systems that have
heuristics riding on their backs, as equivalent to a distinction that you
thought I was making between non-complex and complex AGI systems.  I hope
you can see now that this is not what I was trying to argue.


You are right, this characterization is quite bad. I think that is
part of what was making me uneasy about my conclusion. My intention
was not that approximation should always equal a logical search with
messy heuristics stacked upon it. In fact, I had two conflicting
images in mind:

-A logical search with logical heuristics (such as greedy methods for
NP-complete problems, which are guaranteed to be fairly near optimal)

-A messy method (such as a neural net or swarm) that somehow gives
you an answer without precise logic

A revised version of my argument would run something like this. As the
approximation problem gets more demanding, it gets more difficult to
devise logical heuristics. Increasingly, we must rely on intuitions
tested by experiments. There then comes a point when making the
distinction between the heuristic and the underlying search becomes
unimportant; the method is all heuristic, so to speak. At this point
we are simply using messy methods.


Ah, I agree completely here.  We are talking about a Wag The Dog 
scenario, where everyone focusses on the pristine beauty of the logical 
formalism, but turns a blind eye to the (assumed-to-be) trivial 
heuristic control mechanisms ... but in the end it is the heuristic 
control mechanism that is responsible for almost all of the actual behavior.






I'm still not really satisfied, though, because I would personally
stop at the stage when the heuristic started to get messy, and say,
The problem is starting to become AI-complete, so at this point I
should include a meta-level search to find a good heuristic for me,
rather than trying to hard-code one...


And at that point, 

Re: [agi] Approximations of Knowledge

2008-06-23 Thread Richard Loosemore

Jim Bromer wrote:

Loosemore said,
It is very important to understand that the paper I wrote was about the 
methodology of AGI research, not about specific theories/models/systems 
within AGI.  It is about the way that we come up with ideas for systems 
and the way that we explore those systems, not about the content of 
anyone's particular ideas.


And Abram said,
A revised version of my argument would run something like this. As the
approximation problem gets more demanding, it gets more difficult to
devise logical heuristics. Increasingly, we must rely on intuitions
tested by experiments. There then comes a point when making the
distinction between the heuristic and the underlying search becomes
unimportant; the method is all heuristic, so to speak. At this point
we are simply using messy methods,

I wondered if Abram was talking about the way an AI program should work or the 
way research into AI should work, or the way AI programs and research into AI 
should work?
Jim Bromer


I interpreted him (see parallel post) to be referring still to the 
question of how to deal with planning systems, where there is a 
formalism (the logic substructure) which cannot be allowed to run its 
methods to completion (because they would take too long) and which 
therefore has to use approximation methods, or heuristics, to guess 
which are the most likely best planning choices.  When the system is 
required to do more real-world-type performance (as in an AGI, rather 
than a narrow AI), its behavior will be dominated by the heuristics.


He then went on to talk about methodology:  do we just use intuitions to 
pick heuristics, or do we make the methodology more systematic by 
engaging in automatic searches of the space of possible heuristics?


My perspective on that question would back up one step:  if it is a 
complex system we are dealing with, we should have been using 
systematic, automatic searches of the design space BEFORE, when we were 
choosing whether or not to do planning with a Logic+Heuristics design!


But of course, that would be wildly, extravagantly infeasible.  So, 
instead, I propose to start from a basic design that is as similar as 
possible to the human design, and then do our systematic, automatic 
search (of the space of mechanism-designs) in an outward direction from 
that human-cognition baseline.  If intelligence involves even a small 
amount of complexity, it could well be that this is the only feasible 
way to ever get an intelligence up and running.


Treat it, in other words, as a calculus of variations problem.
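A schematic Python sketch of that outward search (my illustration; the
parameter vector and the fitness function are placeholders for "design of the
mechanisms" and "how intelligently does the whole system behave", which the
argument deliberately leaves open): start at the baseline and move only to
nearby variants that score better.

    import random

    def fitness(design):
        # Placeholder: stands in for empirically measuring the global behavior.
        return -sum((x - 0.7) ** 2 for x in design)

    def search_outward(baseline, steps=200, step_size=0.05):
        best, best_score = list(baseline), fitness(baseline)
        for _ in range(steps):
            variant = [x + random.uniform(-step_size, step_size) for x in best]
            score = fitness(variant)
            if score > best_score:          # keep only strictly better designs
                best, best_score = variant, score
        return best, best_score

    random.seed(1)
    baseline_design = [0.5] * 4             # the "human-cognition baseline"
    print(search_outward(baseline_design))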




Richard Loosemore.




Re: [agi] Approximations of Knowledge

2008-06-23 Thread Steve Richfield
Andy,

This is a PERFECT post, because it so perfectly illustrates a particular
point of detachment from reality that is common among AGIers. In the real
world we do certain things to achieve a good result, but when we design
politically correct AGIs, we banish the very logic that allows us to
function. For example, if you see a black man walking behind you at night,
you rightly worry, but if you include that in your AGI design, you would be
dismissed as a racist.

Effectively solving VERY VERY difficult problems, like why a particular
corporation is failing after other experts have failed, is a multiple-step
process that starts with narrowing down the vast field of possibilities. As
others have already pointed out here, this is often done in a rather summary
and non-probabilistic way. Perhaps all of the really successful programmers
that you have known have had long hair, so if the programming is failing and
the programmer has short hair, then maybe there is an attitude issue to look
into. Of course this does NOT necessarily mean that there is any linkage at
all - just another of many points to focus some attention to.

Similarly, over the course of 100 projects I have developed a long list of
rules that help me find the problems with a tractable amount of effort.
No, I don't usually tell others my poorly-formed rules because they prove
absolutely NOTHING, only focus further effort. I have a special assortment
of rules to apply whenever God is mentioned. After all, not everyone thinks
that God has the same motivations, so SOME approach is needed to paradigm
shift one person's statements to be able to be understood by another
person. The posting you responded to was expressing one such rule. That
having been said...

On 6/22/08, J. Andrew Rogers [EMAIL PROTECTED] wrote:


 Somewhere in the world, there is a PhD chemist and a born-again Christian
 on another mailing list ...the project had hit a serious snag, and so the
 investors brought in a consultant that would explain why the project was
 broken by defectively reasoning about dubious generalizations he pulled out
 of his ass...


Of course I don't make any such (I freely admit to dubious) generalizations
to investors. However, I immediately drill down to find out exactly why THEY
SAY that they didn't stop and reconsider their direction when it should have
been obvious that things had gone off track. When I hear about how God just
couldn't have led them astray, I quote what they said in my report and
suggest that perhaps the problem is that God isn't also underwriting the
investment with limitless funds.

How would YOU (or your AGI) handle such situations? Would you (or your AGI)
ignore past empirical evidence because of lack of proof or political
incorrectness?

Steve Richfield





Re: [agi] Approximations of Knowledge

2008-06-23 Thread J. Andrew Rogers


On Jun 23, 2008, at 7:53 PM, Steve Richfield wrote:

Andy,



The use of diminutives is considered rude in many parts of Anglo 
culture if the individual does not use it to identify themselves,  
though I realize it is common practice in some regions of the US. When  
in doubt, use the given form.



 This is a PERFECT post, because it so perfectly illustrates a  
particular point of detachment from reality that is common among  
AGIers. In the real world we do certain things to achieve a good  
result, but when we design politically correct AGIs, we banish the  
very logic that allows us to function. For example, if you see a  
black man walking behind you at night, you rightly worry, but if you  
include that in your AGI design, you would be dismissed as a racist.



You have clearly confused me with someone else.


Effectively solving VERY VERY difficult problems, like why a  
particular corporation is failing after other experts have failed,  
is a multiple-step process that starts with narrowing down the vast  
field of possibilities. As others have already pointed out here,  
this is often done in a rather summary and non-probabilistic way.  
Perhaps all of the really successful programmers that you have known  
have had long hair, so if the programming is failing and the  
programmer has short hair, then maybe there is an attitude issue to  
look into. Of course this does NOT necessarily mean that there is  
any linkage at all - just another of many points to focus some  
attention to.



Or it could simply mean that the vast majority of programmers and  
software monkeys are mediocre at best such that the handful of people  
you will meet with deep talent won't constitute a useful sample size.   
Hell, even Brooks suggested as much and he was charitable. In all my  
years in software, I've only met a small number of people who were  
unambiguously wicked smart when it came to software, and while none of  
them could be confused with a completely mundane person they also did  
not have many other traits in common (though I will acknowledge they  
tend to be rational and self-analytical to a degree that is rare in most  
people though this is not a trait exclusive to these people). Of  
course, *my* sample size is also small and so it does not count for  
much.




Similarly, over the course of 100 projects...



Eh? Over 100 projects?  These were either very small projects or you  
are older than Methuselah.  I've worked on a lot of projects, but  
nowhere near 100 and I was a consultant for many years.



J. Andrew Rogers




Re: [agi] Approximations of Knowledge

2008-06-22 Thread Jim Bromer
Abram Demski said:
To be honest, I am not completely satisfied with my conclusion on the
post you refer to. I'm not so sure now that the fundamental split
between logical/messy methods should occur at the line between perfect
and approximate methods. This is one type of messiness, but one only. I
think you are referring to a related but different messiness: not
knowing what kind of environment your AI is dealing with. Since we
don't know which kinds of models will fit best with the world, we
should (1) trust our intuitions to some extent, and (2) try things and
see how well they work...
Mathematics and mathematical proof is a very important tool...
Mine is a system built out of somewhat smart pieces,
cooperating to build somewhat smarter pieces, and so on. Each piece
has provable smarts.

Mathematics can be extended to include new kinds of relations and systems.  One 
of the problems I have had with AI-probability buffs is that there are other 
ways to deal with knowledge that is only partially understood and this kind of 
complexity can be extended to measurable quantities as well.  Notice that 
economics is not just probability.  There are measurable quantities in 
economics that are not based solely on the economics of money.

We cannot make perfect decisions.  However, we can often make fairly good 
decisions even when based on partial knowledge.  A conclusion, however, should 
not be taken as a reliable rule unless it has withstood numerous tests.  These 
empirical tests of a conclusion usually cause it to be modified.  Even a good 
conclusion will typically be modified by conditional variations after being 
extensively tested.  That is the nature of expertise.

Our conclusions are often only approximations, but they can contain 
unarticulated links to other possibilities that may indicate other ways of 
looking at the data or conditional variations to the base conclusion.

Jim Bromer


  




Re: [agi] Approximations of Knowledge

2008-06-22 Thread Jim Bromer


On 6/21/08, I wrote: 
The major problem I have is that writing a really really complicated computer 
program is really really difficult.
--
Steve Richfield replied:
Jim,

The ONLY rational approach to this (that I know of) is to construct an engine 
that develops and applies machine knowledge, wisdom, or whatever, and NOT write 
code yourself that actually deals with articles of knowledge/wisdom.
-

I agree with that, (assuming that I understand what you meant). 
-- 
Steve wrote:

REALLY complex systems may require multi-level interpreters, where a low-level 
interpreter provides a pseudo-machine on which to program a really smart 
high-level interpreter, on which you program your AGI. In ~1970 I wrote an 
ALGOL/FORTRAN/BASIC compiler that ran in just 16K bytes this way. At the bottom 
was a pseudo-computer whose primitives were fundamental to compiling. That 
pseudo-machine was then fed a program to read BNF and make compilers, which was 
then fed a BNF description of my compiler, with the output being my compiler in 
pseudo-machine code. One feature of this approach is that for anything to work, 
everything had to work, so once past initial debugging, it worked perfectly! 
Contrast this with modern methods that consume megabytes and never work quite 
right.
--
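A toy Python sketch of the layering being described (nothing like the 1970
system in scale; the three-opcode pseudo-machine and the expression "compiler"
are made-up stand-ins): a low-level pseudo-machine, and a higher layer that
targets it rather than doing the work directly.

    def run_pseudo_machine(program):
        # The low level: a trivial stack machine with three opcodes.
        stack = []
        for op, arg in program:
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                stack.append(stack.pop() + stack.pop())
            elif op == "MUL":
                stack.append(stack.pop() * stack.pop())
        return stack.pop()

    def compile_expr(expr):
        # The higher level: compile a nested (op, left, right) expression tree
        # down to pseudo-machine code instead of evaluating it directly.
        if isinstance(expr, (int, float)):
            return [("PUSH", expr)]
        op, left, right = expr
        return compile_expr(left) + compile_expr(right) + [(op, None)]

    code = compile_expr(("MUL", ("ADD", 2, 3), 4))   # (2 + 3) * 4
    print(run_pseudo_machine(code))                  # 20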

A compiler may be a useful tool to use in an advanced AI program (just as we 
all use compilers in our programming), but I don't feel that a compiler is a 
good basis for or a good metaphor for advanced AI.

--
Steve wrote:

The more complex the software, the better the design must be, and the more 
protected the execution must be. You can NEVER anticipate everything that might 
go into a program, so they must fail ever so softly.
 
Much of what I have been challenging others on this forum for came out of the 
analysis and design of Dr. Eliza. The real world definitely has some 
interesting structure, e.g. the figure 6 shape of cause-and-effect chains, and 
that problems are a phenomenon that exists behind people's eyeballs and NOT 
otherwise in the real world. Ignoring such things and diving in and hoping 
that machine intelligence will resolve all (as many/most here seem to believe) 
IMHO is a rookie error that leads nowhere useful.
Steve Richfield
---

I don't think that most people in this group think that machine intelligence 
will resolve all the remaining problems in designing artificial intelligence, 
although I have talked to people who feel that way, and the lack of discussion 
about resolving some of the complexity issues does seem curious to me.  Where 
are they coming from?  I don't know.  I think most of the people feel that once 
they get their basic programs working, that they will be able to figure out the 
rest on the fly.  This method hasn't worked yet, but as I mentioned I do think 
it has something to do with the difficulty of writing complicated computer 
programs. I know that you are one of the outspoken critics of faith-based 
programming, so at least there is some consistency in your comments.  I mention 
this because, I (seriously) believe that that the Lord may have indicated that 
my algorithm to solve the logical satisfiability problem will work, and if this 
is true, then that may mean
 that the algorithm may help resolve some lesser logical complexity problems.  
Although we cannot use pure logic to represent knowable knowledge, I can use 
logic to represent theory-like relations between references to knowable 
components of knowledge.  (By the way, please note that I did not claim that I 
presently have a polynomial time solution to SAT, and I did not say that I was 
absolutely certain that God pronounced my SAT algorithm to be workable.  I have 
carefully qualified my statements about this.  I would also suggest that you 
think about the fact that we have to use different kinds of reasoning with 
different kinds of questions.  Regardless of your own beliefs, the topic about 
the necessity of using different kinds of reasoning for different kinds of 
question is very relevant to discussions about advanced AI.)

What do you mean by the figure 6 shape of cause-and-effect chains?  It must 
refer to some kind of feedback-like effect.

Jim Bromer



  




Re: [agi] Approximations of Knowledge

2008-06-22 Thread Abram Demski
Well, since you found my blog, you probably are grouping me somewhat
with the probability buffs. I have stated that I will not be
interested in any other fuzzy logic unless it is accompanied by a
careful account of the meaning of the numbers.

You have stated that it is unrealistic to expect a logical model to
reflect the world perfectly. The intuition behind this seems clear.
Instead, what should be hoped for is convergence to (nearly) correct
models of (small parts of) the universe. So I suppose that rather than
asking for meaning in a fuzzy logic, I should be asking for clear
accounts of convergence properties... but my intuition says that from
clear meaning, everything else follows.

On Sun, Jun 22, 2008 at 9:45 AM, Jim Bromer [EMAIL PROTECTED] wrote:
 Abram Demski said:
 To be honest, I am not completely satisfied with my conclusion on the
 post you refer to. I'm not so sure now that the fundamental split
 between logical/messy methods should occur at the line between perfect
 and approximate methods. This is one type of messiness, but one only. I
 think you are referring to a related but different messiness: not
 knowing what kind of environment your AI is dealing with. Since we
 don't know which kinds of models will fit best with the world, we
 should (1) trust our intuitions to some extent, and (2) try things and
 see how well they work...
 Mathematics and mathematical proof is a very important tool...
 Mine is a system built out of somewhat smart pieces,
 cooperating to build somewhat smarter pieces, and so on. Each piece
 has provable smarts.
 
 Mathematics can be extended to include new kinds of relations and systems.
 One of the problems I have had with AI-probability buffs is that there are
 other ways to deal with knowledge that is only partially understood and this
 kind of complexity can be extended to measurable quantities as well.  Notice
 that economics is not just probability.  There are measurable quantities in
 economics that are not based solely on the economics of money.

 We cannot make perfect decisions.  However, we can often make fairly good
 decisions even when based on partial knowledge.  A conclusion, however,
 should not be taken as a reliable rule unless it has withstood numerous
 tests.  These empirical tests of a conclusion usually cause it to be
 modified.  Even a good conclusion will typically be modified by conditional
 variations after being extensively tested.  That is the nature of expertise.

 Our conclusions are often only approximations, but they can contain
 unarticulated links to other possibilities that may indicate other ways of
 looking at the data or conditional variations to the base conclusion.

 Jim Bromer



 




Re: [agi] Approximations of Knowledge

2008-06-22 Thread Steve Richfield
Jim,

On 6/22/08, Jim Bromer [EMAIL PROTECTED] wrote:


  A compiler may be a useful tool to use in an advanced AI program (just as
 we all use compilers in our programming), but I don't feel that a compiler
 is a good basis for or a good metaphor for advanced AI.


A compiler is just another complicated computer program. The sorts of
methods I described are applicable to ALL complicated programs. I know of no
exceptions.

   --
 Steve wrote:
 The more complex the software, the better the design must be, and the more
 protected the execution must be. You can NEVER anticipate everything that
 might go into a program, so they must fail ever so softly.

  Much of what I have been challenging others on this forum for came out of
 the analysis and design of Dr. Eliza. The real world definitely has some
 interesting structure, e.g. the figure 6 shape of cause-and-effect chains,
 and that problems are a phenomenon that exists behind people's eyeballs and
 NOT otherwise in the real world. Ignoring such things and diving in and
 hoping that machine intelligence will resolve all (as many/most here seem to
 believe) IMHO is a rookie error that leads nowhere useful.
 Steve Richfield
 ---

 I don't think that most people in this group think that machine
 intelligence will resolve all the remaining problems in designing artificial
 intelligence, although I have talked to people who feel that way, and the
 lack of discussion about resolving some of the complexity issues does seem
 curious to me.


I simply attribute this to rookie error - but many of the people on this
forum are definitely NOT rookies. Hmmm.

Where are they coming from?  I don't know.  I think most of the people
 feel that once they get their basic programs working, that they will be able
 to figure out the rest on the fly.  This method hasn't worked yet, but as I
 mentioned I do think it has something to do with the difficulty of writing
 complicated computer programs. I know that you are one of the outspoken
 critics of faith-based programming,


YES - and you said it even better than I have!

   so at least there is some consistency in your comments.  I mention this
  because I (seriously) believe that the Lord may have indicated that my
 algorithm to solve the logical satisfiability problem will work, and if this
 is true, then that may mean that the algorithm may help resolve some lesser
 logical complexity problems.


Most of my working career has been as a genuine consultant (and not just an
unemployed programmer). I am typically hired by a major investor. My
specialty is resurrecting projects that are in technological trouble. At the
heart of the most troubled projects, I typically find either a born-again
Christian or a PhD Chemist. These people make the same bad decisions from
faith. The Christian's faith is that God wouldn't lead them SO astray, so
abandoning the project would in effect be abandoning their faith in God -
which of course leads straight to Hell. The Chemist has heard all of the
stories of perseverance leading to breakthrough discoveries, and if you KNOW
that the solution is there just waiting to be found, then just keep on
plugging away. These both lead to projects that stumble on and on long after
any sane person would have found another better way. Christians tend to make
good programmers, but really awful project managers.


Although we cannot use pure logic to represent knowable knowledge, I
 can use logic to represent theory-like relations between references to
 knowable components of knowledge.  (By the way, please note that I did not
 claim that I presently have a polynomial time solution to SAT, and I did not
 say that I was absolutely certain that God pronounced my SAT algorithm to be
 workable.


Are you waiting for me to make such a pronouncement?!

   I have carefully qualified my statements about this.  I would also
 suggest that you think about the fact that we have to use different kinds of
 reasoning with different kinds of questions.  Regardless of your own
 beliefs, the topic about the necessity of using different kinds of reasoning
 for different kinds of question is very relevant to discussions about
 advanced AI.)

 What do you mean by the figure 6 shape of cause-and-effect chains.  It must
 refer to some kind of feedback-like effect.


EVERYTHING works by cause and effect - even God's work, because he is
responding to what he sees, and therefore HE is but another link. Where
things are dynamically changing, there is little opportunity to run over to
your computer and inquire about what to do about things you don't like.
However, where things appear to be both stable and undesirable, there is
probably a looped cause-and-effect chain that is at least momentarily
running in a circle. Of course, there must have been a causal
cause-and-effect chain that led to this loop, so drawing the root cause at
the top, a chain that bends do the 

Re: [agi] Approximations of Knowledge

2008-06-22 Thread Mike Tintner
Steve:Most of my working career has been as a genuine consultant (and not just 
an unemployed programmer). I am typically hired by a major investor. My 
specialty is resurrecting projects that are in technological trouble. At the 
heart of the most troubled projects, I typically find either a born-again 
Christian or a PhD Chemist. These people make the same bad decisions from 
faith. The Christian's faith is that God wouldn't lead them SO astray, so 
abandoning the project would in effect be abandoning their faith in God - which 
of course leads straight to Hell. The Chemist has heard all of the stories of 
perseverance leading to breakthrough discoveries, and if you KNOW that the 
solution is there just waiting to be found, then just keep on plugging away. 
These both lead to projects that stumble on and on long after any sane person 
would have found another better way. Christians tend to make good programmers, 
but really awful project managers.

V. interesting. The thing that amazes me  -  I don't know whether this relates 
to your experience - is that so many AGI-ers don't seem to realise that if 
you're going to commit to a creative project, you must have at least one big, 
central creative idea to start with. Especially if investors are to be involved.

I find the pathologies of how would-be creatives fail to see this fascinating 
- you have possible examples above. Another obvious example is how many people 
think that they are being creative simply by going into a new area, even though 
they have no real new ideas or approaches to it.




Re: [agi] Approximations of Knowledge

2008-06-22 Thread Richard Loosemore


Abram

I am pressed for time right now, but just to let you know that, now that 
I am aware of your post, I will reply soon.  I think that many of your 
concerns are a result of seeing a different message in the paper than 
the one I intended.



Richard Loosemore



Abram Demski wrote:

To be honest, I am not completely satisfied with my conclusion on the
post you refer to. I'm not so sure now that the fundamental split
between logical/messy methods should occur at the line between perfect
and approximate methods. This is one type of messiness, but one only. I
think you are referring to a related but different messiness: not
knowing what kind of environment your AI is dealing with. Since we
don't know which kinds of models will fit best with the world, we
should (1) trust our intuitions to some extent, and (2) try things and
see how well they work. This is as Loosemore suggests.

On the other hand, I do not want to agree with Loosemore too strongly.
Mathematics and mathematical proof is a very important tool, and I
feel like he wants to reject it. His image of an AGI seems to be a
system built up out of totally dumb pieces, with intelligence emerging
unexpectedly. Mine is a system built out of somewhat smart pieces,
cooperating to build somewhat smarter pieces, and so on. Each piece
has provable smarts.

On Sat, Jun 21, 2008 at 6:54 AM, Jim Bromer [EMAIL PROTECTED] wrote:

 I just read Abram Demski's comments about Loosemore's "Complex Systems,
 Artificial Intelligence and Theoretical Psychology" at
http://dragonlogic-ai.blogspot.com/2008/03/i-recently-read-article-called-complex.html

I thought Abram's comments were interesting.  I just wanted to make a few
criticisms. One is that a logical or rational approach to AI does not
 necessarily mean that it would be a fully constrained logical-mathematical
method.  My point of view is that if you use a logical or a rational method
with an unconstrained inductive system (open and not monotonic) then the
logical system will, for any likely use, act like a rational-non-rational
system no matter what you do.  So when, I for example, start thinking about
whether or not I will be able to use my SAT system (logical satisfiability)
for an AGI program, I am not thinking of an implementation of a pure
Aristotelian-Boolean system of knowledge.  The system I am currently
considering would use logic to study theories and theory-like relations that
refer to concepts about the natural universe and the universe of thought,
but without the expectation that those theories could ever constitute a
sound strictly logical or rational model of everything.  Such ideas are so
beyond the pale that I do not even consider the possibility to be worthy of
effort.  No one in his right mind would seriously think that he could write
a computer program that could explain everything perfectly without error.
 If anyone seriously talked like that I would take it as an indication of some
significant psychological problem.



I also take it as a given that AI would suffer from the problem of
 computational irreducibility if its design goals were to completely
comprehend all complexity using only logical methods in the strictest sense.
However, many complex ideas may be simplified and these simplifications can
be used wisely in specific circumstances.  My belief is that many
interrelated layers of simplification, if they are used insightfully, can
effectively represent complex ideas that may not be completely understood,
just as we use insightful simplifications while trying to discuss something
 that is not completely understood, like intelligence.  My problem with
developing an AI program is not that I cannot figure out how to create
complex systems of  insightful simplifications, but that I do not know how
to develop a computer program capable of sufficient complexity to handle the
load that the system would produce.  So while I agree with Demski's
 conclusion that "there is a way to salvage Loosemore's position,
 ...[through] shortcutting an irreducible computation by compromising,
 allowing the system to produce less-than-perfect results," and "...as we
 tackle harder problems, the methods must become increasingly approximate," I
do not agree that the contemporary problem is with logic or with the
complexity of human knowledge. I feel that the major problem I have is that
writing a really really complicated computer program is really really
difficult.



The problem I have with people who talk about ANNs or probability nets as if
their paradigm of choice were the inevitable solution to complexity is that
they never discuss how their approach might actually handle complexity. Most
advocates of ANNs or probability deal with the problem of complexity as if
it were a problem that either does not exist or has already been solved by
whatever tired paradigm they are advocating.  I don't get that.



The major problem I have is that writing a really really complicated
computer program is really really difficult.  

Re: [agi] Approximations of Knowledge

2008-06-22 Thread J. Andrew Rogers


On Jun 22, 2008, at 1:37 PM, Steve Richfield wrote:
At the heart of the most troubled projects, I typically find either  
a born-again Christian or a PhD Chemist. These people make the same  
bad decisions from faith. The Christian's faith is that God wouldn't  
lead them SO astray, so abandoning the project would in effect be  
abandoning their faith in God - which of course leads straight to  
Hell. The Chemist has heard all of the stories of perseverance  
leading to breakthrough discoveries, and if you KNOW that the  
solution is there just waiting to be found, then just keep on  
plugging away. These both lead to projects that stumble on and on  
long after any sane person would have found another better way.  
Christians tend to make good programmers, but really awful project  
managers.



Somewhere in the world, there is a PhD chemist and a born-again  
Christian on another mailing list ...the project had hit a serious  
snag, and so the investors brought in a consultant that would explain  
why the project was broken by defectively reasoning about dubious  
generalizations he pulled out of his ass...



J. Andrew Rogers





Re: [agi] Approximations of Knowledge

2008-06-22 Thread Jim Bromer
Abram,
I did not group you with the probability buffs.  One of the errors I feel 
writers make when their field is controversial is that they begin representing 
their own opinions from the vantage of countering critics.  Unfortunately, I am 
one of those writers (or perhaps I am just projecting).  But my comment about 
the probability buffs wasn't directed toward you; I was just using it as an 
exemplar (of something or another).

Your comments seem to make sense to me although I don't know where you are 
heading.  You said: 
"what should be hoped for is convergence to (nearly) correct models of (small 
parts of) the universe. So I suppose that rather than asking for meaning in a 
fuzzy logic, I should be asking for clear accounts of convergence 
properties..."

When you have to find a way to tie components of knowledge together, 
you typically have to achieve another kind of convergence.  Even if these 
'components' of knowledge are reliable, they usually cannot be converged easily 
due to the complexity that their interrelations with other kinds of knowledge 
(other 'components' of knowledge) will cause.

To follow up on what I previously said, if my logic program works it will mean 
that I can combine and test logical formulas of up to a few hundred distinct 
variables and find satisfiable values for these combinations in a relatively 
short period of time.  I think this will be an important method to test whether 
AI can be advanced by advancements in handling complexity even though some 
people do not feel that logical methods are appropriate to use on multiple 
source complexity.  As you seem to appreciate, logic can still be brought to 
the field even though it is not a purely logical game that is to be played.
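
For concreteness, here is a minimal sketch of the kind of satisfiability check
being discussed. It is not Jim's SAT system, which is not described in this
thread: the clause encoding and the naive backtracking search are illustrative
assumptions only, and a solver meant for formulas of a few hundred variables
would need real DPLL/CDCL machinery rather than this brute-force recursion.

# Clauses are in CNF: each clause is a list of nonzero ints, negative = negated.
def solve(clauses, assignment=None):
    """Return a satisfying assignment (dict: var -> bool) or None."""
    if assignment is None:
        assignment = {}
    simplified = []
    for clause in clauses:
        remaining = []
        satisfied = False
        for lit in clause:
            var, want = abs(lit), lit > 0
            if var in assignment:
                if assignment[var] == want:
                    satisfied = True        # clause already true, drop it
                    break
            else:
                remaining.append(lit)
        if satisfied:
            continue
        if not remaining:                   # empty clause: contradiction
            return None
        simplified.append(remaining)
    if not simplified:                      # every clause is satisfied
        return assignment
    var = abs(simplified[0][0])             # branch on an unassigned variable
    for value in (True, False):
        result = solve(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(solve([[1, -2], [2, 3], [-1, -3]]))   # e.g. {1: True, 2: True, 3: False}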

When I begin to develop some simple theories about a subject matter, I will 
typically create hundreds of minor variations concerning those theories over a 
period of time.  I cannot hold all those variations of the conjecture in 
consciousness at any one moment, but I do feel that they can come to mind in 
response to a set of conditions for which that particular set of variations was 
created.  So while a simple logical theory (about some subject) may be 
expressible with only a few terms, when you examine all of the possible 
variations that can be brought into conscious consideration in response to a 
particular set of stimuli, I think you may find that the theories could be more 
accurately expressed using hundreds of distinct logical values.  

If this conjecture of mine turns out to be true, and if I can actually get my 
new logical methods to work, then I believe that this new range of logical 
methods may show whether advancements in complexity can make a difference to AI 
even if its application does not immediately result in a human level of 
intelligence.

Jim Bromer


- Original Message 
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, June 22, 2008 4:38:02 PM
Subject: Re: [agi] Approximations of Knowledge

Well, since you found my blog, you probably are grouping me somewhat
with the probability buffs. I have stated that I will not be
interested in any other fuzzy logic unless it is accompanied by a
careful account of the meaning of the numbers.

You have stated that it is unrealistic to expect a logical model to
reflect the world perfectly. The intuition behind this seems clear.
Instead, what should be hoped for is convergence to (nearly) correct
models of (small parts of) the universe. So I suppose that rather than
asking for meaning in a fuzzy logic, I should be asking for clear
accounts of convergence properties... but my intuition says that from
clear meaning, everything else follows.



  




Re: [agi] Approximations of Knowledge

2008-06-21 Thread Steve Richfield
Jim,

On 6/21/08, Jim Bromer [EMAIL PROTECTED] wrote:

   The major problem I have is that writing a really really complicated
 computer program is really really difficult.

The ONLY rational approach to this (that I know of) is to construct an
engine that develops and applies machine knowledge, wisdom, or whatever,
and NOT write code yourself that actually deals with articles of
knowledge/wisdom. That engine itself will still be a bit complex, so you
must write it in Visual Basic or .NET, which provide a protected execution
environment, and NOT in C/C++, which makes it ever so easy to
inadvertently hide really nasty bugs.

REALLY complex systems may require multi-level interpreters, where a
low-level interpreter provides a pseudo-machine on which to program a really
smart high-level interpreter, on which you program your AGI. In ~1970 I
wrote an ALGOL/FORTRAN/BASIC compiler that ran in just 16K bytes this way.
At the bottom was a pseudo-computer whose primitives were fundamental to
compiling. That pseudo-machine was then fed a program to read BNF and make
compilers, which was then fed a BNF description of my compiler, with the
output being my compiler in pseudo-machine code. One feature of this
approach is that for anything to work, everything had to work, so once past
initial debugging, it worked perfectly! Contrast this with modern methods
that consume megabytes and never work quite right.
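
As an illustration of the layering idea only (the actual 1970 pseudo-machine's
instruction set is not given here; the opcodes below are invented), a toy
stack-based pseudo-machine might look like this, with the next layer up written
as data fed to it rather than as code in the host language:

# Toy stack-based pseudo-machine; a "program" is just a list of
# (opcode, argument) pairs, so higher layers are data, not host code.
def run(program):
    """Execute a pseudo-machine program and return the final stack."""
    stack, pc = [], 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "jump_if_zero":          # crude conditional branch
            if stack.pop() == 0:
                pc = arg
                continue
        elif op == "halt":
            break
        pc += 1
    return stack

# (2 + 3) * 4, expressed for the pseudo-machine rather than for Python
print(run([("push", 2), ("push", 3), ("add", None),
           ("push", 4), ("mul", None), ("halt", None)]))   # [20]

The point of the layering is that whatever sits above never touches the host
language directly; it only sees the small, easily protected instruction set.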

I wrote Dr. Eliza over the course of a year. I developed a daily workflow,
that started with answering my email while I woke up. Then came the most
creative work - module design. Then came programming, and finally came
debugging and testing. Obviously, you need a solid plan to start with to
complete such an effort. I spent another year developing my plan, an effort
that also involved going to computer conferences and bending the ear of
anyone who might have some applicable expertise. On a scale of complexity,
Dr. Eliza is MUCH simpler than many of the proposals being made here.
However, it does have one salient feature - it actually works in a
real-world useful way.

The more complex the software, the better the design must be, and the more
protected the execution must be. You can NEVER anticipate everything that
might go into a program, so it must fail ever so softly.
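
One possible reading of "failing ever so softly", offered as an assumption
rather than as a description of Dr. Eliza's internals: every knowledge module
is called through a wrapper that logs an unexpected failure and substitutes a
harmless default instead of taking the whole engine down.

import logging

def guarded(module, *args, default=None):
    """Call a knowledge module; on any error, log it and return a default."""
    try:
        return module(*args)
    except Exception:
        logging.exception("module %s failed; continuing with a default",
                          getattr(module, "__name__", repr(module)))
        return default

# A deliberately broken module: the engine keeps running anyway.
def flaky_parser(text):
    raise ValueError("unparseable input: " + text)

print(guarded(flaky_parser, "headache every morning", default=[]))   # []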

Much of what I have been challenging others on this forum about came out of the
analysis and design of Dr. Eliza. The real world definitely has some
interesting structure, e.g. the figure-6 shape of cause-and-effect chains,
and the fact that problems are a phenomenon that exists behind people's eyeballs and
NOT otherwise in the real world. Ignoring such things and diving in and
hoping that machine intelligence will resolve all (as many/most here seem to
believe) IMHO is a rookie error that leads nowhere useful.

Steve Richfield





Re: [agi] Approximations of Knowledge

2008-06-21 Thread Abram Demski
To be honest, I am not completely satisfied with my conclusion on the
post you refer to. I'm not so sure now that the fundamental split
between logical/messy methods should occur at the line between perfect
and approximate methods. This is one type of messiness, but only one. I
think you are referring to a related but different messiness: not
knowing what kind of environment your AI is dealing with. Since we
don't know which kinds of models will fit best with the world, we
should (1) trust our intuitions to some extent, and (2) try things and
see how well they work. This is as Loosemore suggests.

On the other hand, I do not want to agree with Loosemore too strongly.
Mathematics and mathematical proof is a very important tool, and I
feel like he wants to reject it. His image of an AGI seems to be a
system built up out of totally dumb pieces, with intelligence emerging
unexpectedly. Mine is a system built out of somewhat smart pieces,
cooperating to build somewhat smarter pieces, and so on. Each piece
has provable smarts.

On Sat, Jun 21, 2008 at 6:54 AM, Jim Bromer [EMAIL PROTECTED] wrote:
 I just read Abram Demski's comments about Loosemore's "Complex Systems,
 Artificial Intelligence and Theoretical Psychology" at
 http://dragonlogic-ai.blogspot.com/2008/03/i-recently-read-article-called-complex.html

 I thought Abram's comments were interesting.  I just wanted to make a few
 criticisms. One is that a logical or rational approach to AI does not
 necessarily mean that it would be a fully constrained logical - mathematical
 method.  My point of view is that if you use a logical or a rational method
 with an unconstrained inductive system (open and not monotonic) then the
 logical system will, for any likely use, act like a rational-non-rational
 system no matter what you do.  So when I, for example, start thinking about
 whether or not I will be able to use my SAT system (logical satisfiability)
 for an AGI program, I am not thinking of an implementation of a pure
 Aristotelian-Boolean system of knowledge.  The system I am currently
 considering would use logic to study theories and theory-like relations that
 refer to concepts about the natural universe and the universe of thought,
 but without the expectation that those theories could ever constitute a
 sound strictly logical or rational model of everything.  Such ideas are so
 beyond the pale that I do not even consider the possibility to be worthy of
 effort.  No one in his right mind would seriously think that he could write
 a computer program that could explain everything perfectly without error.
 If anyone seriously talked like that I would take it as an indication of some
 significant psychological problem.



 I also take it as a given that AI would suffer from the problem of
 computational irreducibility if its design goals were to completely
 comprehend all complexity using only logical methods in the strictest sense.
 However, many complex ideas may be simplified and these simplifications can
 be used wisely in specific circumstances.  My belief is that many
 interrelated layers of simplification, if they are used insightfully, can
 effectively represent complex ideas that may not be completely understood,
 just as we use insightful simplifications while trying to discuss something
 that is completely understood, like intelligence.  My problem with
 developing an AI program is not that I cannot figure out how to create
 complex systems of  insightful simplifications, but that I do not know how
 to develop a computer program capable of sufficient complexity to handle the
 load that the system would produce.  So while I agree with Demski's
 conclusion that, there is a way to salvage Loosemore's position,
 ...[through] shortcutting an irreducible computation by compromising,
 allowing the system to produce less-than-perfect results, and, ...as we
 tackle harder problems, the methods must become increasingly approximate, I
 do not agree that the contemporary problem is with logic or with the
 complexity of human knowledge. I feel that the major problem I have is that
 writing a really really complicated computer program is really really
 difficult.



 The problem I have with people who talk about ANNs or probability nets as if
 their paradigm of choice were the inevitable solution to complexity is that
 they never discuss how their approach might actually handle complexity. Most
 advocates of ANNs or probability deal with the problem of complexity as if
 it were a problem that either does not exist or has already been solved by
 whatever tired paradigm they are advocating.  I don't get that.



 The major problem I have is that writing a really really complicated
 computer program is really really difficult.  But perhaps Abram's idea could
 be useful here.  As the program has to deal with more complicated
 collections of simple insights that concern some hard subject matter, it
 could tend to rely more on approximations to manage those complexes of

Re: [agi] Approximations of Knowledge

2008-06-21 Thread Steve Richfield
Abram,

A useful midpoint between views is to decide what knowledge must distill
down to in order to be able to relate it together and do whatever you want to do. I
did this with Dr. Eliza and realized that I had to have a column in my DB
that contained what people typically say to indicate the presence of various
symptoms (of various cause-and-effect chain links). I now realize that
ignorance of the operation of various processes itself is also a condition
with its own symptoms, each with their own common expressions of
ignorance. OK, so just where was my column going to come from? This
information is NOT on the Internet, Wikipedia, etc., yet any expert can
rattle this information off in a heartbeat. The only obvious answer was to
have experts hand code this information. I am STILL listening to anyone who
claims to have another/better way, but I have yet to hear ANY other
functional proposal. Of course, this simple realization dooms all of the
several efforts now underway to mine the Internet and Wikipedia for
knowledge from which to solve problems, yet no one seems to be interested in
this simple gotcha, while these doomed efforts continue.
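
A minimal sketch of the kind of hand-coded table being described (the real
Dr. Eliza schema is not shown in this thread; the symptom names and phrases
below are invented for illustration): experts supply the typical expressions,
and the program only matches them against what the user says.

# Hand-coded by experts: typical expressions keyed to the symptom (i.e. the
# cause-and-effect chain link) they indicate.
SYMPTOM_PHRASES = {
    "morning headaches": ["headache when i wake up", "head hurts every morning"],
    "poor sleep":        ["can't sleep", "wake up several times a night"],
}

def detect_symptoms(utterance, table=SYMPTOM_PHRASES):
    """Return the symptoms whose typical expressions occur in the utterance."""
    text = utterance.lower()
    return [symptom for symptom, phrases in table.items()
            if any(phrase in text for phrase in phrases)]

print(detect_symptoms("I get a headache when I wake up and I can't sleep at night"))
# ['morning headaches', 'poor sleep']

The matching here is deliberately dumb; the expertise lives in the table, which
is exactly why it has to be supplied by people rather than mined automatically.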

I believe that ALL of the ongoing disputes here on this forum are born of a
lack of analysis. While the contents of a knowledge base may be very complex
and interrelated, the structure of that DB should be relatively simple. This
discussion should start with a proposal for structure, and continue as the
flaws in that proposal are each identified and addressed.

Note in passing that the value of any problem solving system lies in its
ability to solve problems with an absolute minimum of information. Hence,
systems that require the most information are worth the least, and systems
that require all information are completely worthless. Dr. Eliza was
designed to operate right at the (currently believed to be) absolute
minimum.

I completely agree with others here that Dr. Eliza is NOT an AGI as
currently envisioned. However, for many of the projected problem-solving
functions of a future AGI, it appears to be absolutely unbeatable. People
need to either target other functionality for a *useful* future AGI, or else
develop designs that won't be predictably inferior to Dr. Eliza. For this,
they would do well to fully understand the operation of Dr. Eliza, which
should be no problem since it is conceptually pretty simple. Most of the
code goes to support speech I/O, the USENET interface, etc., and NOT its
core problem solving ability.

Steve Richfield
===
On 6/21/08, Abram Demski [EMAIL PROTECTED] wrote:

 To be honest, I am not completely satisfied with my conclusion on the
 post you refer to. I'm not so sure now that the fundamental split
 between logical/messy methods should occur at the line between perfect
 and approximate methods. This is one type of messiness, but only one. I
 think you are referring to a related but different messiness: not
 knowing what kind of environment your AI is dealing with. Since we
 don't know which kinds of models will fit best with the world, we
 should (1) trust our intuitions to some extent, and (2) try things and
 see how well they work. This is as Loosemore suggests.

 On the other hand, I do not want to agree with Loosemore too strongly.
 Mathematics and mathematical proof is a very important tool, and I
 feel like he wants to reject it. His image of an AGI seems to be a
 system built up out of totally dumb pieces, with intelligence emerging
 unexpectedly. Mine is a system built out of somewhat smart pieces,
 cooperating to build somewhat smarter pieces, and so on. Each piece
 has provable smarts.

 On Sat, Jun 21, 2008 at 6:54 AM, Jim Bromer [EMAIL PROTECTED] wrote:
  I just read Abram Demski's comments about Loosemore's "Complex Systems,
  Artificial Intelligence and Theoretical Psychology" at
 
 http://dragonlogic-ai.blogspot.com/2008/03/i-recently-read-article-called-complex.html
 
  I thought Abram's comments were interesting.  I just wanted to make a few
  criticisms. One is that a logical or rational approach to AI does not
  necessarily mean that it would be a fully constrained logical -
 mathematical
  method.  My point of view is that if you use a logical or a rational
 method
  with an unconstrained inductive system (open and not monotonic) then the
  logical system will, for any likely use, act like a rational-non-rational
  system no matter what you do.  So when I, for example, start thinking
 about
  whether or not I will be able to use my SAT system (logical
 satisfiability)
  for an AGI program, I am not thinking of an implementation of a pure
  Aristotelian-Boolean system of knowledge.  The system I am currently
  considering would use logic to study theories and theory-like relations
 that
  refer to concepts about the natural universe and the universe of thought,
  but without the expectation that those theories could ever constitute a
  sound strictly logical or rational model of everything.  Such ideas are
 so