Re: Location of goal/purpose was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-17 Thread Terren Suydam

Will,

--- On Tue, 7/15/08, William Pearson [EMAIL PROTECTED] wrote:

 And I would also say of evolved systems. My fingers' purpose
 could
 equally well be said to be for picking ticks out of the
 hair of my kin
 or for touch typing. E.g. why do I keep my fingernails
 short, so that
 they do not impede my typing. The purpose of gut bacteria
 is to help
 me digest my food. The purpose of part of my brain is to do
 differentiation of functions, because I have .

Actually, I agree with that, good point.  No matter what kind of system, 
designed or evolved, it has no intrinsic purpose, only a purpose we interpret.  
Purpose, in other words, is a property of the observer, not the observed.
 
 If you want to think of a good analogy for how emergent I
 want the
 system to be, imagine someone came along to one of your
 life
 simulations and interfered with the simulation to give some
 more food
 to some of the entities that he liked the look of. This
 wouldn't be
 anything so crude as to specify the fitness or artificial
 breeding,
 but it would tilt the scales in the favour of entities that
 he liked
 all else being equal. Would this invalidate the whole
 simulation
 because he interfered and brought some of his purpose into
 it? If so, I
 don't see why.

No, it certainly wouldn't invalidate it. That is in fact what I would do to 
nudge the simulation along: provide it with incentives for developing in 
complexity, add richness to the environment, create problems to be solved. 
 
  So unless you believe that life was designed by God
 (in which case the purpose of life would lie in the mind of
 God), the purpose of the system is indeed intrinsic to the
 system itself.
 
 I think I would still say it didn't have a purpose. If
 I get your meaning right.
 
Will

Yes, that's what I would say (now). Here's the clearest way I can put it: 
purpose is a property of the observer - we interpret purpose in an observed 
system, and different observers can have different interpretations. However, we 
can sometimes talk about purpose in an objective sense in the observed system, 
*as if* it had an objective purpose, but only to the extent that we can relate 
it to the observed goals and behavior of the system (which, ultimately, are 
also interpreted). 

Which is another way of showing that when we examine concepts like goals, 
purpose, and behavior, we ultimately come back to the fact that these are 
mental constructions. They are our maps, not the territory. 

Terren


  




Re: Location of goal/purpose was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-15 Thread William Pearson
2008/7/14 Terren Suydam [EMAIL PROTECTED]:

 Will,

 --- On Fri, 7/11/08, William Pearson [EMAIL PROTECTED] wrote:
 Purpose and goal are not intrinsic to systems.

 I agree this is true with designed systems.

And I would also say of evolved systems. My fingers' purpose could
equally well be said to be for picking ticks out of the hair of my kin
or for touch typing. E.g., why do I keep my fingernails short? So that
they do not impede my typing. The purpose of gut bacteria is to help
me digest my food. The purpose of part of my brain is to do
differentiation of functions, because I have .

 The designed system is ultimately an extension of the designer's mind, 
 wherein lies the purpose.

Oddly enough, that is what I want the system to be. Rather, an extension
of my brain.

 Of course, as you note, the system in question can serve multiple purposes,
 each of which lies in the mind of some other observer. The same is true of
 your system, even though its behavior may evolve. Your button is what tethers
 its purpose to your mind.


 On the other hand, we can create simulations in which purpose is truly 
 emergent. To support emergence our design must support large-scale, (global) 
 interactions of locally specified entities. Conway's Game of Life is an 
 example of such a system - what is its purpose?

To provide an interesting system for researchers studying cellular
automata? ;) I think I can see your point: it has no practical purpose
as such, just a research purpose.

 It certainly wasn't specified.

And neither am I specifying the purpose of mine! I'm quite happy to
hook up the button to something I press when I feel like it. I could
decide the purpose of the system was to learn and be good at
backgammon one day, in which case my presses would reflect that, or I
could decide the purpose of the system was to search the web.

If you want to think of a good analogy for how emergent I want the
system to be, imagine someone came along to one of your life
simulations and interfered with the simulation to give some more food
to some of the entities that he liked the look of. This wouldn't be
anything so crude as specifying the fitness or artificial breeding,
but it would tilt the scales in favour of entities that he liked,
all else being equal. Would this invalidate the whole simulation
because he interfered and brought some of his purpose into it? If so, I
don't see why.

 The simplest answer is probably that it has none. But what if our design of 
 the local level was a little more interesting, such that at the global level, 
 we would eventually see self-sustaining entities that reproduced, competed 
 for resources, evolved, etc, and became more complex over a large number of 
 iterations?

Then the system itself still wouldn't have a practical purpose. For a
system Y to have a purpose, you have to be able to say that part X is
the way it is so that Y can perform its function. Internal state corresponding
to the entities might be said to have purpose, but not the system as a
whole.

 Whether that's possible is another matter, but assuming for the moment it 
 was, the purpose of that system could be defined in roughly the same way as 
 trying to define the purpose of life itself.

We have to be careful here.  What meaning of the word life are you using?

1) The biosphere + evolution
2) An individual's existence.

The first has no purpose. You can never look at the biosphere and
figure out what bits are for what in the grander scheme of things, or
ask yourself what mutations are likely to be thrown up to better
achieve its goal. That we have some self-regulation on the Gaian scale
is purely anthropic; biospheres without it would likely have driven
themselves to a state unable to support life. An individual entity
has a purpose, though. So to that extent the purposeless can create
the purposeful.

 So unless you believe that life was designed by God (in which case the 
 purpose of life would lie in the mind of God), the purpose of the system is 
 indeed intrinsic to the system itself.


I think I would still say it didn't have a purpose. If I get your meaning right.

   Will




Re: Location of goal/purpose was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-14 Thread Terren Suydam

Will,

--- On Fri, 7/11/08, William Pearson [EMAIL PROTECTED] wrote:
 Purpose and goal are not intrinsic to systems. 

I agree this is true with designed systems. The designed system is ultimately 
an extension of the designer's mind, wherein lies the purpose. Of course, as 
you note, the system in question can serve multiple purposes, each of which 
lies in the mind of some other observer. The same is true of your system, even 
though its behavior may evolve. Your button is what tethers its purpose to your 
mind.

On the other hand, we can create simulations in which purpose is truly 
emergent. To support emergence, our design must support large-scale (global) 
interactions of locally specified entities. Conway's Game of Life is an example 
of such a system - what is its purpose? It certainly wasn't specified. The 
simplest answer is probably that it has none. But what if our design of the 
local level was a little more interesting, such that at the global level, we 
would eventually see self-sustaining entities that reproduced, competed for 
resources, evolved, etc, and became more complex over a large number of 
iterations?  
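
(For concreteness, a minimal Python sketch of the kind of locally specified
system I mean: a standard Game of Life on a small wrapped grid. Every rule is
local, and whatever gliders or oscillators emerge were never specified
anywhere in the design.)

# Conway's Game of Life on a small toroidal grid: purely local rules,
# yet structured, self-propagating patterns emerge that nobody specified.
import random

SIZE = 20
grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]

def neighbours(g, r, c):
    # Count the eight neighbours, wrapping around the edges.
    return sum(g[(r + dr) % SIZE][(c + dc) % SIZE]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0))

def step(g):
    new = [[0] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            n = neighbours(g, r, c)
            # A live cell survives with 2 or 3 neighbours; a dead cell is born with exactly 3.
            new[r][c] = 1 if n == 3 or (g[r][c] == 1 and n == 2) else 0
    return new

for _ in range(100):
    grid = step(grid)
print(sum(map(sum, grid)), "cells alive after 100 steps")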

Whether that's possible is another matter, but assuming for the moment it was, 
the purpose of that system could be defined in roughly the same way as trying 
to define the purpose of life itself. So unless you believe that life was 
designed by God (in which case the purpose of life would lie in the mind of 
God), the purpose of the system is indeed intrinsic to the system itself.

Terren


  




Re: Formal proved code change vs experimental was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-09 Thread Steve Richfield
William,

On 7/7/08, William Pearson [EMAIL PROTECTED] wrote:

 2008/7/3 Steve Richfield [EMAIL PROTECTED]:
  William and Vladimir,
 
  IMHO this discussion is based entirely on the absence of any sort of
  interface spec. Such a spec is absolutely necessary for a large AGI
 project
  to ever succeed, and such a spec could (hopefully) be wrung out to at
 least
  avoid the worst of the potential traps.

 And if you want the interface to be upgradeable or alterable, what
 then? This conversation was based on the ability to change as much of
 the functional and learning parts of the systems as possible.


You should read the X12 (the original US version) or EDIFACT (the newer/better
European version) EDI (Electronic Data Interchange) spec. There are several
free downloadable EDIFACT descriptions on-line, but the X12 people want to
charge for EVERYTHING. This is the basis for most of the world's financial
systems. It is designed for smooth upgrading, even though some users on a
network do NOT have the latest spec or software. The specifics of various
presently defined message types aren't interesting in this context. However,
the way that they make highly complex networks gradually upgradable IS
interesting and I believe provides a usable roadmap for AGI development.
When looking at this, think of this as a prospective standard for RPC
(Remote Procedure Calls).
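
(This is not the X12/EDIFACT wire format itself, just a toy Python sketch of
the upgrade property I mean, with made-up segment names: messages carry a
version and named segments, and an older node processes the segments it
recognises and skips the rest, so the network can be upgraded piecemeal.)

# Toy sketch of a forward-compatible message envelope (not real X12/EDIFACT):
# a receiver handles the segments it knows and skips the rest, so old and new
# nodes can interoperate while the spec evolves.
import json

def make_message(version, segments):
    return json.dumps({"version": version, "segments": segments})

KNOWN_SEGMENTS = {"PAYMENT", "PARTY"}          # what this (older) node understands

def handle(raw):
    msg = json.loads(raw)
    for name, body in msg["segments"].items():
        if name in KNOWN_SEGMENTS:
            print("processing", name, body)
        else:
            print("skipping unknown segment", name)   # graceful degradation

# A newer sender adds a CURRENCY segment the old node has never seen.
handle(make_message("2.1", {"PAYMENT": {"amount": 100},
                            "PARTY": {"id": "ACME"},
                            "CURRENCY": {"code": "EUR"}}))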

Steve Richfield





Re: Formal proved code change vs experimental was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-07 Thread William Pearson
2008/7/3 Steve Richfield [EMAIL PROTECTED]:
 William and Vladimir,

 IMHO this discussion is based entirely on the absence of any sort of
 interface spec. Such a spec is absolutely necessary for a large AGI project
 to ever succeed, and such a spec could (hopefully) be wrung out to at least
 avoid the worst of the potential traps.

And if you want the interface to be upgradeable or alterable, what
then? This conversation was based on the ability to change as much of
the functional and learning parts of the systems as possible.

 Will Pearson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-04 Thread William Pearson
Terren,

 Remember when I said that a purpose is not the same thing
 as a goal?
 The purpose that the system might be said to have embedded
 is
 attempting to maximise a certain signal. This purpose
 presupposes no
 ontology. The fact that this signal is attached to a human
 means the
 system as a whole might form the goal to try and please the
 human. Or
 depending on what the human does it might develop other
 goals. Goals
 are not the same as purposes. Goals require the intentional
 stance,
 purposes the design.

 To the extent that purpose is not related to goals, it is a meaningless term. 
 In what possible sense is it worthwhile to talk about purpose if it doesn't 
 somehow impact what an intelligence actually does?

Does the following make sense? The purpose embedded within the system
will be to try to keep the system from decreasing in its ability to receive
some abstract number.

The way I connect up the abstract number to the real world will then
govern what goals the system will likely develop (along with the
initial programming). That is, there is some connection, but it is
tenuous, and I don't have to specify an ontology.
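
(A rough Python sketch of the separation I mean; the names are made up for
illustration. The system only ever sees an abstract credit number, and the
choice of what that number is wired to lives entirely outside it.)

# Sketch: the system tries not to lose an abstract credit signal; the wiring
# of that signal to the world (a human button, a backgammon score, ...) is
# outside the system and carries no ontology into it.
class CreditDrivenSystem:
    def __init__(self, signal_source):
        self.signal_source = signal_source   # any callable returning a number
        self.credit = 0.0

    def step(self):
        # The only built-in "purpose": keep the credit from decreasing.
        self.credit += self.signal_source()

# Two different hookups, same system, no change to its internals:
def human_button():
    return 1.0        # pressed whenever the human feels like it

def backgammon_reward():
    return 0.5        # e.g. points per game won

agent = CreditDrivenSystem(human_button)
agent.step()
print(agent.credit)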

  Will




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-04 Thread Terren Suydam


Will,

--- On Fri, 7/4/08, William Pearson [EMAIL PROTECTED] wrote:
 Does the following make sense? The purpose embedded within
 the system
 will be to try to keep the system from decreasing in its
 ability to receive
 some abstract number.

 The way I connect up the abstract number to the real world
 will then
 govern what goals the system will likely develop (along
 with the
 initial programming). That is, there is some connection, but
 it is
 tenuous, and I don't have to specify an ontology.
 
   Will

I don't think I follow, but if I do, you're saying that the purpose of your 
system determines the goals of the system, which sounds like it's just 
semantics...

Terren


  




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
2008/7/3 Terren Suydam [EMAIL PROTECTED]:

 --- On Wed, 7/2/08, William Pearson [EMAIL PROTECTED] wrote:
 Evolution! I'm not saying your way can't work, just
 saying why I short
 cut where I do. Note a thing has a purpose if it is useful
 to apply
 the design stance* to it. There are two things to
 differentiate
 between, having a purpose and having some feedback of a
 purpose built
 in to the system.

 I don't believe evolution has a purpose. See Hod Lipson's TED talk for an 
 intriguing experiment in which replication is an inevitable outcome for a 
 system of building blocks explicitly set up in a random fashion. In other 
 words, purpose is emergent and ultimately in the mind of the beholder.

 See this article for an interesting take that increasing complexity is a 
 property of our laws of thermodynamics for non-equilibrium systems:

 http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pbio.0050142&ct=1

 In other words, Darwinian evolution is a special case of a more basic kind of 
 selection based on the laws of physics. This would deprive evolution of any 
 notion of purpose.


Evolution doesn't have a purpose; it creates things with purpose,
where purpose means it is useful to apply the design stance to it,
e.g. to ask what an eye on a frog is for.

 It is the second I meant, I should have been more specific.
 That is to
 apply the intentional stance to something successfully, I
 think a
 sense of its own purpose is needed to be embedded in that
 entity (this
 may only be a very crude approximation to the purpose we
 might assign
 something looking from an evolution eye view).

 Specifying a system's goals is limiting in the sense that we don't force the 
 agent to construct its own goals based on it own constructions. In other 
 words, this is just a different way of creating an ontology. It narrows the 
 domain of applicability. That may be exactly what you want to do, but for AGI 
 researchers, it is a mistake.

Remember when I said that a purpose is not the same thing as a goal?
The purpose that the system might be said to have embedded is
attempting to maximise a certain signal. This purpose presupposes no
ontology. The fact that this signal is attached to a human means the
system as a whole might form the goal to try and please the human. Or
depending on what the human does it might develop other goals. Goals
are not the same as purposes. Goals require the intentional stance,
purposes the design.

 Also your way we will end up with entities that may not be
 useful to
 us, which I think of as a negative for a long costly
 research program.

  Will

 Usefulness, again, is in the eye of the beholder. What appears not useful 
 today may be absolutely critical to an evolved descendant. This is a popular 
 explanation for how diversity emerges in nature, that a virus or bacteria 
 does some kind of horizontal transfer of its genes into a host genome, and 
 that gene becomes the basis for a future adaptation.

 When William Burroughs said language is a virus, he may have been more 
 correct than he knew. :-]



Possibly, but it will be another huge research topic to actually talk
to the things that evolve in the artificial universe, as they will
share very little background knowledge or ontology with us. I wish you
luck and will be interested to see where you go, but the alife route is
just too slow and resource-intensive for my liking.

  Will




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 12:59 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
 On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote:
 They would get less credit from the human supervisor. Let me expand on
 what I meant about the economic competition. Let us say vmprogram A
 makes a copy of itself, called A', with some purposeful tweaks, trying
 to make itself more efficient.

 So, this process performs optimization, A has a goal that it tries to
 express in form of A'. What is the problem with the algorithm that A
 uses? If this algorithm is stupid (in a technical sense), A' is worse
 than A and we can detect that. But this means that in fact, A' doesn't
 do its job and all the search pressure comes from program B that ranks
 the performance of A or A'. This
 generate-blindly-or-even-stupidly-and-check is a very inefficient
 algorithm. If, on the other hand, A happens to be a good program, then
 A' has a good chance of being better than A, and anyway A has some
 understanding of what 'better' means, then what is the role of B? B
 adds almost no additional pressure, almost everything is done by A.

 How do you distribute the optimization pressure between generating
 programs (A) and checking programs (B)? Why do you need to do that at
 all, what is the benefit of generating and checking separately,
 compared to reliably generating from the same point (A alone)? If
 generation is not reliable enough, it probably won't be useful as
 optimization pressure anyway.


 The point of A and A' is that A', if better, may one day completely
 replace A. What is very good? Is 1 in 100 chances of making a mistake
 when generating its successor very good? If you want A' to be able to
 replace A, that is only 100 generations before you have made a bad
 mistake, and then where do you go? You have a bugged program and
 nothing to act as a watchdog.

 Also if A' is better than A at time t, there is no guarantee that
 it will stay that way. Changes in the environment might favour one
 optimisation over another. If they both do things well, but different
 things then both A and A' might survive in different niches.


 I suggest you read ( http://sl4.org/wiki/KnowabilityOfFAI )
 If your program is a faulty optimizer that can't pump the reliability
 out of its optimization, you are doomed. I assume you argue that you
 don't want to include B in A, because a descendant of A may start to
 fail unexpectedly.

Nope. I don't include B in A because if A' is faulty it can cause
problems to whatever is in the same vmprogram as it, by overwriting
memory locations. A' being a separate vmprogram means it is insulated
from B and A, and can only have limited impact on them.
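
(A small Python sketch of the insulation I have in mind; the class names are
hypothetical. Each vmprogram has private memory, so a buggy A' can only
scribble over its own state, and the only way it can touch A or B is through
an explicit, limited message channel.)

# Sketch of vmprogram insulation: private memory per program, message passing
# as the only cross-program channel, and crashes contained to the faulty one.
class VMProgram:
    def __init__(self, name, code):
        self.name = name
        self.memory = {}        # private; no other vmprogram can write here
        self.inbox = []         # the only way others can affect this program
        self.code = code        # callable(memory, inbox) -> list of (dest, msg)

class VM:
    def __init__(self):
        self.programs = {}

    def add(self, prog):
        self.programs[prog.name] = prog

    def step(self):
        for prog in list(self.programs.values()):
            try:
                outgoing = prog.code(prog.memory, prog.inbox)
            except Exception:
                outgoing = []            # a crash in A' stays inside A'
            prog.inbox = []
            for dest, msg in outgoing:
                if dest in self.programs:
                    self.programs[dest].inbox.append(msg)

A bugged A' can then be starved of resources or removed without ever having
had write access to A or B.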

I don't get what your obsession with having things all be in one
program is anyway. Why is that better? I'll read knowability of FAI
again, but I have read it before and I don't think it will enlighten
me. I'll come back to the rest of your email once I have done that.

  Will Pearson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread Vladimir Nesov
On Thu, Jul 3, 2008 at 10:45 AM, William Pearson [EMAIL PROTECTED] wrote:

 Nope. I don't include B in A because if A' is faulty it can cause
 problems to whatever is in the same vmprogram as it, by overwriting
 memory locations. A' being a separate vmprogram means it is insulated
 from the B and A, and can only have limited impact on them.

Why does it need to be THIS faulty? If there is a known method to
prevent such faultiness, it can be reliably implemented in A, so that
all its descendants keep it, unless they are fairly sure it's not
needed anymore or there is a better alternative.

 I don't get what your obsession with having things all be in one
 program is anyway. Why is that better? I'll read knowability of FAI
 again, but I have read it before and I don't think it will enlighten
 me. I'll come back to the rest of your email once I have done that.

It's not necessarily better, but I'm trying to make explicit in what
sense it is worse, that is, what the contribution of your framework
to the overall problem is, if virtually the same thing can be done
without it.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 10:45 AM, William Pearson [EMAIL PROTECTED] wrote:

 Nope. I don't include B in A because if A' is faulty it can cause
 problems to whatever is in the same vmprogram as it, by overwriting
 memory locations. A' being a separate vmprogram means it is insulated
 from the B and A, and can only have limited impact on them.

 Why does it need to be THIS faulty? If there is a known method to
 prevent such faultiness, it can be reliably implemented in A, so that
 all its descendants keep it, unless they are fairly sure it's not
 needed anymore or there is a better alternative.

Because it is dealing with powerful stuff: when it gets it wrong it
goes wrong powerfully. You could lock the experimental code away in a
sandbox inside A, but then it would be a separate program, just one
inside A, and it might not be able to interact with other programs in a way
that lets it do its job.

There are two grades of faultiness: frequency and severity. You cannot
predict the severity of faults of arbitrary programs (and accepting
arbitrary programs from the outside world is something I want the
system to be able to do, after vetting etc).


 I don't get what your obsession with having things all be in one
 program is anyway. Why is that better? I'll read knowability of FAI
 again, but I have read it before and I don't think it will enlighten
 me. I'll come back to the rest of your email once I have done that.

 It's not necessarily better, but I'm trying to make explicit in what
 sense it is worse, that is, what the contribution of your framework
 to the overall problem is, if virtually the same thing can be done
 without it.


I'm not sure why you see this distinction as being important though. I
call the vmprograms separate because they have some protection around
them, but you could see them as all one big program if you wanted. The
instructions don't care whether we call the whole set of operations a
program or not. This, from one point of view, is true anyway: at least while
it is being simulated, the whole VM is one program inside a larger
system.

  Will Pearson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread Vladimir Nesov
On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
 2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 10:45 AM, William Pearson [EMAIL PROTECTED] wrote:

 Nope. I don't include B in A because if A' is faulty it can cause
 problems to whatever is in the same vmprogram as it, by overwriting
 memory locations. A' being a separate vmprogram means it is insulated
 from the B and A, and can only have limited impact on them.

 Why does it need to be THIS faulty? If there is a known method to
 prevent such faultiness, it can be reliably implemented in A, so that
 all its descendants keep it, unless they are fairly sure it's not
 needed anymore or there is a better alternative.

 Because it is dealing with powerful stuff, when it gets it wrong it
 goes wrong powerfully. You could lock the experimental code away in a
 sand box inside A, but then it would be a separate program just one
 inside A, but it might not be able to interact with programs in a way
 that it can do its job.

 There are two grades of faultiness. frequency and severity. You cannot
 predict the severity of faults of arbitrary programs (and accepting
 arbitrary programs from the outside world is something I want the
 system to be able to do, after vetting etc).


You can't prove any interesting thing about an arbitrary program. It
can behave like a Friendly AI before February 25, 2317, and like a
Giant Cheesecake AI after that.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Formal proved code change vs experimental was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
Sorry about the long thread jack

2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
 Because it is dealing with powerful stuff, when it gets it wrong it
 goes wrong powerfully. You could lock the experimental code away in a
 sand box inside A, but then it would be a separate program just one
 inside A, but it might not be able to interact with programs in a way
 that it can do its job.

 There are two grades of faultiness. frequency and severity. You cannot
 predict the severity of faults of arbitrary programs (and accepting
 arbitrary programs from the outside world is something I want the
 system to be able to do, after vetting etc).


 You can't prove any interesting thing about an arbitrary program. It
 can behave like a Friendly AI before February 25, 2317, and like a
 Giant Cheesecake AI after that.

Whoever said you could? The whole system is designed around the
ability to take in or create arbitrary code, give it only the minimal
access to other programs that it can earn, and lock it out from that
access when it does something bad.

By arbitrary code I don't mean random, I mean stuff that has not
formally been proven to have the properties you want. Formal proof is
too high a burden to place on things that you want to win. You might
not have the right axioms to prove the changes you want are right.

Instead you can see the internals of the system as a form of
continuous experiment. B is always testing a property of A or A'; if
at any time it stops having the property that B looks for, then B flags
it as buggy.
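
(A toy Python sketch of that continuous checking; the property chosen is made
up. B proves nothing about A, it just watches an observable property and
flags A, or A', the moment the property stops holding.)

# Sketch of "continuous experiment" checking instead of formal proof.
def make_checker(prop):
    def check(program, observation):
        if not prop(observation):
            program["flagged_buggy"] = True
    return check

# Hypothetical property: A's answers must stay within a latency budget.
B = make_checker(lambda obs: obs["latency_ms"] < 50)

A = {"name": "A'", "flagged_buggy": False}
for obs in ({"latency_ms": 12}, {"latency_ms": 31}, {"latency_ms": 480}):
    B(A, obs)
print(A["flagged_buggy"])   # True after the third observation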

I know this doesn't have the properties you would look for in a
friendly AI set to dominate the world. But I think it is similar to
the way humans work, and will be as chaotic and hard to grok as our
neural structure. So it is about as likely as humans are to explode intelligently.

  Will Pearson




Re: Formal proved code change vs experimental was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread Steve Richfield
William and Vladimir,

IMHO this discussion is based entirely on the absence of any sort of
interface spec. Such a spec is absolutely necessary for a large AGI project
to ever succeed, and such a spec could (hopefully) be wrung out to at least
avoid the worst of the potential traps.

For example: Suppose that new tasks stated the maximum CPU resources needed
to complete. Then, exceeding that would be cause for abnormal termination.
Of course, this doesn't cover logical failure.

More advanced example: Suppose that tasks provided a chain of
consciousness log as they execute, and a monitor watches that chain of
consciousness to see that new entries are repeatedly made, that they are
grammatically (machine grammar) correct, and verifies anything that is
easily verifiable.

Even more advanced example: Suppose that a new pseudo-machine were proposed,
whose fundamental code consisted of reasonable operations in the
logic-domain being exploited by the AGI. The interpreter for this
pseudo-machine could then employ countless internal checks as it operated,
and quickly determine when things went wrong.
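
(A toy Python sketch of the first two examples; the names and limits are
invented, not a proposed spec. A task declares its CPU budget up front and
keeps a chain-of-consciousness log, and a monitor aborts it when the budget
is exceeded or a log entry is malformed.)

# Sketch: declared resource budget plus a monitored chain-of-consciousness log.
import time

class Task:
    def __init__(self, name, max_cpu_seconds):
        self.name = name
        self.max_cpu_seconds = max_cpu_seconds   # declared resource budget
        self.log = []                            # chain-of-consciousness entries

def run_with_monitor(task, work_steps):
    start = time.process_time()
    for step in work_steps:
        if time.process_time() - start > task.max_cpu_seconds:
            raise RuntimeError(task.name + ": exceeded declared CPU budget")
        entry = step()                           # each step reports what it did
        if not isinstance(entry, str) or not entry:
            raise RuntimeError(task.name + ": malformed chain-of-consciousness entry")
        task.log.append(entry)

task = Task("maze-solver", max_cpu_seconds=2.0)
run_with_monitor(task, [lambda: "expanded node 17", lambda: "found exit at (9, 4)"])
print(task.log)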

Does anyone out there have something, anything in the way of an interface
spec to really start this discussion?

Steve Richfield
===
On 7/3/08, William Pearson [EMAIL PROTECTED] wrote:

 Sorry about the long thread jack

 2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
  On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED]
 wrote:
  Because it is dealing with powerful stuff, when it gets it wrong it
  goes wrong powerfully. You could lock the experimental code away in a
  sand box inside A, but then it would be a separate program just one
  inside A, but it might not be able to interact with programs in a way
  that it can do its job.
 
  There are two grades of faultiness. frequency and severity. You cannot
  predict the severity of faults of arbitrary programs (and accepting
  arbitrary programs from the outside world is something I want the
  system to be able to do, after vetting etc).
 
 
  You can't prove any interesting thing about an arbitrary program. It
  can behave like a Friendly AI before February 25, 2317, and like a
  Giant Cheesecake AI after that.
 
 Whoever said you could? The whole system is designed around the
 ability to take in or create arbitrary code, give it only minimal
 access to other programs that it can earn and lock it out from that
 ability when it does something bad.

 By arbitrary code I don't mean random, I mean stuff that has not
 formally been proven to have the properties you want. Formal proof is
 too high a burden to place on things that you want to win. You might
 not have the right axioms to prove the changes you want are right.

 Instead you can see the internals of the system as a form of
 continuous experiments. B is always testing a property of A or  A', if
 at any time it stops having the property that B looks for then B flags
 it as buggy.

 I know this doesn't have the properties you would look for in a
 friendly AI set to dominate the world. But I think it is similar to
 the way humans work, and will be as chaotic and hard to grok as our
 neural structure. So as likely as humans are to explode intelligently.

 Will Pearson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread Terren Suydam
Will,

 Remember when I said that a purpose is not the same thing
 as a goal?
 The purpose that the system might be said to have embedded
 is
 attempting to maximise a certain signal. This purpose
 presupposes no
 ontology. The fact that this signal is attached to a human
 means the
 system as a whole might form the goal to try and please the
 human. Or
 depending on what the human does it might develop other
 goals. Goals
 are not the same as purposes. Goals require the intentional
 stance,
 purposes the design.

To the extent that purpose is not related to goals, it is a meaningless term. 
In what possible sense is it worthwhile to talk about purpose if it doesn't 
somehow impact what an intelligence actually does?

 Possibly, but it will be another huge research topic to
 actually talk
 to the things that evolve in the artificial universe, as
 they will
 share very little background knowledge or ontology with us.
 I wish you
 luck and will be interested to see where you go but the
 alife route is
 just too slow and resource-intensive for my liking.
 
   Will

That is probably the most common criticism of the path I advocate, and I 
certainly understand that; it's not for everyone. I will be very interested in 
your results as well, good luck!

Terren


  




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
Sorry about the late reply.

snip some stuff sorted out

2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
 On Tue, Jul 1, 2008 at 2:02 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:

 If internals are programmed by humans, why do you need automatic
 system to assess them? It would be useful if you needed to construct
 and test some kind of combination/setting automatically, but not if
 you just test manually-programmed systems. How does the assessment
 platform help in improving/accelerating the research?


 Because to be interesting the human-specified programs need to be
 autogenous, as in Josh Storrs Hall's terminology, which means
 self-building: capable of altering the stuff they are made of, in this
 case the machine-code equivalent. So you need the human to assess the
 improvements the system makes, for whatever purpose the human wants
 the system to perform.


 Altering the stuff they are made of is instrumental to achieving the
 goal, and should be performed where necessary, but it doesn't happen,
 for example, with individual brains.

I think it happens at the level of neural structures. I.e. I think
neural structures control the development of other neural structures.

 (I was planning to do the next
 blog post on this theme, maybe tomorrow.) Do you mean to create
 population of altered initial designs and somehow select from them (I
 hope not, it is orthogonal to what modification is for in the first
 place)? Otherwise, why do you still need automated testing? Could you
 present a more detailed use case?


I'll try and give a fuller explanation later on.


 This means he needs to use a bunch more resources to get a singular
 useful system. Also the system might not do what he wants, but I don't
 think he minds about that.

 I'm allowing humans to design everything, just allowing the very low
 level to vary. Is this clearer?

 What do you mean by varying low level, especially in human-designed systems?

 The machine code the program is written in. Or in a java VM, the java 
 bytecode.


 This still didn't make this point clearer. You can't vary the
 semantics of low-level elements from which software is built, and if
 you don't modify the semantics, any other modification is superficial
 and irrelevant. If it's not quite 'software' that you are running, and
 it is able to survive the modification of lower level, using the terms
 like 'machine code' and 'software' is misleading. And in any case,
 it's not clear what this modification of low level achieves. You can't
 extract work from obfuscation and tinkering, the optimization comes
 from the lawful and consistent pressure in the same direction.


Okay, let us clear things up. There are two things that need to be
designed: a computer architecture or virtual machine, and the programs that
form the initial set of programs within the system. Let us call the
internal programs vmprograms to avoid confusion. The vmprograms should
do all the heavy lifting (reasoning, creating new programs); this is
where the lawful and consistent pressure would come from.

It is the source code of the vmprograms that all needs to be changeable.

However, the pressure will have to be somewhat experimental to be
powerful; you don't know what bugs a new program will have (if you are
doing a non-tight proof search through the space of programs). So the
point of the VM is to provide a safety net. If an experiment goes
awry, then the VM should allow each program to limit the bugged
vmprogram's ability to affect it and eventually have it removed and the
resources applied to it reclaimed.

Here is a toy scenario where the system needs this ability. *Note it
is not anything that is like a full AI but illustrates a facet of
something a full AI needs IMO*.

Consider a system trying to solve a task, e.g. navigate a maze, that
also has a number of different people out there giving helpful hints
on how to solve the maze. These hints are in the form of patches to
the vmprograms, e.g. changing the representation to 6-dimensional,
giving another patch language that has better patches. So the system
would make copies of the part of it to be patched and then patch it.
Now you could give a patch evaluation module to see which patch works
best, but what would happen if the module that implemented that
vmprogram wanted to be patched? My solution to the problem is to allow
the patched and non-patched versions to compete in the ad hoc economic arena,
and see which one wins.
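
(A toy Python sketch of that competition; the solvers and success rates are
made up. The original and the patched copy both run, each earns or loses
credit on the jobs it handles, and the VM eventually reclaims the loser's
resources.)

import random

def maze_2d(job):
    # Original representation: stand-in behaviour, solves fewer jobs.
    return random.random() < 0.5

def maze_6d(job):
    # Patched copy using the hinted 6-dimensional representation.
    return random.random() < 0.8

def compete(original, patched, jobs, stake=1.0):
    # Each vmprogram starts with some credit and is paid or charged per job.
    bank = {original.__name__: 10.0, patched.__name__: 10.0}
    for job in jobs:
        for solver in (original, patched):
            bank[solver.__name__] += stake if solver(job) else -stake
    # The loser is the candidate for removal; its resources get reclaimed.
    return max(bank, key=bank.get), bank

winner, bank = compete(maze_2d, maze_6d, jobs=range(100))
print("winner:", winner, bank)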

Does this clear things up?

 Will Pearson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Mike Tintner

Terren,

This is going too far. We can reconstruct to a considerable extent how 
humans think about problems - their conscious thoughts. Artists have been 
doing this reasonably well for hundreds of years. Science has so far avoided 
this, just as it avoided studying first the mind, with behaviourism,  then 
consciousness,. The main reason cognitive science and psychology have 
avoided stream-of-thought studies (apart from v. odd scientists like Jerome 
Singer) is that conscious thought about problems is v. different from the 
highly ordered, rational, thinking of programmed computers which cog. sci. 
uses as its basic paradigm. In fact, human thinking is fundamentally 
different - the conscious self has major difficulty concentrating on any 
problem for any length of time -  controlling the mind for more than a 
relatively few seconds, (as religious and humanistic thinkers have been 
telling us for thousands of years). Computers of course have perfect 
concentration forever. But that's because computers haven't had to deal with 
the type of problems that we do - the problematic problems where you don't, 
basically, know the answer, or how to find the answer, before you start.


For this kind of problem - which is actually what differentiates AGI from 
narrow AI - human thinking, creative as opposed to rational, stumbling, 
scatty, and freely associative, is actually IDEAL, for all its 
imperfections.


Yes, even if we extend our model of intelligence to include creative as well 
as rational thinking, it will still be an impoverished model, which may not 
include embodied thinking and perhaps other dimensions. But hey, we'll get 
there bit by bit, (just not, as we both agree, all at once in one five-year 
leap).


Terren: My points about the pitfalls of theorizing about intelligence apply 
to any and all humans who would attempt it - meaning, it's not necessary to 
characterize AI folks in one way or another. There are any number of aspects 
of intelligence we could highlight that pose a challenge to orthodox models 
of intelligence, but the bigger point is that there are fundamental limits 
to the ability of an intelligence to observe itself, in exactly the same way 
that an eye cannot see itself.


Consciousness and intelligence are present in every possible act of 
contemplation, so it is impossible to gain a vantage point of intelligence 
from outside of it. And that's exactly what we pretend to do when we 
conceptualize it within an artificial construct. This is the principal 
conceit of AI, that we can understand intelligence in an objective way, 
and model it well enough to reproduce by design.


Terren

--- On Tue, 7/1/08, Mike Tintner [EMAIL PROTECTED] wrote:


Terren:It's to make the larger point that we may be so
immersed in our own
conceptualizations of intelligence - particularly because
we live in our
models and draw on our own experience and introspection to
elaborate them -
that we may have tunnel vision about the possibilities for
better or
different models. Or, we may take for granted huge swaths
of what makes us
so smart, because it's so familiar, or below the radar
of our conscious
awareness, that it doesn't even occur to us to reflect
on it.

No 2 is more relevant - AI-ers don't seem to introspect
much. It's an irony
that the way AI-ers think when creating a program bears v.
little
resemblance to the way programmed computers think. (Matt
started to broach
this when he talked a while back of computer programming as
an art). But
AI-ers seem to have no interest in the discrepancy - which
again is ironic,
because analysing it would surely help them with their
programming as well
as the small matter of understanding how general
intelligence actually
works.

In fact  - I just looked - there is a longstanding field on
psychology of
programming. But it seems to share the deficiency of
psychology and
cognitive science generally which is : no study of the
stream-of-conscious-thought, especially conscious
problemsolving. The only
AI figure I know who did take some interest here was
Herbert Simon who
helped establish the use of verbal protocols.






Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Vladimir Nesov
On Wed, Jul 2, 2008 at 2:48 PM, William Pearson [EMAIL PROTECTED] wrote:

 Okay, let us clear things up. There are two things that need to be
 designed: a computer architecture or virtual machine, and the programs that
 form the initial set of programs within the system. Let us call the
 internal programs vmprograms to avoid confusion. The vmprograms should
 do all the heavy lifting (reasoning, creating new programs); this is
 where the lawful and consistent pressure would come from.

 It is the source code of the vmprograms that all needs to be changeable.

 However, the pressure will have to be somewhat experimental to be
 powerful; you don't know what bugs a new program will have (if you are
 doing a non-tight proof search through the space of programs). So the
 point of the VM is to provide a safety net. If an experiment goes
 awry, then the VM should allow each program to limit the bugged
 vmprogram's ability to affect it and eventually have it removed and the
 resources applied to it reclaimed.

 Here is a toy scenario where the system needs this ability. *Note it
 is not anything that is like a full AI but illustrates a facet of
 something a full AI needs IMO*.

 Consider a system trying to solve a task, e.g. navigate a maze, that
 also has a number of different people out there giving helpful hints
 on how to solve the maze. These hints are in the form of patches to
 the vmprograms, e.g. changing the representation to 6-dimensional,
 giving another patch language that has better patches. So the system
 would make copies of the part of it to be patched and then patch it.
 Now you could give a patch evaluation module to see which patch works
 best, but what would happen if the module that implemented that
 vmprogram wanted to be patched? My solution to the problem is to allow
 the patched and non-patched versions to compete in the ad hoc economic arena,
 and see which one wins.


What are the criteria that the VM applies to vmprograms? If the VM just
short-circuits the economic pressure of agents to one another, it in
itself doesn't specify the direction of the search. The human economy
works to efficiently satisfy the goals of human beings who already
have their moral complexity. It propagates the decisions that
customers make, and fuels the allocation of resources based on these
decisions. The efficiency of an economy lies in its efficiency of responding to
information about human goals. If your VM just feeds the decisions on
themselves, what stops the economy from focusing on efficiently doing
nothing?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Terren Suydam

Mike, 

 This is going too far. We can reconstruct to a considerable
 extent how  humans think about problems - their conscious thoughts.

Why is it going too far?  I agree with you that we can reconstruct thinking, to 
a point. I notice you didn't say we can completely reconstruct how humans 
think about problems. Why not?

We have two primary means for understanding thought, and both are deeply flawed:

1. Introspection. Introspection allows us to analyze our mental life in a 
reflective way. This is possible because we are able to construct mental models 
of our mental models. There are three flaws with introspection. The first, 
least serious flaw is that we only have access to that which is present in our 
conscious awareness. We cannot introspect about unconscious processes, by 
definition.

This is a less serious objection because it's possible in practice to become 
conscious of phenomena that were previously unconscious, by developing our 
meta-mental-models. The question here becomes, is there any reason in principle 
that we cannot become conscious of *all* mental processes?

The second flaw is that, because introspection relies on the meta-models we 
need to make sense of our internal, mental life, the possibility is always 
present that our meta-models themselves are flawed. Worse, we have no way of 
knowing if they are wrong, because we often unconsciously, unwittingly deny 
evidence contrary to our conception of our own cognition, particularly when it 
runs counter to a positive account of our self-image.

Harvard's Project Implicit experiment 
(https://implicit.harvard.edu/implicit/) is a great way to demonstrate how we 
remain ignorant of deep, unconscious biases. Another example is how little we 
understand the contribution of emotion to our decision-making. Joseph Ledoux 
and others have shown fairly convincingly that emotion is a crucial part of 
human cognition, but most of us (particularly us men) deny the influence of 
emotion on our decision making.

The final flaw is the most serious. It says there is a fundamental limit to 
what introspection has access to. This is the "an eye cannot see itself" 
objection. "But I can see my eyes in the mirror," says the devil's advocate. Of
course, a mirror lets us observe a reflected version of our eye, and this is 
what introspection is. But we cannot see inside our own eye, directly - it's a 
fundamental limitation of any observational apparatus. Likewise, we cannot see 
inside the very act of model-simulation that enables introspection. 
Introspection relies on meta-models, or models about models, which are 
activated/simulated *after the fact*. We might observe ourselves in the act of 
introspection, but that is nothing but a meta-meta-model. Each introspectional 
act by necessity is one step (at least) removed from the direct, in-the-present 
flow of cognition. This means that we can never observe the cognitive machinery 
that enables the act of introspection itself.

And if you don't believe that introspection relies on cognitive machinery 
(maybe you're a dualist, but then why are you on an AI list? :-), ask yourself 
why we can't introspect about ourselves before a certain point in our young 
lives. It relies on a sufficiently sophisticated toolset that requires a 
certain amount of development before it is even possible.

2. Theory. Our theories of cognition are another path to understanding, and 
much of theory is directly or indirectly informed by introspection. When 
introspection fails (as in language acquisition), we rely completely on theory. 
The flaw with theory should be obvious. We have no direct way of testing 
theories of cognition, since we don't understand the connection between the 
mental and the physical. At best, we can use clever indirect means for 
generating evidence, and we usually have to accept the limits of reliability of 
subjective reports. 

Terren

--- On Wed, 7/2/08, Mike Tintner [EMAIL PROTECTED] wrote:
 Terren,
 
 This is going too far. We can reconstruct to a considerable
 extent how 
 humans think about problems - their conscious thoughts.
 Artists have been 
 doing this reasonably well for hundreds of years. Science
 has so far avoided 
 this, just as it avoided studying first the mind, with
 behaviourism,  then 
 consciousness,. The main reason cognitive science and
 psychology have 
 avoided stream-of-thought studies (apart from v. odd
 scientists like Jerome 
 Singer) is that conscious thought about problems is v.
 different from the 
 highly ordered, rational, thinking of programmed computers
 which cog. sci. 
 uses as its basic paradigm. In fact, human thinking is
 fundamentally 
 different - the conscious self has major difficulty
 concentrating on any 
 problem for any length of time -  controlling the mind for
 more than a 
 relatively few seconds, (as religious and humanistic
 thinkers have been 
 telling us for thousands of years). Computers of course
 have perfect 
 concentration forever. 

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Mike Tintner

Terren,

Obviously, as I indicated, I'm not suggesting that we can easily construct a 
total model of human cognition. But it ain't that hard to reconstruct 
reasonable and highly informative, if imperfect,  models of how humans 
consciously think about problems. As I said, artists have been doing a 
reasonable job for centuries. Shakespeare, who really started the inner 
monologue, was arguably the first scientist of consciousness. The kind of 
standard argument you give below - the eye can't look at itself - is 
actually nonsense. Your conscious, inner thoughts are not that different 
from your public, recordable dialogue. (Any decent transcript of thought, 
BTW, will give a v. good indication of the emotions involved).


We're not v. far apart here - we agree about the many dimensions of 
cognition, most of which are probably NOT directly accessible to the 
conscious mind. I'm just insisting on the massive importance of studying 
conscious thought. It was, as Crick said, ridiculous for science not to 
study consciousness (it had a lot of rubbish arguments for not doing that, 
then), and it is equally ridiculous and in fact scientifically obscene not to 
study conscious thought. The consequences both for humans generally and AGI 
are enormous.



Terren: Mike,



This is going too far. We can reconstruct to a considerable
extent how  humans think about problems - their conscious thoughts.


Why is it going too far?  I agree with you that we can reconstruct 
thinking, to a point. I notice you didn't say we can completely 
reconstruct how humans think about problems. Why not?


We have two primary means for understanding thought, and both are deeply 
flawed:


1. Introspection. Introspection allows us to analyze our mental life in a 
reflective way. This is possible because we are able to construct mental 
models of our mental models. There are three flaws with introspection. The 
first, least serious flaw is that we only have access to that which is 
present in our conscious awareness. We cannot introspect about unconscious 
processes, by definition.


This is a less serious objection because it's possible in practice to 
 become conscious of phenomena that were previously unconscious, by 
developing our meta-mental-models. The question here becomes, is there any 
reason in principle that we cannot become conscious of *all* mental 
processes?


The second flaw is that, because introspection relies on the meta-models 
we need to make sense of our internal, mental life, the possibility is 
always present that our meta-models themselves are flawed. Worse, we have 
no way of knowing if they are wrong, because we often unconsciously, 
unwittingly deny evidence contrary to our conception of our own cognition, 
particularly when it runs counter to a positive account of our self-image.


Harvard's Project Implicit experiment 
(https://implicit.harvard.edu/implicit/) is a great way to demonstrate how 
we remain ignorant of deep, unconscious biases. Another example is how 
little we understand the contribution of emotion to our decision-making. 
Joseph Ledoux and others have shown fairly convincingly that emotion is a 
crucial part of human cognition, but most of us (particularly us men) deny 
the influence of emotion on our decision making.


The final flaw is the most serious. It says there is a fundamental limit 
to what introspection has access to. This is the "an eye cannot see 
itself" objection. "But I can see my eyes in the mirror," says the devil's 
advocate. Of course, a mirror lets us observe a reflected version of our 
eye, and this is what introspection is. But we cannot see inside our own 
eye, directly - it's a fundamental limitation of any observational 
apparatus. Likewise, we cannot see inside the very act of model-simulation 
that enables introspection. Introspection relies on meta-models, or 
models about models, which are activated/simulated *after the fact*. We 
might observe ourselves in the act of introspection, but that is nothing 
but a meta-meta-model. Each introspective act by necessity is one step 
(at least) removed from the direct, in-the-present flow of cognition. This 
means that we can never observe the cognitive machinery that enables the 
act of introspection itself.


And if you don't believe that introspection relies on cognitive machinery 
(maybe you're a dualist, but then why are you on an AI list? :-), ask 
yourself why we can't introspect about ourselves before a certain point in 
our young lives. It relies on a sufficiently sophisticated toolset that 
requires a certain amount of development before it is even possible.


2. Theory. Our theories of cognition are another path to understanding, 
and much of theory is directly or indirectly informed by introspection. 
When introspection fails (as in language acquisition), we rely completely 
on theory. The flaw with theory should be obvious. We have no direct way 
of testing theories of cognition, since we don't understand the connection 
between the mental and the physical. At best, we can use clever indirect 
means for generating evidence, and we usually have to accept the limits of 
reliability of subjective reports.

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Terren Suydam [EMAIL PROTECTED]:

 Mike,

 This is going too far. We can reconstruct to a considerable
 extent how  humans think about problems - their conscious thoughts.

 Why is it going too far?  I agree with you that we can reconstruct thinking, 
 to a point. I notice you didn't say we can completely reconstruct how humans 
 think about problems. Why not?

 We have two primary means for understanding thought, and both are deeply 
 flawed:

 1. Introspection. Introspection allows us to analyze our mental life in a 
 reflective way. This is possible because we are able to construct mental 
 models of our mental models. There are three flaws with introspection. The 
 first, least serious flaw is that we only have access to that which is 
 present in our conscious awareness. We cannot introspect about unconscious 
 processes, by definition.

 This is a less serious objection because it's possible in practice to become 
 conscious of phenomena that were previously unconscious, by developing our 
 meta-mental-models. The question here becomes, is there any reason in 
 principle that we cannot become conscious of *all* mental processes?

 The second flaw is that, because introspection relies on the meta-models we 
 need to make sense of our internal, mental life, the possibility is always 
 present that our meta-models themselves are flawed. Worse, we have no way of 
 knowing if they are wrong, because we often unconsciously, unwittingly deny 
 evidence contrary to our conception of our own cognition, particularly when 
 it runs counter to a positive account of our self-image.

 Harvard's Project Implicit experiment 
 (https://implicit.harvard.edu/implicit/) is a great way to demonstrate how we 
 remain ignorant of deep, unconscious biases. Another example is how little we 
 understand the contribution of emotion to our decision-making. Joseph Ledoux 
 and others have shown fairly convincingly that emotion is a crucial part of 
 human cognition, but most of us (particularly us men) deny the influence of 
 emotion on our decision making.

 The final flaw is the most serious. It says there is a fundamental limit to 
 what introspection has access to. This is the "an eye cannot see itself" 
 objection. "But I can see my eyes in the mirror," says the devil's advocate. Of 
 course, a mirror lets us observe a reflected version of our eye, and this is 
 what introspection is. But we cannot see inside our own eye, directly - it's 
 a fundamental limitation of any observational apparatus. Likewise, we cannot 
 see inside the very act of model-simulation that enables introspection. 
 Introspection relies on meta-models, or models about models, which are 
 activated/simulated *after the fact*. We might observe ourselves in the act 
 of introspection, but that is nothing but a meta-meta-model. Each 
 introspectional act by necessity is one step (at least) removed from the 
 direct, in-the-present flow of cognition. This means that we can never 
 observe the cognitive machinery that enables the act of introspection itself.

 And if you don't believe that introspection relies on cognitive machinery 
 (maybe you're a dualist, but then why are you on an AI list? :-), ask 
 yourself why we can't introspect about ourselves before a certain point in 
 our young lives. It relies on a sufficiently sophisticated toolset that 
 requires a certain amount of development before it is even possible.

 2. Theory. Our theories of cognition are another path to understanding, and 
 much of theory is directly or indirectly informed by introspection. When 
 introspection fails (as in language acquisition), we rely completely on 
 theory. The flaw with theory should be obvious. We have no direct way of 
 testing theories of cognition, since we don't understand the connection 
 between the mental and the physical. At best, we can use clever indirect 
 means for generating evidence, and we usually have to accept the limits of 
 reliability of subjective reports.


My plan is to go for 3) Usefulness. Cognition is useful from an
evolutionary point of view; if we try to create systems that are
useful in the same situations (social interaction, building world models),
then we might one day stumble upon cognition.

To expand on usefulness in social contexts, you have to ask yourself
what the point of language is and why it is useful in an evolutionary
setting. One thing the point of language is not is fooling humans
into thinking you are human, which is why I am annoyed at all the
chatbots that get coverage as AI.

I'll write more on this later.

This, by the way, is why I don't self-organise purpose. I am pretty sure
a specified purpose (not at all the same thing as a goal) is needed
for an intelligence.

  Will



Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Terren Suydam

Mike,

That's a rather weak reply. I'm open to the possibility that my ideas are 
incorrect or need improvement, but calling what I said "nonsense" without further 
justification is just hand-waving.

Unless you mean this as your justification:
"Your conscious, inner thoughts are not that different from your public, 
recordable dialogue."

How this amounts to an objection to my points about introspection is beyond 
me... care to elaborate?

Terren

--- On Wed, 7/2/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Terren,
 
 Obviously, as I indicated, I'm not suggesting that we
 can easily construct a 
 total model of human cognition. But it ain't that hard
 to reconstruct 
 reasonable and highly informative, if imperfect,  models of
 how humans 
 consciously think about problems. As I said, artists have
 been doing a 
 reasonable job for centuries. Shakespeare, who really
 started the inner 
 monologue, was arguably the first scientist of
 consciousness. The kind of 
 standard argument you give below - the eye can't look
 at itself - is 
 actually nonsense. Your conscious, inner thoughts are not
 that different 
 from your public, recordable dialogue. (Any decent
 transcript of thought, 
 BTW, will give a v. good indication of the emotions
 involved).
 
 We're not v. far apart here - we agree about the many
 dimensions of 
 cognition, most of which are probably NOT directly
 accessible to the 
 conscious mind. I'm just insisting on the massive
 importance of studying 
 conscious thought. It was, as Crick said,
 ridiculous for science not to 
 study consciousness - (it had a lot of rubbish arguments
 for not doing that, 
 then) - it is equally ridiculous and in fact scientifically
 obscene not to 
 study conscious thought. The consequences both for humans
 generally and AGI 
 are enormous.
 
 
 Terren: Mike,
 
  This is going too far. We can reconstruct to a
 considerable
  extent how  humans think about problems - their
 conscious thoughts.
 
  Why is it going too far?  I agree with you that we can
 reconstruct 
  thinking, to a point. I notice you didn't say
 we can completely 
  reconstruct how humans think about problems. Why
 not?
 
  We have two primary means for understanding thought,
 and both are deeply 
  flawed:
 
  1. Introspection. Introspection allows us to analyze
 our mental life in a 
  reflective way. This is possible because we are able
 to construct mental 
  models of our mental models. There are three flaws
 with introspection. The 
  first, least serious flaw is that we only have access
 to that which is 
  present in our conscious awareness. We cannot
 introspect about unconscious 
  processes, by definition.
 
  This is a less serious objection because it's
 possible in practice to 
  become conscious of phenomena there were previously
 unconscious, by 
  developing our meta-mental-models. The question here
 becomes, is there any 
  reason in principle that we cannot become conscious of
 *all* mental 
  processes?
 
  The second flaw is that, because introspection relies
 on the meta-models 
  we need to make sense of our internal, mental life,
 the possibility is 
  always present that our meta-models themselves are
 flawed. Worse, we have 
  no way of knowing if they are wrong, because we often
 unconsciously, 
  unwittingly deny evidence contrary to our conception
 of our own cognition, 
  particularly when it runs counter to a positive
 account of our self-image.
 
  Harvard's Project Implicit experiment 
  (https://implicit.harvard.edu/implicit/) is a great
 way to demonstrate how 
  we remain ignorant of deep, unconscious biases.
 Another example is how 
  little we understand the contribution of emotion to
 our decision-making. 
  Joseph Ledoux and others have shown fairly
 convincingly that emotion is a 
  crucial part of human cognition, but most of us
 (particularly us men) deny 
  the influence of emotion on our decision making.
 
  The final flaw is the most serious. It says there is a
 fundamental limit 
  to what introspection has access to. This is the
 an eye cannot see 
  itself objection. But I can see my eyes in the
 mirror, says the devil's 
  advocate. Of course, a mirror lets us observe a
 reflected version of our 
  eye, and this is what introspection is. But we cannot
 see inside our own 
  eye, directly - it's a fundamental limitation of
 any observational 
  apparatus. Likewise, we cannot see inside the very act
 of model-simulation 
  that enables introspection. Introspection relies on
 meta-models, or 
  models about models, which are
 activated/simulated *after the fact*. We 
  might observe ourselves in the act of introspection,
 but that is nothing 
  but a meta-meta-model. Each introspectional act by
 necessity is one step 
  (at least) removed from the direct, in-the-present
 flow of cognition. This 
  means that we can never observe the cognitive
 machinery that enables the 
  act of introspection itself.
 
  And if you don't believe that introspection 

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Terren Suydam

Will,

 My plan is go for 3) Usefulness. Cognition is useful from
 an
 evolutionary point of view, if we try to create systems
 that are
 useful in the same situations (social, building world
 models), then we
 might one day stumble upon cognition.

Sure, that's a valid approach for creating something we might call intelligent. 
My diatribe there was about human thought (the only kind we know of), not 
cognition in general.
 
 This by the way is why I don't self-organise purpose. I
 am pretty sure
 a specified purpose (not the same thing as a goal, at all)
 is needed
 for an intelligence.
 
   Will

OK, then who or what specified the purpose of the first life forms? It's that 
intuition of yours that leads directly to Intelligent Design. As an aside, I 
love the irony that AI researchers who try to design intelligence are 
unwittingly giving ammunition to Intelligent Design arguments. 

Terren


  




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
 On Wed, Jul 2, 2008 at 2:48 PM, William Pearson [EMAIL PROTECTED] wrote:

 Okay, let us clear things up. There are two things that need to be
 designed: a computer architecture or virtual machine, and the programs that
 form the initial set of programs within the system. Let us call the
 internal programs vmprograms to avoid confusion. The vmprograms should
 do all the heavy lifting (reasoning, creating new programs); this is
 where the lawful and consistent pressure would come from.

 It is at the source-code level of the vmprograms that everything needs to be changeable.

 However, the pressure will have to be somewhat experimental to be
 powerful; you don't know what bugs a new program will have (if you are
 doing a non-tight proof search through the space of programs). So the
 point of the VM is to provide a safety net. If an experiment goes
 awry, the VM should allow each program to limit the bugged
 vmprogram's ability to affect it, and eventually have it removed and the
 resources allocated to it reclaimed.

 Here is a toy scenario where the system needs this ability. *Note it
 is not anything that is like a full AI but illustrates a facet of
 something a full AI needs IMO*.

 Consider a system trying to solve a task, e.g. navigating a maze, that
 also has a number of different people out there giving helpful hints
 on how to solve the maze. These hints are in the form of patches to
 the vmprograms, e.g. changing the representation to a 6-dimensional one, or
 supplying another patch language that produces better patches. So the system
 would make copies of the part of itself to be patched and then apply the patch.
 Now you could provide a patch-evaluation module to see which patch works
 best, but what would happen if the vmprogram that implemented that
 evaluation module itself wanted to be patched? My solution to the problem is to let
 the patched and non-patched versions compete in the ad hoc economic arena,
 and see which one wins.


 What are the criteria that VM applies to vmprograms? If VM just
 short-circuits the economic pressure of agents to one another, it in
 itself doesn't specify the direction of the search. The human economy
 works to efficiently satisfy the goals of human beings who already
 have their moral complexity. It propagates the decisions that
 customers make, and fuels the allocation of resources based on these
 decisions. Efficiency of economy is in efficiency of responding to
 information about human goals. If your VM just feeds the decisions on
 themselves, what stops the economy from focusing on efficiently doing
 nothing?

They would get less credit from the human supervisor. Let me expand on
what I meant about the economic competition. Let us say vmprogram A
makes a copy of itself, called A', with some purposeful tweaks, trying
to make itself more efficient.

A' has some bugs, such that the human notices something wrong with the
system and gives less credit on average each time A' is helping out
rather than A.

Now A and A' both have to bid for the chance to help program B, which
is closer to the output (due to the programming of B); B pays back a
proportion of the credit it gets. The credit B gets will be
lower when A' is helping than when A is helping, so A' will in general
get less than A. There are a few scenarios, ordered from quickest
acting to slowest.

1) B keeps records of who helps him and sees that A' is not helping
him as well as the average, so he no longer lets A' bid. A' loses its
resources when it can no longer keep up the bidding for them.
2) A' continues bidding a lot, to outbid A. However, the average amount
A' pays in bids is more than it gets back from B. A' bankrupts itself and
other programs take its resources.
3) A' doesn't manage to outbid A after a fair few trials, so it meets the
same fate as in scenario 1).

If you start with a bunch of stupid vmprograms, you won't get
anywhere; the system can just dwindle to nothing. You do have to design them
fairly well, just in such a way that the design can change later.
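To make the credit flow concrete, here is a toy sketch of that competition in
Python. Everything specific in it (the reward scale, the 50% payback, the
bidding policies) is an illustrative assumption rather than part of the design;
the point is just that a buggier A', even if it bids hard, pays out more than B
passes back to it (scenario 2 above), while A accumulates credit.

import random

class VMProgram:
    def __init__(self, name, quality, bid_fraction, credit=100.0):
        self.name = name                  # e.g. "A" or the tweaked copy "A'"
        self.quality = quality            # how much the human supervisor likes its help
        self.bid_fraction = bid_fraction  # how aggressively it bids to help B
        self.credit = credit

    def bid(self):
        return self.bid_fraction * self.credit

def auction_round(helpers, payback=0.5):
    # B auctions the chance to help it; the human credits B according to how
    # well the winning helper did, and B pays a proportion of that back.
    solvent = [h for h in helpers if h.credit > 0]   # bankrupt programs drop out
    if not solvent:
        return
    winner = max(solvent, key=VMProgram.bid)
    winner.credit -= winner.bid()                    # the bid is paid up front
    reward_to_b = 2.5 * winner.quality + random.gauss(0, 1.0)
    winner.credit += payback * reward_to_b           # B's share flows back

a  = VMProgram("A",  quality=10.0, bid_fraction=0.1)
a2 = VMProgram("A'", quality=6.0,  bid_fraction=0.2)  # buggier copy, bids harder
for _ in range(200):
    auction_round([a, a2])
print(f"{a.name}: {a.credit:.1f}   {a2.name}: {a2.credit:.1f}")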

  Will




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Abram Demski
How do you assign credit to programs that are good at generating good
children? Particularly, could a program specialize in this, so that it
doesn't do anything useful directly, but contributes only through making highly
useful children?

On Wed, Jul 2, 2008 at 1:09 PM, William Pearson [EMAIL PROTECTED] wrote:
 2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
 On Wed, Jul 2, 2008 at 2:48 PM, William Pearson [EMAIL PROTECTED] wrote:

 Okay let us clear things up. There are two things that need to be
 designed, a computer architecture or virtual machine and programs that
 form the initial set of programs within the system. Let us call the
 internal programs vmprograms to avoid confusion. The vmprograms should
 do all the heavy lifting (reasoning, creating new programs), this is
 where the lawful and consistent pressure would come from.

 It is at source code of vmprograms that all needs to be changeable.

 However the pressure will have to be somewhat experimental to be
 powerful, you don't know what bugs a new program will have (if you are
 doing a non-tight proof search through the space of programs). So the
 point of the VM is to provide a safety net. If an experiment goes
 awry, then the VM should allow each program to limit the bugged
 vmprograms ability to affect it and eventually have it removed and the
 resources applied to it.

 Here is a toy scenario where the system needs this ability. *Note it
 is not anything that is like a full AI but illustrates a facet of
 something a full AI needs IMO*.

 Consider a system trying to solve a task, e.g. navigate a maze, that
 also has a number of different people out there giving helpful hints
 on how to solve the maze. These hints are in the form of patches to
 the vmprograms, e.g. changing the representation to 6-dimensional,
 giving another patch language that has better patches. So the system
 would make copies of the part of it to be patched and then patch it.
 Now you could give a patch evaluation module to see which patch works
 best, but what would happen if the module that implemented that
 vmprogram wanted to be patched? My solution to the problem is to allow
 the patch and non-patched version compete in the ad hoc economic arena,
 and see which one wins.


 What are the criteria that VM applies to vmprograms? If VM just
 shortcircuits the economic pressure of agents to one another, it in
 itself doesn't specify the direction of the search. The human economy
 works to efficiently satisfy the goals of human beings who already
 have their moral complexity. It propagates the decisions that
 customers make, and fuels the allocation of resources based on these
 decisions. Efficiency of economy is in efficiency of responding to
 information about human goals. If your VM just feeds the decisions on
 themselves, what stops the economy from focusing on efficiently doing
 nothing?

 They would get less credit from the human supervisor. Let me expand on
 what I meant about the economic competition. Let us say vmprogram A
 makes a copy of itself, called A', with some purposeful tweaks, trying
 to make itself more efficient.

 A' has some bugs such that the human notices something wrong with the
 system, she gives less credit on average each time A' is helping out
 rather than A.

 Now A and A' both have to bid for the chance to help program B which
 is closer to the outputting (due to the programming of B), B pays a
 proportion of the credit it gets back. Now the credit B gets will be
 lower when A' is helping, than when A is helping. So A' will get less
 in general than A. There are a few scenarios, ordered from quickest
 acting to slowest.

 1 ) B keeps records of who helps him and sees that A' is not helping
 him as well as the average, so no longer lets A' bid. A' resources get
 used when it can't keep up bidding for them.
 2) A' continues bidding a lot, to outbid A. However the average amount
 A' gets is less than it gets back from B. A' bankrupts itself and
 other programs use its resources.
 3) A' doesn't manage to outbid A after a fair few trials, so gets the
 same fate as it does in scenario 1)

 If you start with a bunch of stupid vmprograms, you won't get
 anywhere. It can just go to nothingness, you do have to design them
 fairly well, just in such a way that that design can change later.

  Will







Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Abram Demski [EMAIL PROTECTED]:
 How do you assign credit to programs that are good at generating good
 children?

I never directly assign credit, apart from the first stage. The rest
of the credit assignment is handled by the vmprograms, er,
programming.


 Particularly, could a program specialize in this, so that it
 doesn't do anything useful directly but always through making highly
 useful children?

As the parent controls the code of its offspring, it could embed code
in the offspring to pass a small portion of the credit they earn back
to it. It would have to be careful how much to skim off, so that the
offspring could still thrive.
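A rough sketch of that skimming arrangement, with an arbitrary 5% cut standing
in for "a small portion":

class Parent:
    def __init__(self):
        self.credit = 0.0

class Offspring:
    def __init__(self, parent, skim=0.05):
        self.parent = parent   # who generated this program
        self.skim = skim       # the cut the parent embedded at creation time
        self.credit = 0.0

    def receive_credit(self, amount):
        cut = self.skim * amount
        self.parent.credit += cut        # parent is rewarded for a useful child
        self.credit += amount - cut      # the rest keeps the offspring solvent

p = Parent()
child = Offspring(p)
child.receive_credit(100.0)
print(p.credit, child.credit)   # 5.0 95.0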

  Will




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
 On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote:
 They would get less credit from the human supervisor. Let me expand on
 what I meant about the economic competition. Let us say vmprogram A
 makes a copy of itself, called A', with some purposeful tweaks, trying
 to make itself more efficient.

 So, this process performs optimization: A has a goal that it tries to
 express in the form of A'. What is the problem with the algorithm that A
 uses? If this algorithm is stupid (in a technical sense), A' is worse
 than A and we can detect that. But this means that in fact, A' doesn't
 do its job and all the search pressure comes from program B that ranks
 the performance of A or A'. This
 generate-blindly-or-even-stupidly-and-check is a very inefficient
 algorithm. If, on the other hand, A happens to be a good program, then
 A' has a good chance of being better than A, and anyway A has some
 understanding of what 'better' means - so then what is the role of B? B
 adds almost no additional pressure, almost everything is done by A.

 How do you distribute the optimization pressure between generating
 programs (A) and checking programs (B)? Why do you need to do that at
 all, what is the benefit of generating and checking separately,
 compared to reliably generating from the same point (A alone)? If
 generation is not reliable enough, it probably won't be useful as
 optimization pressure anyway.


The point of A and A' is that A', if better, may one day completely
replace A. What counts as very good? Is a 1 in 100 chance of making a mistake
when generating its successor very good? If you want A' to be able to
replace A, that is only 100 generations before you have made a bad
mistake, and then where do you go? You have a bugged program and
nothing to act as a watchdog.

Also, if A' is better than A at time t, there is no guarantee that
it will stay that way. Changes in the environment might favour one
optimisation over another. If they both do things well, but different
things, then both A and A' might survive in different niches.

I would also be interested in why you think we have programmers and
system testers in the real world.

Also worth noting: most optimisation will be done inside the
vmprograms; this process is only for very fundamental code changes,
e.g. changing representations, biases, or ways of creating offspring -
things that cannot be tested easily any other way. I'm quite happy for
it to be slow, because this process is not where the majority of the
quickness of the system will rest. But this process is needed for
intelligence, else you will be stuck with certain ways of doing things
when they are not useful.

  Will Pearson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Vladimir Nesov
On Thu, Jul 3, 2008 at 12:59 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
 On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote:
 They would get less credit from the human supervisor. Let me expand on
 what I meant about the economic competition. Let us say vmprogram A
 makes a copy of itself, called A', with some purposeful tweaks, trying
 to make itself more efficient.

 So, this process performs optimization, A has a goal that it tries to
 express in form of A'. What is the problem with the algorithm that A
 uses? If this algorithm is stupid (in a technical sense), A' is worse
 than A and we can detect that. But this means that in fact, A' doesn't
 do its job and all the search pressure comes from program B that ranks
 the performance of A or A'. This
 generate-blindly-or-even-stupidly-and-check is a very inefficient
 algorithm. If, on the other hand, A happens to be a good program, then
 A' has a good chance of being better than A, and anyway A has some
 understanding of what 'better' means, then what is the role of B? B
 adds almost no additional pressure, almost everything is done by A.

 How do you distribute the optimization pressure between generating
 programs (A) and checking programs (B)? Why do you need to do that at
 all, what is the benefit of generating and checking separately,
 compared to reliably generating from the same point (A alone)? If
 generation is not reliable enough, it probably won't be useful as
 optimization pressure anyway.


 The point of A and A' is that A', if better, may one day completely
 replace A. What is very good? Is 1 in 100 chances of making a mistake
 when generating its successor very good? If you want A' to be able to
 replace A, that is only 100 generations before you have made a bad
 mistake, and then where do you go? You have a bugged program and
 nothing to act as a watchdog.

 Also if A' is better than A at time t, there is no guarantee that
 it will stay that way. Changes in the environment might favour one
 optimisation over another. If they both do things well, but different
 things then both A and A' might survive in different niches.


I suggest you read ( http://sl4.org/wiki/KnowabilityOfFAI )
If your program is a faulty optimizer that can't pump the reliability
out of its optimization, you are doomed. I assume you argue that you
don't want to include B in A, because a descendant of A may start to
fail unexpectedly. But if you reliably copy B inside each of A's
descendants, this particular problem won't appear. The main question
is: what is the difference between just trying to build a
self-improving program A, and doing so inside your testing environment?
If there is no difference, your framework adds nothing. If there
is, it would be good to find out what it is.


 I would also be interested in why you think we have programmers and
 system testers in the real world.


Testing that doesn't even depend on a program's internal structure and
only checks its output (as in your economy setup) isn't nearly good
enough. The testing that you're referring to in this post (activity
performed by humans, based on a specific implementation and an
understanding of a high-level specification that says what the algorithm
should do) has very little to do with the testing that you propose in the
framework (a fixed program B). Anyway, you should answer that
question yourself: what is the essence of the useful activity that is
performed by software testing, and that you capture in your framework?
Arguing that there must be some such essence and that it must transfer
to your setting isn't reliable.


 Also worth noting is most optimisation will be done inside the
 vmprograms, this process is only for very fundamental code changes,
 e.g. changing representations, biases, ways of creating offspring.
 Things that cannot be tested easily any other way. I'm quite happy for
 it to be slow, because this process is not where the majority of
 quickness of the system will rest. But this process is needed for
 intelligence else you will be stuck with certain ways of doing things
 when they are not useful.


Being stuck in development is a problem of the search process; it can just as
well be a problem of process A that should be resolved from within A.


-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: Savants and user-interfaces [was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread Linas Vepstas
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
 On Tue, Jul 1, 2008 at 8:31 AM, Linas Vepstas [EMAIL PROTECTED] wrote:

 Why binary?

 I once skimmed a biography of Ramanujan, he started
 multiplying numbers in his head as a pre-teen. I suspect
 it was grindingly boring, but given the surroundings, might
 have been the most fun thing he could think of.   If you're
 autistic, then focusing obsessively on some task might
 be a great way to pass the time, but if you're more or less
 normal, I doubt you'll get very far with obsessive-compulsive
 self-training -- and that's the problem, isn't it?


 If the signals have properties of their own, I'm afraid they will
 start interfering with each other, which won't allow the circuit to
 execute in real time. Binary signals, on the other hand, can be
 encoded by the activation of nodes of the circuit, active/inactive. If
 you have an AND gate that leads from symbols S1 and S2 to S3, you
 learn to remember S3 only when you see both S1 and S2

What are you trying to accomplish here? I don't see where
you are trying to go with this.

I don't think a human can consciously train one or two neurons
to do something; we train millions at a time -- I'm guessing
savants employ only a few tens of millions of neurons (give or take a
few orders of magnitude) to do their stuff.

Still, an array of 1K by 1K electrodes is well within current
technology; we just don't know where to hook it up,
with the exception of simple motor areas, the retina, and bits
of the auditory circuits.

--linas




Re: Savants and user-interfaces [was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread Vladimir Nesov
On Tue, Jul 1, 2008 at 10:02 AM, Linas Vepstas [EMAIL PROTECTED] wrote:

 What are you trying to accomplish here? I don't see where
 you are trying to go with this.

 I don't think a human can consciously train one or two neurons
 to do something, we train millions at a time. -- I'm guessing
 savants only employ a few tens of million neurons (give or take a
 few orders of magnitude) -- to do their stuff.

 Still, an array of 1K by 1K electrodes is well within current
 technology, we just don't know where to hook this up to,
 with the exception of simple motor areas, retina, and bit
 of the auditory circuits.


Certainly nothing to do with individual neurons. Basically, it's
possible to train a finite state automaton in the mind through
association: you see a certain combination of properties, and you think
the symbol that describes this combination. If such an automaton is
trained not just to handle natural data (such as language), but on a
specifically designed circuit plan, it'll probably be possible to use
it as a directly accessible 'add-on' to the brain that implements a
specific simple function efficiently, such as some operation on
numbers using a clever algorithm, in a way alien to normal deliberative
learning. You don't learn to perform the task; you learn to execute the
individual steps of an algorithm that performs the task.
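As a purely hypothetical illustration of what training to such a circuit plan
could amount to, here is a sketch in Python. Each dictionary stands for a set
of drilled associations (a junction: seeing two active symbols, you think a
third), and the circuit below is a binary adder of my own choosing, not
something implied above; any boolean circuit plan could be encoded the same
way, with the person executing the lookups mentally in a fixed order.

# Memorized junctions: seeing the pair of symbols on the left, you "think" the one on the right.
XOR = {("0", "0"): "0", ("0", "1"): "1", ("1", "0"): "1", ("1", "1"): "0"}
AND = {("0", "0"): "0", ("0", "1"): "0", ("1", "0"): "0", ("1", "1"): "1"}
OR  = {("0", "0"): "0", ("0", "1"): "1", ("1", "0"): "1", ("1", "1"): "1"}

def full_adder(a, b, carry_in):
    # A handful of junction lookups per bit -- the individual steps one would drill.
    s1 = XOR[(a, b)]
    total = XOR[(s1, carry_in)]
    carry_out = OR[(AND[(a, b)], AND[(s1, carry_in)])]
    return total, carry_out

def add_binary(x, y):
    # Ripple-carry addition over binary strings, chaining the memorized junctions.
    x, y = x.zfill(len(y)), y.zfill(len(x))
    carry, out = "0", []
    for a, b in zip(reversed(x), reversed(y)):
        total, carry = full_adder(a, b, carry)
        out.append(total)
    return (carry if carry == "1" else "") + "".join(reversed(out))

print(add_binary("1011", "110"))   # 1011 + 110 = 10001, i.e. 11 + 6 = 17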

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: Savants and user-interfaces [was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread Brad Paulsen
I was nearly kicked out of school in seventh grade for coming up with a method 
of manipulating (multiplying, dividing) large numbers in my head using what I 
later learned was a shift-reduce method.  It was similar to this:


http://www.metacafe.com/watch/742717/human_calculator/

My seventh grade math teacher was so upset with me that he almost struck me 
(physically -- you could get away with that back then). His reason? Wasting 
valuable math class time.


The point is, you can train yourself to do this type of thing and look very 
savant-like.  The above link is just one in a series of videos where the teacher 
presents this system.  It takes practice, but not much more than learning the 
standard multiplication table.
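For anyone who wants the flavour without watching the video, here is a tiny
left-to-right (shift-then-reduce) multiplication sketch. It is my own
reconstruction of that general family of tricks, not necessarily the exact
system the teacher presents:

def multiply_left_to_right(x, y):
    # Keep one running total: shift it (x10) for each new digit of x,
    # then reduce by adding that digit's partial product.
    total = 0
    for digit in str(x):          # most significant digit first
        total = total * 10 + int(digit) * y
    return total

print(multiply_left_to_right(87, 96))   # 8352, held in the head as 768 -> 8352

The appeal for mental work is that you only ever hold one running number in
your head at a time.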


Cheers,

Brad


Vladimir Nesov wrote:

Interesting: is it possible to train yourself to run a specially
designed nontrivial inference circuit based on low-base
transformations (e.g. binary)? You start by assigning unique symbols
to its nodes, train yourself to stably perform associations
implementing its junctions, and then assemble it all together by
training yourself to generate a problem as a temporal sequence
(request), so that it can be handled by the overall circuit, and
training to read out the answer and convert it to sequence of e.g.
base-10 digits or base-100 words keying pairs of digits (like in
mnemonic)? Has anyone heard of this being attempted? At least the initial
steps look straightforward enough; what kind of obstacles could this kind of
experiment run into?

On Tue, Jul 1, 2008 at 7:43 AM, Linas Vepstas [EMAIL PROTECTED] wrote:

2008/6/30 Terren Suydam [EMAIL PROTECTED]:

savant

I've always theorized that savants can do what they do because
they've been able to get direct access to, and train, a fairly
small number of neurons in their brain, to accomplish highly
specialized (and thus rather unusual) calculations.

I'm thinking specifically of Ramanujan, the Indian mathematician.
He appears to have had access to a multiply-add type circuit
in his brain, and could do symbolic long division and
multiplication as a result -- I base this on studying some of
the things he came up with -- after a while, it seems to be
clear  how he came up with it (even if the feat is clearly not
reproducible).

In a sense, similar feats are possible by using a modern
computer with a good algebra system.  Simon Plouffe seems
to be a modern-day example of this: he noodles around with
his systems, and finds various interesting relationships that
would otherwise be obscure/unknown.  He does this without
any particularly deep or expansive training in math (whence
some of his friction with real academics).  If Simon could
get a computer-algebra chip implanted in his brain (i.e.
with a very, very user-friendly user-interface) so that he
could work the algebra system just by thinking about it,
I bet his output would resemble that of Ramanujan a whole
lot more than it already does -- as it were, he's hobbled by
a crappy user interface.

Thus, let me theorize: by studying savants with MRI and
what-not, we may find a way of getting a much better
man-machine interface.  That is, currently, electrodes
are always implanted in motor neurons (or visual cortex, etc)
i.e. in places of the brain with very low levels of abstraction
from the real word. It would be interesting to move up the
level of abstraction, and I think that studying how savants
access the magic circuits in thier brain will open up a
method for high-level interfaces to external computing
machinery.

--linas




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread William Pearson
2008/6/30 Terren Suydam [EMAIL PROTECTED]:

 Hi Will,

 --- On Mon, 6/30/08, William Pearson [EMAIL PROTECTED] wrote:
 The only way to talk coherently about purpose within
 the computation is to simulate self-organized, embodied
 systems.

 I don't think you are quite getting my system. If you
 had a bunch of
 programs that did the following

 1) created new programs, by trial and error and taking
 statistics of
 variables or getting arbitrary code from the outside.
 2) communicated with each other to try and find programs
 that perform
 services they need.
 3) Bid for computer resources, if a program loses its
 memory resources
 it is selected against, in a way.

 Would this be sufficiently self-organised? If not, why not?
 And the
 computer programs would be as embodied as your virtual
 creatures. They
 would just be embodied within a tacit economy, rather than
 an
 artificial chemistry.

 It boils down to your answer to the question: how are the resources 
 ultimately allocated to the programs?  If you're the one specifying it, via 
 some heuristic or rule, then the purpose is driven by you. If resource 
 allocation is handled by some self-organizing method (this wasn't clear in 
 the article you provided), then I'd say that the system's purpose is 
 self-defined.

I'm not sure how the system qualifies. It seems to be halfway between
the two definitions you gave. The programs can have special
instructions that bid for a specific resource with as much credit
as they want (see my recent message replying to Vladimir Nesov for
more information about banks, bidding and credit). The instructions
can be removed or left unexecuted, and the amount of credit bid can be changed.
The credit is given to some programs by a fixed function, but they
have instructions they can execute (or not) to give it to other
programs, forming an economy. What say you, self-organised or not?
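As a very rough sketch of how those pieces might look (the opcode names, the
shapes and the reward rule are placeholders of mine, not the actual VM design):

from dataclasses import dataclass

@dataclass
class Bid:              # bid some credit for a named resource (memory, a time slot, ...)
    resource: str
    amount: float

@dataclass
class GiveCredit:       # voluntarily pass credit to another program, forming the economy
    to_program: str
    amount: float

def fixed_reward(program_id, supervisor_signal):
    # The only point where credit enters the system: a fixed function of the
    # human supervisor's feedback, paid to a designated set of programs.
    return 10.0 * supervisor_signal if program_id in ("io_root",) else 0.0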

 As for embodiment, my question is, how do your programs receive input?  
 Embodiment, as I define it, requires that inputs are merely reflections of 
 state variables, and not even labeled in any way... i.e. we can't pre-define 
 ontologies. The embodied entity starts from the most unstructured state 
 possible and self-structures whatever inputs it receives.

Bits and bytes from the outside world, or bits and bytes from reading
other programs' programming and data. No particular ontology.

 That said, you may very well be doing that and be creating embodied programs 
 in this way... if so, that's cool because I hadn't considered that 
 possibility and I'll be interested to see how you fare.

It is going to take a while; virtual machine writing is very
unrewarding programming. I have other things to do right now, so I'll get
back to the rest of the message in a bit.

  Will Pearson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread Mike Tintner
Terren:It's to make the larger point that we may be so immersed in our own 
conceptualizations of intelligence - particularly because we live in our 
models and draw on our own experience and introspection to elaborate them - 
that we may have tunnel vision about the possibilities for better or 
different models. Or, we may take for granted huge swaths of what makes us 
so smart, because it's so familiar, or below the radar of our conscious 
awareness, that it doesn't even occur to us to reflect on it.


No 2 is more relevant - AI-ers don't seem to introspect much. It's an irony 
that the way AI-ers think when creating a program bears v. little 
resemblance to the way programmed computers think. (Matt started to broach 
this when he talked a while back of computer programming as an art). But 
AI-ers seem to have no interest in the discrepancy - which again is ironic, 
because analysing it would surely help them with their programming as well 
as the small matter of understanding how general intelligence actually 
works.


In fact - I just looked - there is a longstanding field of psychology of 
programming. But it seems to share the deficiency of psychology and 
cognitive science generally, which is: no study of the 
stream of conscious thought, especially conscious problem-solving. The only 
AI figure I know of who took some interest here was Herbert Simon, who 
helped establish the use of verbal protocols.







Re: Savants and user-interfaces [was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread Linas Vepstas
2008/7/1 Vladimir Nesov [EMAIL PROTECTED]:
 On Tue, Jul 1, 2008 at 10:02 AM, Linas Vepstas [EMAIL PROTECTED] wrote:

 What are you trying to accomplish here? I don't see where
 you are trying to go with this.

 I don't think a human can consciously train one or two neurons
 to do something, we train millions at a time. -- I'm guessing
 savants only employ a few tens of million neurons (give or take a
 few orders of magnitude) -- to do their stuff.

 Still, an array of 1K by 1K electrodes is well within current
 technology, we just don't know where to hook this up to,
 with the exception of simple motor areas, retina, and bit
 of the auditory circuits.


 Certainly nothing to do with individual neurons. Basically, it's
 possible to train a finite state automaton in the mind through
 association. You see a certain combination of properties, you think
 the symbol that describes this combination. If such automaton is
 trained not just to handle natural data (such as language), but to a
 specifically designed circuit plan, it'll probably be possible to use
 it as a directly accessible 'add-on' to the brain that implements
 specific simple function efficiently, such as some operation with
 numbers using a clever algorithm in a way alien to normal deliberative
 learning. You don't learn to perform a task, but to execute individual
 steps of an algorithm that performs a task.

Yes, but isn't the interesting case in the other direction?
We have ordinary computers that can already do quite
well computationally. What we *don't* have a a good
man-machine interface.  For example, modern disk drives
hold more bytes than the human mind can.  I don't want
to train myself for feats of memorization, I want automatic
and instant access to a disk drive.

So, perhaps by studying savants who are capable of
memorization feats, perhaps we can find the sort of neural
circuitry needed to interface to a disk drive. It is, perhaps
because savants have these unusual abilities, that it sheds
light on the kind of wiring that would be needed for electrodes.

--linas




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread Terren Suydam

Will,

I think the original issue was about purpose. In your system, since a human is 
the one determining which programs are performing the best, the purpose is 
defined in the mind of the human. Beyond that, it certainly sounds as if it is 
a self-organizing system. 

Terren

--- On Tue, 7/1/08, William Pearson [EMAIL PROTECTED] wrote:
 I'm not sure how the system qualifies. It seems to be
 half way between
 the two definitions you gave. The programs can have special
 instructions in that bid for a specific resource with as
 much credit
 as they want (see my recent message replying to Vladimir
 Nesov for
 more information about banks, bidding and credit). The
 instructions
 can be removed or not done, the amount of credit bid can be
 changed.
 The credit is given to some programs by a fixed function,
 but they
 have instructions they can execute (or not) to give it to
 other
 programs forming an economy. What say you, self-organised
 or not?




  




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread Terren Suydam

Hi Mike,

My points about the pitfalls of theorizing about intelligence apply to any and 
all humans who would attempt it - meaning, it's not necessary to characterize 
AI folks in one way or another. There are any number of aspects of intelligence 
we could highlight that pose a challenge to orthodox models of intelligence, 
but the bigger point is that there are fundamental limits to the ability of an 
intelligence to observe itself, in exactly the same way that an eye cannot see 
itself. 

Consciousness and intelligence are present in every possible act of 
contemplation, so it is impossible to gain a vantage point of intelligence from 
outside of it. And that's exactly what we pretend to do when we conceptualize 
it within an artificial construct. This is the principal conceit of AI: that we 
can understand intelligence in an objective way, and model it well enough to 
reproduce it by design.

Terren

--- On Tue, 7/1/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Terren:It's to make the larger point that we may be so
 immersed in our own 
 conceptualizations of intelligence - particularly because
 we live in our 
 models and draw on our own experience and introspection to
 elaborate them - 
 that we may have tunnel vision about the possibilities for
 better or 
 different models. Or, we may take for granted huge swaths
 of what makes us 
 so smart, because it's so familiar, or below the radar
 of our conscious 
 awareness, that it doesn't even occur to us to reflect
 on it.
 
 No 2 is more relevant - AI-ers don't seem to introspect
 much. It's an irony 
 that the way AI-ers think when creating a program bears v.
 little 
 resemblance to the way programmed computers think. (Matt
 started to broach 
 this when he talked a while back of computer programming as
 an art). But 
 AI-ers seem to have no interest in the discrepancy - which
 again is ironic, 
 because analysing it would surely help them with their
 programming as well 
 as the small matter of understanding how general
 intelligence actually 
 works.
 
 In fact  - I just looked - there is a longstanding field on
 psychology of 
 programming. But it seems to share the deficiency of
 psychology and 
 cognitive science generally which is : no study of the 
 stream-of-conscious-thought, especially conscious
 problemsolving. The only 
 AI figure I know who did take some interest here was
 Herbert Simon who 
 helped establish the use of verbal protocols.
 
 
 
 


  




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Terren Suydam

Hi Ben,

I don't think the flaw you have identified matters to the main thrust of 
Richard's argument - and if you haven't summarized Richard's position 
precisely, you have summarized mine. :-]

You're saying the flaw in that position is that prediction of complex networks 
might merely be a matter of computational difficulty, rather than fundamental 
intractability. But any formally defined complex system is going to be 
computable in principle. We can always predict such a system with infinite 
computing power. That doesn't make it tractable, or open to understanding, 
because obviously real understanding can't be dependent on infinite computing 
power.

The question of fundamental intractability comes down to the degree to which 
we can make predictions about the global level from the local. And let's hope 
there's progress to be made there, because each discovery will make life 
easier for those of us who would try to understand something like the brain or 
the body or even just the cell. Or even just folding proteins!

But it seems pretty obvious to me anyway that we will never be able to predict 
the weather with any precision without doing an awful lot of computation. 

And what is our mind but the weather in our brains?

Terren

--- On Sun, 6/29/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 From: Ben Goertzel [EMAIL PROTECTED]
 Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN 
 AGI
 To: agi@v2.listbox.com
 Date: Sunday, June 29, 2008, 10:44 PM
 Richard,
 
 I think that it would be possible to formalize your
 complex systems argument
 mathematically, but I don't have time to do so right
 now.
 
  Or, then again . perhaps I am wrong:  maybe you
 really *cannot*
  understand anything except math?
 
 It's not the case that I can only understand math --
 however, I have a
 lot of respect
 for the power of math to clarify disagreements.  Without
 math, arguments often
 proceed in a confused way because different people are
 defining terms
 differently,
 and don't realize it.
 
 But, I agree math is not the only kind of rigor.  I would
 be happy
 with a very careful,
 systematic exposition of your argument along the lines of
 Spinoza or the early
 Wittgenstein.  Their arguments were not mathematical, but
 were very rigorous
 and precisely drawn -- not slippery.
 
  Perhaps you have no idea what the actual
  argument is, and that has been the problem all along? 
 I notice that you
  avoided answering my request that you summarize your
 argument against the
  complex systems problem ... perhaps you are just
 confused about what the
  argument actually is, and have been confused right
 from the beginning?
 
 In a nutshell, it seems you are arguing that general
 intelligence is
 fundamentally founded
 on emergent properties of complex systems, and that
 it's not possible for us to
 figure out analytically how these emergent properties
 emerge from the
 lower-level structures
 and dynamics of the complex systems involved.   Evolution,
 you
 suggest, figured out
 some complex systems that give rise to the appropriate
 emergent
 properties to produce
 general intelligence.  But evolution did not do this
 figuring-out in
 an analytical way, rather
 via its own special sort of directed trial and
 error.   You suggest
 that to create a generally
 intelligent system, we should create a software framework
 that makes
 it very easy to
 experiment with  different sorts of complex systems, so
 that we can
 then figure out
 (via some combination of experiment, analysis, intuition,
 theory,
 etc.) how to create a
 complex system that gives rise to the emergent properties
 associated
 with general
 intelligence.
 
 I'm sure the above is not exactly how you'd phrase
 your argument --
 and it doesn't
 capture all the nuances -- but I was trying to give a
 compact and approximate
 formulation.   If you'd like to give an alternative,
 equally compact
 formulation, that
 would be great.
 
 I think the flaw of your argument lies in your definition
 of
 complexity, and that this
 would be revealed if you formalized your argument more
 fully.  I think
 you define
 complexity as a kind of fundamental
 irreducibility that the human
 brain does not possess,
 and that engineered AGI systems need not possess.  I think
 that real
 systems display
 complexity which makes it **computationally difficult** to
 explain
 their emergent properties
 in terms of their lower-level structures and dynamics, but
 not as
 fundamentally intractable
 as you presume.
 
 But because you don't formalize your notion of
 complexity adequately,
 it's not possible
 to engage you in rational argumentation regarding the deep
 flaw at the
 center of your
 argument.
 
 However, I cannot prove rigorously that the brain is NOT
 complex in
 the overly strong
 sense you  allude it is ... and nor can I prove rigorously
 that a
 design like Novamente Cognition
 Engine or OpenCog Prime will give rise to the emergent
 properties

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Ben Goertzel
But, we don't need to be able to predict the thoughts of an AGI system
in detail, to be able to architect an AGI system that has thoughts...

I agree that predicting the thoughts of an AGI system in detail is
going to be pragmatically impossible ... but I don't agree that
predicting **which** AGI designs can lead to the emergent properties
corresponding to general intelligence, is pragmatically impossible to
do in an analytical and rational way ...

Similarly, I could engineer an artificial weather system displaying
hurricanes, whirlpools, or whatever phenomena you ask me for -- based
on my general understanding of the Navier-Stokes equation.   Even
though I could not, then, predict the specific dynamics of those
hurricanes, whirlpools, etc.

We lack the equivalent of the Navier-Stokes equation for thoughts.
But we can still arrive at reasonable analytic understandings of
appropriately constrained and formalised AGI designs, with the power
to achieve general intelligence...

ben g

On Mon, Jun 30, 2008 at 1:55 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 Hi Ben,

 I don't think the flaw you have identified matters to the main thrust of 
 Richard's argument - and if you haven't summarized Richard's position 
 precisely, you have summarized mine. :-]

 You're saying the flaw in that position is that prediction of complex 
 networks might merely be a matter of computational difficulty, rather than 
 fundamental intractability. But any formally defined complex system is 
 going to be computable in principle. We can always predict such a system with 
 infinite computing power. That doesn't make it tractable, or open to 
 understanding, because obviously real understanding can't depend on 
 infinite computing power.

 The question of fundamental intractability comes down to the degree to which 
 we can make predictions about the global level from the local. And 
 let's hope there's progress to be made there, because each discovery will make 
 our lives easier for those of us who would try to understand something like 
 the brain or the body or even just the cell. Or even just folding proteins!

 But it seems pretty obvious to me anyway that we will never be able to 
 predict the weather with any precision without doing an awful lot of 
 computation.

 And what is our mind but the weather in our brains?

 Terren

 --- On Sun, 6/29/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 From: Ben Goertzel [EMAIL PROTECTED]
 Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE 
 IN AGI
 To: agi@v2.listbox.com
 Date: Sunday, June 29, 2008, 10:44 PM
 Richard,

 I think that it would be possible to formalize your
 complex systems argument
 mathematically, but I don't have time to do so right
 now.

  Or, then again . perhaps I am wrong:  maybe you
 really *cannot*
  understand anything except math?

 It's not the case that I can only understand math --
 however, I have a
 lot of respect
 for the power of math to clarify disagreements.  Without
 math, arguments often
 proceed in a confused way because different people are
 defining terms
 differently,
 and don't realize it.

 But, I agree math is not the only kind of rigor.  I would
 be happy
 with a very careful,
 systematic exposition of your argument along the lines of
 Spinoza or the early
 Wittgenstein.  Their arguments were not mathematical, but
 were very rigorous
 and precisely drawn -- not slippery.

  Perhaps you have no idea what the actual
  argument is, and that has been the problem all along?
 I notice that you
  avoided answering my request that you summarize your
 argument against the
  complex systems problem ... perhaps you are just
 confused about what the
  argument actually is, and have been confused right
 from the beginning?

 In a nutshell, it seems you are arguing that general
 intelligence is
 fundamentally founded
 on emergent properties of complex systems, and that
 it's not possible for us to
 figure out analytically how these emergent properties
 emerge from the
 lower-level structures
 and dynamics of the complex systems involved.   Evolution,
 you
 suggest, figured out
 some complex systems that give rise to the appropriate
 emergent
 properties to produce
 general intelligence.  But evolution did not do this
 figuring-out in
 an analytical way, rather
 via its own special sort of directed trial and
 error.   You suggest
 that to create a generally
 intelligent system, we should create a software framework
 that makes
 it very easy to
 experiment with  different sorts of complex systems, so
 that we can
 then figure out
 (via some combination of experiment, analysis, intuition,
 theory,
 etc.) how to create a
 complex system that gives rise to the emergent properties
 associated
 with general
 intelligence.

 I'm sure the above is not exactly how you'd phrase
 your argument --
 and it doesn't
 capture all the nuances -- but I was trying to give a
 compact and approximate
 formulation.   If you'd like to give

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Lukasz Stafiniak
On Mon, Jun 30, 2008 at 8:07 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 By the way, just wanted to point out a beautifully simple example - perhaps 
 the simplest - of an irreducibility in complex systems.

 Individual molecular interactions are symmetric in time: they work the same 
 forwards and backwards. Yet diffusion, which is nothing more than the 
 aggregate of molecular interactions, is asymmetric. Figure that one out.

This is just statistical mechanics. The interesting thing is that we
make an opportunistic assumption, that any colliding particles are
independent before collision (this introduces the time arrow), which
is then empirically confirmed by the fact that derived properties
agree with the phenomenological theory of entropy.
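A minimal sketch of that aggregate-level one-wayness (my own illustration, not
part of the statistical-mechanics formalism): an unbiased random walk whose
step rule is its own time-reverse, yet whose ensemble spread only ever grows.

# Sketch: time-symmetric micro-steps, one-way macroscopic spreading.
# Each particle takes +1/-1 steps with equal probability (the step rule is
# its own time-reverse), yet the spread of the ensemble only grows.
import random
import statistics

def diffuse(num_particles=10_000, num_steps=100, seed=0):
    rng = random.Random(seed)
    positions = [0] * num_particles          # all particles start together
    spreads = []
    for _ in range(num_steps):
        positions = [x + rng.choice((-1, 1)) for x in positions]
        spreads.append(statistics.pstdev(positions))
    return spreads

if __name__ == "__main__":
    spreads = diffuse()
    for t in (1, 10, 50, 100):
        print(f"step {t:3d}: spread ~ {spreads[t - 1]:.2f}")
    # The spread grows roughly like sqrt(t): an arrow of time at the
    # aggregate level, built out of reversible individual steps.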




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Terren Suydam

--- On Mon, 6/30/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 but I don't agree that predicting **which** AGI designs can lead
 to the emergent properties corresponding to general intelligence,
 is pragmatically impossible to do in an analytical and rational way ...

OK, I grant you that you may be able to do that. I believe that we can be 
extremely clever in this regard. An example of that is an implementation of a 
Turing Machine within the Game of Life:

http://rendell-attic.org/gol/tm.htm

What a beautiful construction. But it's completely contrived. What you're 
suggesting is equivalent, because your design is contrived by your own 
intelligence. [I understand that within the Novamente idea is room for 
non-deterministic (for practical purposes) behavior, so it doesn't suffer from 
the usual complexity-inspired criticisms of purely logical systems.]
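For concreteness, here is a minimal sketch of the substrate that construction 
lives in (assuming the standard B3/S23 Life rules and a sparse-set 
representation; this is just the cellular automaton, not the Turing-machine 
pattern itself):

# Sketch: Conway's Game of Life (B3/S23) on a sparse set of live cells.
# The Turing-machine construction linked above is built from nothing more
# than repeated application of this rule.
from itertools import product

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    neighbour_counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                neighbour_counts[cell] = neighbour_counts.get(cell, 0) + 1
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

if __name__ == "__main__":
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    cells = glider
    for generation in range(4):
        print(f"gen {generation}: {sorted(cells)}")
        cells = step(cells)
    # After four generations the glider reappears translated diagonally
    # by one cell.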

But whatever achievement you make, it's just one particular design that may 
prove effective in some set of domains. And there's the rub - the fact that 
your design is at least partially static will limit its applicability in some 
set of domains. I make this argument more completely here:

http://www.machineslikeus.com/cms/news/design-bad-or-why-artificial-intelligence-needs-artificial-life
or http://tinyurl.com/3coavb

If you design a robot, you limit its degrees of freedom. And there will be 
environments it cannot get around in. By contrast, if you have a design that is 
capable of changing itself (even if that means from generation to generation), 
then creative configurations can be discovered. The same basic idea works in 
the mental arena as well. If you specify the mental machinery, there will be 
environments it cannot get around in, so to speak. There will be important ways 
in which it is unable to adapt. You are limiting your design by your own 
intelligence, which though considerable, is no match for the creativity 
manifest in a single biological cell.

Terren


  




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Ben Goertzel
I agree that all designed systems have limitations, but I also suggest
that all evolved systems have limitations.

This is just the no free lunch theorem -- in order to perform better
than random search at certain optimization tasks, a system needs to
have some biases built in, and these biases will cause it to work
WORSE than random search on some other optimization tasks.
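A toy sketch of that trade-off, using two hypothetical objective functions of
my own choosing: a searcher biased toward clearing bits beats blind random
sampling when the target bit-string is all zeros, and loses to it when the
target is all ones.

# Sketch: the "no free lunch" trade-off on two toy tasks.
# The biased searcher assumes good solutions contain few 1-bits, so it only
# ever proposes turning bits off.  That bias helps on the all-zeros target
# and hurts on the all-ones target; blind random search is indifferent.
import random

N_BITS, BUDGET = 40, 200

def score(bits, target):
    return sum(b == t for b, t in zip(bits, target))

def random_search(target, rng):
    return max(score([rng.randint(0, 1) for _ in range(N_BITS)], target)
               for _ in range(BUDGET))

def biased_search(target, rng):
    bits = [rng.randint(0, 1) for _ in range(N_BITS)]
    best = score(bits, target)
    for _ in range(BUDGET):
        ones = [i for i, b in enumerate(bits) if b == 1]
        if not ones:
            break
        bits[rng.choice(ones)] = 0        # the built-in bias: only clear bits
        best = max(best, score(bits, target))
    return best

if __name__ == "__main__":
    rng = random.Random(0)
    for name, target in [("all zeros", [0] * N_BITS), ("all ones", [1] * N_BITS)]:
        print(name,
              "biased:", biased_search(target, rng),
              "random:", random_search(target, rng))

The only difference between the two searchers is the built-in bias; which one
looks smarter depends entirely on which task you score them on.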

No AGI based on finite resources will ever be **truly** general, be it
an engineered or an evolved system.

Evolved systems are far from being beyond running into dead ends ...
their adaptability is far from infinite ... the evolutionary process
itself may be endlessly creative, but in that sense so may be the
self-modifying process of an engineered AGI ...

-- Ben G

On Mon, Jun 30, 2008 at 3:17 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 --- On Mon, 6/30/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 but I don't agree that predicting **which** AGI designs can lead
 to the emergent properties corresponding to general intelligence,
 is pragmatically impossible to do in an analytical and rational way ...

 OK, I grant you that you may be able to do that. I believe that we can be 
 extremely clever in this regard. An example of that is an implementation of a 
 Turing Machine within the Game of Life:

 http://rendell-attic.org/gol/tm.htm

 What a beautiful construction. But it's completely contrived. What you're 
 suggesting is equivalent, because your design is contrived by your own 
 intelligence. [I understand that within the Novamente idea is room for 
 non-deterministic (for practical purposes) behavior, so it doesn't suffer 
 from the usual complexity-inspired criticisms of purely logical systems.]

 But whatever achievement you make, it's just one particular design that may 
 prove effective in some set of domains. And there's the rub - the fact that 
 your design is at least partially static will limit its applicability in some 
 set of domains. I make this argument more completely here:

 http://www.machineslikeus.com/cms/news/design-bad-or-why-artificial-intelligence-needs-artificial-life
 or http://tinyurl.com/3coavb

 If you design a robot, you limit its degrees of freedom. And there will be 
 environments it cannot get around in. By contrast, if you have a design that 
 is capable of changing itself (even if that means from generation to 
 generation), then creative configurations can be discovered. The same basic 
 idea works in the mental arena as well. If you specify the mental machinery, 
 there will be environments it cannot get around in, so to speak. There will 
 be important ways in which it is unable to adapt. You are limiting your 
 design by your own intelligence, which though considerable, is no match for 
 the creativity manifest in a single biological cell.

 Terren









-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be
first overcome  - Dr Samuel Johnson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Lukasz Stafiniak
On Mon, Jun 30, 2008 at 8:07 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 By the way, just wanted to point out a beautifully simple example - perhaps 
 the simplest - of an irreducibility in complex systems.

 Individual molecular interactions are symmetric in time, they work the same 
 forwards and backwards. Yet diffusion, which is nothing more than the 
 aggregate of molecular interactions, is asymmetric. Figure that one out.

This is just statistical mechanics. The interesting thing is that we
make an opportunistic assumption, that any colliding particles are
independent before collision (this introduces the time arrow), which
is then empirically confirmed by the fact that derived properties
agree with the phenomenological theory of entropy.

P.S. The biggest issue that spoiled my joy of reading Permutation
City is that you cannot simulate dynamic systems ( = solve
differential equations numerically) out of order: you need to know
time t to compute time t+1 (or, alternatively, you need to know
t+2). The same goes for space, I presume: you need to know x-1, x, x+1
to compute the next-step x.
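A minimal sketch of that ordering constraint (explicit Euler on dx/dt = -x,
an example of my own choosing): each state is computed from the previous one,
so the trajectory cannot be evaluated out of order.

# Sketch: explicit Euler integration of dx/dt = -x.
# Each state is a function of the previous one, so the loop has to visit
# the time steps in order; there is no way to jump straight to step 50
# without first computing steps 0..49.
def euler(f, x0, dt, n_steps):
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] + dt * f(xs[-1]))   # x[t+1] depends only on x[t]
    return xs

if __name__ == "__main__":
    traj = euler(lambda x: -x, x0=1.0, dt=0.1, n_steps=50)
    # Euler's approximation at t = 5.0 versus the exact value e^-5 ~ 0.0067.
    print(f"x(5.0) ~ {traj[-1]:.4f}")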




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Russell Wallace
On Mon, Jun 30, 2008 at 8:31 AM, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
 P.S. The biggest issue that spoiled my joy of reading Permutation
 City is that you cannot simulate dynamic systems ( = solve
 numerically differential equations) out-of-order, you need to know
 time t to compute time t+1 (or, alternatively, you need to know
 t+2)

Yes...

 the same goes for space, I presume you need to know x-1,x,x+1
 to compute the next-step x.

No, x+1 is not a function of x. That's the _definition_ of time: a
dimension in which t+1 is a function of t.




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Terren Suydam

Ben,

I agree, an evolved design has limits too, but the key difference between a 
contrived design and one that is allowed to evolve is that the evolved 
critter's intelligence is grounded in the context of its own 'experience', 
whereas the contrived one's intelligence is grounded in the experience of its 
creator, and subject to the limitations built into that conception of 
intelligence. For example, we really have no idea how we arrive at spontaneous 
insights (in the shower, for example). A chess master suddenly sees the 
game-winning move. We can be fairly certain that often, these insights are not 
the product of logical analysis. So if our conception of intelligence fails to 
explain these important aspects, our designs based on those conceptions will 
fail to exhibit them. An evolved intelligence, on the other hand, is not 
limited in this way, and has the potential to exhibit intelligence in ways 
we're not capable of comprehending.

[btw, I'm using the scare quotes around the word experience as it applies to 
AGI because it's a controversial word and I hope to convey the basic idea about 
experience without getting into technical details about it. I can get into 
that, if anyone thinks it necessary, just didn't want to get bogged down.]

Furthermore, there are deeper epistemological issues with the difference 
between design and self-organization that get into the notion of autonomy as 
well (i.e., designs lack autonomy to the degree they are specified), but I'll 
save that for when I feel like putting everyone to sleep :-]

Terren

PS. As an aside, I believe spontaneous insight is likely to be an example of 
self-organized criticality, which is a description of the behavior of 
earthquakes, avalanches, and the punctuated equilibrium model of evolution. 
Which is to say, a sudden insight is like an avalanche of mental 
transformations, triggered by some minor event but the result of a build-up of 
dynamic tension. Self-organized criticality is 
explained by the late Per Bak in _How Nature Works_, a short, excellent read 
and a brilliant example of scientific and mathematical progress in the realm 
of complexity. 
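A minimal sketch of the sandpile model Bak uses to illustrate self-organized
criticality (the Bak-Tang-Wiesenfeld rule; the grid size and number of drops
here are arbitrary choices of mine): single grains trigger avalanches ranging
from trivial to ones that sweep much of the grid.

# Sketch: the Bak-Tang-Wiesenfeld sandpile.  Drop one grain at a time; a
# site holding 4 or more grains topples, giving one grain to each
# neighbour; grains falling off the edge are lost.  The avalanche set off
# by a single grain can be tiny or enormous.
import random

def drop_grain(grid, size, rng):
    x, y = rng.randrange(size), rng.randrange(size)
    grid[x][y] += 1
    toppled = 0
    unstable = [(x, y)] if grid[x][y] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        toppled += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < size and 0 <= nj < size:
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
        if grid[i][j] >= 4:
            unstable.append((i, j))
    return toppled

if __name__ == "__main__":
    size, rng = 20, random.Random(1)
    grid = [[0] * size for _ in range(size)]
    sizes = [drop_grain(grid, size, rng) for _ in range(20_000)]
    for bound in (1, 10, 100, 1000):
        print(f"avalanches of size >= {bound:4d}: {sum(s >= bound for s in sizes)}")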

--- On Mon, 6/30/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 I agree that all designed systems have limitations, but I
 also suggest
 that all evolved systems have limitations.
 
 This is just the no free lunch theorem -- in
 order to perform better
 than random search at certain optimization tasks, a system
 needs to
 have some biases built in, and these biases will cause it
 to work
 WORSE than random search on some other optimization tasks.
 
 No AGI based on finite resources will ever be **truly**
 general, be it
 an engineered or evolved systems
 
 Evolved systems are far from being beyond running into dead
 ends ...
 their adaptability is far from infinite ... the
 evolutionary process
 itself may be endlessly creative, but in that sense so may
 be the
 self-modifying process of an engineered AGI ...
 
 -- Ben G
 
 On Mon, Jun 30, 2008 at 3:17 AM, Terren Suydam
 [EMAIL PROTECTED] wrote:
 
  --- On Mon, 6/30/08, Ben Goertzel
 [EMAIL PROTECTED] wrote:
  but I don't agree that predicting **which**
 AGI designs can lead
  to the emergent properties corresponding to
 general intelligence,
  is pragmatically impossible to do in an analytical
 and rational way ...
 
  OK, I grant you that you may be able to do that. I
 believe that we can be extremely clever in this regard. An
 example of that is an implementation of a Turing Machine
 within the Game of Life:
 
  http://rendell-attic.org/gol/tm.htm
 
  What a beautiful construction. But it's completely
 contrived. What you're suggesting is equivalent, because
 your design is contrived by your own intelligence. [I
 understand that within the Novamente idea is room for
 non-deterministic (for practical purposes) behavior, so it
 doesn't suffer from the usual complexity-inspired
 criticisms of purely logical systems.]
 
  But whatever achievement you make, it's just one
 particular design that may prove effective in some set of
 domains. And there's the rub - the fact that your
 design is at least partially static will limit its
 applicability in some set of domains. I make this argument
 more completely here:
 
 
 http://www.machineslikeus.com/cms/news/design-bad-or-why-artificial-intelligence-needs-artificial-life
  or http://tinyurl.com/3coavb
 
  If you design a robot, you limit its degrees of
 freedom. And there will be environments it cannot get
 around in. By contrast, if you have a design that is
 capable of changing itself (even if that means from
 generation to generation), then creative configurations can
 be discovered. The same basic idea works in the mental arena
 as well. If you specify the mental machinery, there will be
 environments it cannot get around in, so to speak. There
 will be important ways in which it is unable to adapt. You
 are limiting your design by your own intelligence, which
 though considerable, is 

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Terren Suydam

As far as I can tell, all you've done is give the irreducibility a name: 
statistical mechanics. You haven't explained how the arrow of time emerges 
in going from the local level to the global. Or, maybe I just don't understand it... can 
you dumb it down for me?

Terren

--- On Mon, 6/30/08, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:
 
  By the way, just wanted to point out a beautifully
 simple example - perhaps the simplest - of an
 irreducibility in complex systems.
 
  Individual molecular interactions are symmetric in
 time, they work the same forwards and backwards. Yet
 diffusion, which is nothing more than the aggregate of
 molecular interactions, is asymmetric. Figure that one out.
 
 This is just statistical mechanics. The interesting thing
 is that we
 make an opportunistic assumption, that any colliding
 particles are
 independent before collision (this introduces the time
 arrow), which
 is then empirically confirmed by the fact that
 derived properties
 agree with the phenomenological theory of
 entropy.
 
 P.S. The biggest issue that spoiled my joy of reading
 Permutation
 City is that you cannot simulate dynamic systems ( =
 solve
 numerically differential equations) out-of-order, you need
 to know
 time t to compute time t+1 (or,
 alternatively, you need to know
 t+2), the same goes for space, I presume you
 need to know x-1,x,x+1
 to compute the next-step x.
 
 


  




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread William Pearson
2008/6/30 Terren Suydam [EMAIL PROTECTED]:

 Ben,

 I agree, an evolved design has limits too, but the key difference between a 
 contrived design and one that is allowed to evolve is that the evolved
 critter's intelligence is grounded in the context of its own 'experience', 
 whereas the contrived one's intelligence is grounded in the experience of its
 creator, and subject to the limitations built into that conception of 
 intelligence. For example, we really have no idea how we arrive at spontaneous
 insights (in the shower, for example). A chess master suddenly sees the 
 game-winning move. We can be fairly certain that often, these insights are not
 the product of logical analysis. So if our conception of intelligence fails 
 to explain these important aspects, our designs based on those conceptions 
 will
 fail to exhibit them. An evolved intelligence, on the other hand, is not 
 limited in this way, and has the potential to exhibit intelligence in ways 
 we're not
 capable of comprehending.

I'm seeking to do something half way between what you suggest (from
bacterial systems to human alife) and AI. I'd be curious to know
whether you think it would suffer from the same problems.

First, are we agreed that the von Neumann model of computing has no
hidden bias in its problem-solving capabilities? It might be able to
do some jobs more efficiently than others, and need lots of memory to do
others, but it is not particularly suited to learning chess or running
down a gazelle. Which means it can be reprogrammed to do either.

However, it has no guide to what it should be doing, so it can become
virus-infested or subverted. It has a purpose, but we can't explicitly
define it. So let us try to put in the most minimal guide that we can,
so we don't give it a specific goal, just a tendency to favour certain
activities or programs. How to do this? Form an economy based on
reinforcement signals, where the programs that get more reinforcement
signals can outbid the others for control of system resources.

This is obviously reminiscent of Tierra and a million and one other
alife systems. The difference is that I want the whole system to
exhibit intelligence. Any form of variation is allowed, from random
mutation to getting in programs from the outside. It should be able to
change the whole, from the OS level up, based on that variation.

I agree that we want the systems we make to be free of our design
constraints long term, that is, eventually to correct all the errors,
oversimplifications and gaps we left. But I don't see the need to go
all the way back to bacteria. Even then you would need to design the
system correctly in terms of chemical concentrations. I think both
would count as the passive approach* to helping solve the problem;
yours is just more indirect than is needed, I think.

  Will Pearson

* http://www.mail-archive.com/agi@v2.listbox.com/msg11399.html




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Vladimir Nesov
On Mon, Jun 30, 2008 at 10:34 PM, William Pearson [EMAIL PROTECTED] wrote:

 I'm seeking to do something half way between what you suggest (from
 bacterial systems to human alife) and AI. I'd be curious to know
 whether you think it would suffer from the same problems.

 First are we agreed that the von Neumann model of computing has no
 hidden bias to its problem solving capabilities. It might be able to
 do some jobs more efficiently than other and need lots of memory to do
 others but it is not particularly suited to learning chess or running
 down a gazelle. Which means it can be reprogrammed to do either.

 However it has no guide to what it should be doing, so can become
 virus infested or subverted. It has a purpose but we can't explicitly
 define it. So let us try and put in the most minimal guide that we can
 so we don't give it a specific goal, just a tendency to favour certain
 activities or programs.

It is the wrong level of organization: computing hardware is the physics
of computation; it isn't meant to implement specific algorithms, so I
don't quite see what you are arguing.


 How to do this? Form and economy based on
 reinforcement signals, those that get more reinforcement signals can
 outbid the others for control of system resources.

Where do reinforcement signals come from? What does this specification
improve over natural evolution that needed billions of years to get
here (that is, why do you expect any results in the foreseeable future)?


 This is obviously reminiscent of tierra and a million and one other
 alife system. The difference being is that I want the whole system to
 exhibit intelligence. Any form of variation is allowed, from random to
 getting in programs from the outside. It should be able to change the
 whole from the OS level up based on the variation.

What is your meaning of `intelligence'? I now see it as merely the
efficiency of an optimization process that drives the environment towards
higher utility, according to whatever criterion (reinforcement, in
your case). In this view, how does "I'll do the same, but with
intelligence" differ from "I'll do the same, but better"?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Terren Suydam

Hi William,

A Von Neumann computer is just a machine. Its only purpose is to compute. When 
you get into higher-level purpose, you have to go up a level to the stuff being 
computed. Even then, the purpose is in the mind of the programmer. The only way 
to talk coherently about purpose within the computation is to simulate 
self-organized, embodied systems.

And I applaud your intuition to make the whole system intelligent. One of my 
biggest criticisms of traditional AI philosophy is over-emphasis on the agent. 
Indeed, the ideal simulation, in my mind, is one in which the boundary between 
agent and environment is blurry.  In nature, for example, at low-enough levels 
of description it is impossible to find a boundary between the two, because the 
entities at that level are freely exchanged.

You are right that starting with bacteria is too indirect, if your goal is to 
achieve AGI in something like decades. It would certainly take an enormous 
amount of time and computation to get from there to human-level AI and beyond, 
perhaps a hundred years or more. But you're asking, aren't there shortcuts we 
can take that don't limit the field of potential intelligence in important 
ways? 

For example, starting with bacteria means we have to let multi-cellular 
organisms evolve on their own in a virtual geometry. That project alone is an 
enormous challenge. So let's skip it and go right to the multi-cellular design. 
The trouble is, our design of the multi-cellular organism is limiting. 
Alternative designs become impossible. The question at that point is, are we 
excluding any important possibilities for intelligence if we build in our 
assumptions about what is necessary to support it, on a low-level basis? In 
what ways is our designed brain leaving out some key to adapting to unforeseen 
domains?

One of the basic threads of scientific progress is the ceaseless denigration of 
the idea that there is something special about humans. Pretending that we can 
solve AGI by mimicking top-down high-level human reasoning is another example 
of that kind of hubris, and eventually, that idea will fall too. 

Terren 



--- On Mon, 6/30/08, William Pearson [EMAIL PROTECTED] wrote:

  Ben,
 
  I agree, an evolved design has limits too, but the key
 difference between a contrived design and one that is
 allowed to evolve is that the evolved
  critter's intelligence is grounded in the context
 of its own 'experience', whereas the contrived
 one's intelligence is grounded in the experience of its
  creator, and subject to the limitations built into
 that conception of intelligence. For example, we really
 have no idea how we arrive at spontaneous
  insights (in the shower, for example). A chess master
 suddenly sees the game-winning move. We can be fairly
 certain that often, these insights are not
  the product of logical analysis. So if our conception
 of intelligence fails to explain these important aspects,
 our designs based on those conceptions will
  fail to exhibit them. An evolved intelligence, on the
 other hand, is not limited in this way, and has the
 potential to exhibit intelligence in ways we're not
  capable of comprehending.
 
 I'm seeking to do something half way between what you
 suggest (from
 bacterial systems to human alife) and AI. I'd be
 curious to know
 whether you think it would suffer from the same problems.
 
 First are we agreed that the von Neumann model of computing
 has no
 hidden bias to its problem solving capabilities. It might
 be able to
 do some jobs more efficiently than other and need lots of
 memory to do
 others but it is not particularly suited to learning chess
 or running
 down a gazelle. Which means it can be reprogrammed to do
 either.
 
 However it has no guide to what it should be doing, so can
 become
 virus infested or subverted. It has a purpose but we
 can't explicitly
 define it. So let us try and put in the most minimal guide
 that we can
 so we don't give it a specific goal, just a tendency to
 favour certain
 activities or programs. How to do this? Form and economy
 based on
 reinforcement signals, those that get more reinforcement
 signals can
 outbid the others for control of system resources.
 
 This is obviously reminiscent of tierra and a million and
 one other
 alife system. The difference being is that I want the whole
 system to
 exhibit intelligence. Any form of variation is allowed,
 from random to
 getting in programs from the outside. It should be able to
 change the
 whole from the OS level up based on the variation.
 
 I agree that we want the systems we make to be free of our
 design
 constraints long term, that is eventually correct all the
 errors and
 oversimplifications or gaps we left. But I don't see
 the need to go
 all the way back to bacteria. Even then you would need to
 design the
 system correctly in terms of chemical concentrations. I
 think both
 would count as the passive approach* to helping solve the
 problem,
 yours is more 

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Mike Tintner
Terren:One of the basic threads of scientific progress is the ceaseless 
denigration of the idea that there is something special about humans


Not quite so. There is a great deal of exceptionalism in science - hence 
evolutionary psychology actually only deals with human evolution. If there 
were a true all-species evolutionary psychology, that really did look at the 
evolution of the mind through all species, you wouldn't get what will come 
to be seen as the mechanistic absurdity of trying to create an AGI starting 
at human level. To twist a favourite analogy of AGI-ers, that's like trying 
to start mechanical invention at the airplane stage, and jump the billions 
of steps from rock tools and wheels on - or trying to invent a computer 
before electricity has been discovered. 







Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread William Pearson
Hello Terren

 A Von Neumann computer is just a machine. It's only purpose is to compute.
 When you get into higher-level purpose, you have to go up a level to the 
 stuff being computed. Even then, the purpose is in the mind of the programmer.

What I don't see is why your simulation gets away from this, whereas
my architecture doesn't.  Read the linked post in the previous
message, if you want to understand more about the philosophy of the
system.

The only way to talk coherently about purpose within the computation is to 
simulate self-organized, embodied systems.

I don't think you are quite getting my system. If you had a bunch of
programs that did the following

1) created new programs, by trial and error and taking statistics of
variables or getting arbitrary code from the outside.
2) communicated with each other to try and find programs that perform
services they need.
3) bid for computer resources; if a program loses its memory resources
it is selected against, in a way.

Would this be sufficiently self-organised? If not, why not? And the
computer programs would be as embodied as your virtual creatures. They
would just be embodied within a tacit economy, rather than an
artificial chemistry.

 And I applaud your intuition to make the whole system intelligent. One of my 
 biggest criticisms of traditional AI philosophy is over-emphasis on the 
 agent. Indeed, the ideal simulation, in my mind, is one in which the boundary 
 between agent and environment is blurry.  In nature, for example, at 
 low-enough levels of description it is impossible to find a boundary between 
 the two, because the entities at that level are freely exchanged.

 You are right that starting with bacteria is too indirect, if your goal is to 
 achieve AGI in something like decades. It would certainly take an enormous 
 amount of time and computation to get from there to human-level AI and 
 beyond, perhaps a hundred years or more. But you're asking, aren't there 
 shortcuts we can take that don't limit the field of potential intelligence in 
 important ways.

If you take this attitude you would have to ask yourself whether
implementing your simulation on a classical computer is not cutting
off the ability to create intelligence. Perhaps quantum effects are
important in whether a system can produce intelligence. Protein
folding probably wouldn't be the same.

You have to simplify at some point. I'm going to have my system have
as many degrees of freedom to vary as a stored program computer (or as
near as I can make it), whilst having the internal programs
self-organise and vary in ways that would make a normal stored program
computer become unstable.  Any simulations you do on a computer cannot
have any more degrees of freedom.

 For example, starting with bacteria means we have to let multi-cellular 
 organisms evolve on their own in a virtual geometry. That project alone is an 
 enormous challenge. So let's skip it and go right to the multi-cellular 
 design. The trouble is, our design of the multi-cellular organism is 
 limiting. Alternative designs become impossible.

What do you mean by design here? Do you mean an abstract multicellular
cell model, or do you mean design as in what Tom Ray (you do know
Tierra, right? I can use this as a common language) did with his first
self-replicator, by creating an artificial genome? I can see problems
with the first in restricting degrees of freedom, but with the second the
degrees of freedom are still there to be acted on by the pressures of
variation within the system. Even though Tom Ray built a certain type
of replicator, they still managed to replicate in other ways; the one
I can remember is parasites stealing other programs' replication
machinery.

Let's say you started with an artificial chemistry. You could then
design within that chemistry a replicator, then test that replicator.
See if the variation is working okay. Then design a multicellular
variant, by changing its genome. It could still slip back to
single-cellularity and find a different way to multicellularity. The degrees
of freedom do not go away the second a human starts to design
something (else genetically modified foods would not be such a thorny
issue); you just have to allow the forces of variation to be able to
act upon them.

 The question at that point is, are we excluding any important possibilities 
 for intelligence if we build in our assumptions about what is necessary to 
 support it, on a low-level basis. In what ways is our designed brain leaving 
 out some key to adapting to unforeseen domains?

Just apply a patch :P Or have an architecture that is capable of
supporting a self-patching system. I have no fixed design for an AI
myself. Intelligence means winning, and winning requires flexibility.

 One of the basic threads of scientific progress is the ceaseless denigration 
 of the idea that there is something special about humans. Pretending that we 
 can solve AGI by mimicking top-down high-level human 

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Terren Suydam

Hi Mike,

Evidently I didn't communicate that so clearly because I agree with you 100%.

Terren

--- On Mon, 6/30/08, Mike Tintner [EMAIL PROTECTED] wrote:
 Terren:One of the basic threads of scientific progress is
 the ceaseless 
 denigration of the idea that there is something special
 about humans
 
 Not quite so. There is a great deal of exceptionalism in
 science - hence 
 evolutionary psychology actually only deals
 with human evolution. If there 
 were a true all-species evolutionary psychology, that
 really did look at the 
 evolution of the mind through all species, you wouldn't
 get what will come 
 to be seen as the mechanistic absurdity of trying to create
 an AGI starting 
 at human level. To twist a favourite analogy of AGI-ers,
 that's like trying 
 to start mechanical invention at the airplane stage, and
 jump the billions 
 of steps from rock tools and wheels on - or trying to
 invent a computer 
 before electricity has been discovered. 
 
 
 
 


  




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread William Pearson
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
 On Mon, Jun 30, 2008 at 10:34 PM, William Pearson [EMAIL PROTECTED] wrote:

 I'm seeking to do something half way between what you suggest (from
 bacterial systems to human alife) and AI. I'd be curious to know
 whether you think it would suffer from the same problems.

 First are we agreed that the von Neumann model of computing has no
 hidden bias to its problem solving capabilities. It might be able to
 do some jobs more efficiently than other and need lots of memory to do
 others but it is not particularly suited to learning chess or running
 down a gazelle. Which means it can be reprogrammed to do either.

 However it has no guide to what it should be doing, so can become
 virus infested or subverted. It has a purpose but we can't explicitly
 define it. So let us try and put in the most minimal guide that we can
 so we don't give it a specific goal, just a tendency to favour certain
 activities or programs.

 It is a wrong level of organization: computing hardware is the physics
 of computation, it isn't meant to implement specific algorithms, so I
 don't quite see what you are arguing.


I'm not implementing a specific algorithm; I am controlling how
resources are allocated. Currently the architecture does whatever the
kernel says, from memory allocation to IRQ allocation. Instead of this,
my architecture would allow any program to bid credit for a resource.
The one that bids the most wins and spends its credit. Certain
resources, like output memory space (i.e. if the program is controlling
the display or an arm or something), allow the program to specify a
bank, and give the program income.

A bank is a special variable that can't be edited by programs normally
but can be spent. The bank of an outputting program will be given
credit depending upon how well the system as a whole is performing. If
it is doing well, the amount of credit it gets will be above average;
if poorly, it will be below. After a certain time the resources will need
to be bid for again. So credit is coming into the system and
continually being sunk.

The system will be seeded with programs that can perform rudimentarily
well. E.g. you will have programs that know how to deal with visual
input, and they will bid for the video camera interrupt. They will then
sell their services for credit (so that they can bid for the interrupt
again) to a program that correlates visual and auditory responses,
which in turn sells its services to a high-level planning module, and
so on down to the arm that actually gets the credit.

All these modules are subject to change and re-evaluation. They merely
suggest one possible way for it to be used. It is supposed to be
ultimately flexible. You could seed it with a self-replicating neural
simulator that tried to hook its inputs and outputs up to other
neurons. Neurons would die out if they couldn't find anything to do.
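A minimal sketch of the kind of credit economy described above (the bidding
policy, payout rule and module names are illustrative placeholders of my own,
not a specification of the architecture):

# Sketch of a credit economy: programs bid for a single resource each
# round, the winner spends its bid, and output programs earn fresh credit
# in proportion to an external reinforcement signal.  Illustrative only.
import random

class Program:
    def __init__(self, name, credit, is_output=False):
        self.name, self.credit, self.is_output = name, credit, is_output
        self.bank = 0.0                        # spendable but not editable

    def bid(self, rng):
        # Placeholder policy: bid a random fraction of available credit.
        return rng.uniform(0.0, self.credit + self.bank)

def run_round(programs, reinforcement, rng):
    # Highest bidder wins the resource for this round and spends its bid.
    bids = {p: p.bid(rng) for p in programs}
    winner = max(bids, key=bids.get)
    spend = min(bids[winner], winner.credit + winner.bank)
    from_bank = min(spend, winner.bank)
    winner.bank -= from_bank
    winner.credit -= spend - from_bank
    # Output programs earn income scaled by how well the whole system did.
    for p in programs:
        if p.is_output:
            p.bank += reinforcement
    return winner

if __name__ == "__main__":
    rng = random.Random(0)
    programs = [Program("vision", 10.0), Program("planner", 10.0),
                Program("arm", 10.0, is_output=True)]
    for t in range(5):
        reinforcement = rng.uniform(0.0, 2.0)   # stand-in for human feedback
        winner = run_round(programs, reinforcement, rng)
        print(f"round {t}: {winner.name} wins; "
              + ", ".join(f"{p.name}={p.credit + p.bank:.1f}" for p in programs))

The only point of the sketch is the shape of the mechanism: programs spend
credit to obtain resources, and fresh credit enters only through outputs
judged by an external reinforcement signal.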

 How to do this? Form and economy based on
 reinforcement signals, those that get more reinforcement signals can
 outbid the others for control of system resources.

 Where do reinforcement signals come from? What does this specification
 improve over natural evolution that needed billions of years to get
 here (that is, why do you expect any results in the forseable future)?

Most of the internals are programmed by humans, and they can be
arbitrarily complex. The feedback comes from a human, or from a
utility function, although those are harder to define. The architecture
simply doesn't restrict the degrees of freedom that the programs
inside it can explore.


 This is obviously reminiscent of tierra and a million and one other
 alife system. The difference being is that I want the whole system to
 exhibit intelligence. Any form of variation is allowed, from random to
 getting in programs from the outside. It should be able to change the
 whole from the OS level up based on the variation.

 What is your meaning of `intelligence'? I now see it as merely the
 efficiency of optimization process that drives the environment towards
 higher utility, according to whatever criterion (reinforcement, in
 your case). In this view, how does I'll do the same, but with
 intelligence differ from I'll do the same, but better?

Terren's artificial chemistry as a whole could not be said to have a
goal. Or, to put it another way, applying the intentional stance to it
probably wouldn't help you predict what it did next. Applying the
intentional stance to my system should help you predict what
it does.

This means he needs to use a bunch more resources to get a single
useful system. Also the system might not do what he wants, but I don't
think he minds about that.

I'm allowing humans to design everything, just allowing the very low
level to vary. Is this clearer?

  Will Pearson



Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Vladimir Nesov
On Tue, Jul 1, 2008 at 1:31 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:

 It is a wrong level of organization: computing hardware is the physics
 of computation, it isn't meant to implement specific algorithms, so I
 don't quite see what you are arguing.


 I'm not implementing a specific algorithm I am controlling how
 resources are allocated. Currently architecture does whatever the
 kernel says, from memory allocation to irq allocation. Instead of this
 my architecture would allow any program to bid credit for a resource.
 The one that bids the most wins and spends its credit. Certain
 resources like output memory space, (i.e if the program is controlling
 the display or an arm or something) allow the program to specify a
 bank, and give the program income.

 A bank is a special variable that can't be edited by programs normally
 but can be spent. The bank of an outputing program  will be given
 credit depending upon how well the system as whole is performing . If
 it is doing well the amount of credit it gets would be above average,
 poorly it would be below. After a certain time the resources will need
 to be bid for again. So credit is coming into the system and
 continually being sunk.

 The system will be seeded with programs that can perform rudimentarily
 well. E.g. you will have programs that know how to deal with visual
 input and they will bid for the video camera interupt. They will then
 sell their services for credit (so that they can bid for the interrupt
 again), to a program that correlates visual and auditory responses.
 Who sell their services to a high level planning module etc, on down
 to the arm that actually gets the credit.

 All these modules are subject to change and re-evaluation. They merely
 suggest one possible way for it to be used. It is supposed to be
 ultimately flexible. You could seed it with a self-replicating neural
 simulator that tried to hook its inputs and outputs up to other
 neurons. Neurons would die out if they couldn't find anything to do.

Well, yes, you implement some functionality, but why would you
contrast it with underlying levels (hardware, OS)? Like the Java virtual
machine, your system is a platform, and it does some things not
handled by lower levels, or, in this case, by any superficially
analogous platforms.


 How to do this? Form and economy based on
 reinforcement signals, those that get more reinforcement signals can
 outbid the others for control of system resources.

 Where do reinforcement signals come from? What does this specification
 improve over natural evolution that needed billions of years to get
 here (that is, why do you expect any results in the forseable future)?

 Most of the internals are programmed by humans, and they can be
 arbitrarily complex. The feedback comes from a human, or from a
 utility function although those are harder to define. The architecture
 simply doesn't restrict the degrees of freedom that the programs
 inside it can explore.

If the internals are programmed by humans, why do you need an automatic
system to assess them? It would be useful if you needed to construct
and test some kind of combination/setting automatically, but not if
you just test manually-programmed systems. How does the assessment
platform help in improving/accelerating the research?


 This is obviously reminiscent of tierra and a million and one other
 alife system. The difference being is that I want the whole system to
 exhibit intelligence. Any form of variation is allowed, from random to
 getting in programs from the outside. It should be able to change the
 whole from the OS level up based on the variation.

 What is your meaning of `intelligence'? I now see it as merely the
 efficiency of optimization process that drives the environment towards
 higher utility, according to whatever criterion (reinforcement, in
 your case). In this view, how does I'll do the same, but with
 intelligence differ from I'll do the same, but better?

 Terren's artificial chemistry as a whole could not be said to have a
 goal. Or to put it another way applying the intentional stance to it
 probably wouldn't help you predict what it did next. Applying the
 intentional stance to what my system does should help you predict what
 it does.

What is `intentional stance'? Intentional stance of what? What is it good for?


 This means he needs to use a bunch more resources to get a singular
 useful system. Also the system might not do what he wants, but I don't
 think he minds about that.

 I'm allowing humans to design everything, just allowing the very low
 level to vary. Is this clearer?

What do you mean by varying the low level, especially in human-designed systems?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/



Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread William Pearson
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
 On Tue, Jul 1, 2008 at 1:31 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:

 It is a wrong level of organization: computing hardware is the physics
 of computation, it isn't meant to implement specific algorithms, so I
 don't quite see what you are arguing.


 I'm not implementing a specific algorithm I am controlling how
 resources are allocated. Currently architecture does whatever the
 kernel says, from memory allocation to irq allocation. Instead of this
 my architecture would allow any program to bid credit for a resource.
 The one that bids the most wins and spends its credit. Certain
 resources like output memory space, (i.e if the program is controlling
 the display or an arm or something) allow the program to specify a
 bank, and give the program income.

 A bank is a special variable that can't be edited by programs normally
 but can be spent. The bank of an outputing program  will be given
 credit depending upon how well the system as whole is performing . If
 it is doing well the amount of credit it gets would be above average,
 poorly it would be below. After a certain time the resources will need
 to be bid for again. So credit is coming into the system and
 continually being sunk.

 The system will be seeded with programs that can perform rudimentarily
 well. E.g. you will have programs that know how to deal with visual
 input and they will bid for the video camera interupt. They will then
 sell their services for credit (so that they can bid for the interrupt
 again), to a program that correlates visual and auditory responses.
 Who sell their services to a high level planning module etc, on down
 to the arm that actually gets the credit.

 All these modules are subject to change and re-evaluation. They merely
 suggest one possible way for it to be used. It is supposed to be
 ultimately flexible. You could seed it with a self-replicating neural
 simulator that tried to hook its inputs and outputs up to other
 neurons. Neurons would die out if they couldn't find anything to do.

 Well, yes, you implement some functionality, but why would you
 contrast it with underlying levels (hardware, OS)?

 Like Java virtual
 machine, your system is a platform, and it does some things not
 handled by lower levels, or, in this case, by any superficially
 analogous platforms.

Because I want it done in silicon at some stage. It is also assumed to
be the whole system, that is, with no other significant programs on it.
Machines that run Lisp natively have been made; this makes the most
sense as the whole computer, rather than as a component.


 How to do this? Form and economy based on
 reinforcement signals, those that get more reinforcement signals can
 outbid the others for control of system resources.

 Where do reinforcement signals come from? What does this specification
 improve over natural evolution that needed billions of years to get
 here (that is, why do you expect any results in the forseable future)?

 Most of the internals are programmed by humans, and they can be
 arbitrarily complex. The feedback comes from a human, or from a
 utility function although those are harder to define. The architecture
 simply doesn't restrict the degrees of freedom that the programs
 inside it can explore.

 If internals are programmed by humans, why do you need automatic
 system to assess them? It would be useful if you needed to construct
 and test some kind of combination/setting automatically, but not if
 you just test manually-programmed systems. How does the assessment
 platform help in improving/accelerating the research?


Because to be interesting, the human-specified programs need to be
autogenous, in Josh Storrs Hall's terminology, which means
self-building: capable of altering the stuff they are made of, in this
case the machine-code equivalent. So you need the human to assess the
improvements the system makes, for whatever purpose the human wants
the system to perform.

 This is obviously reminiscent of tierra and a million and one other
 alife system. The difference being is that I want the whole system to
 exhibit intelligence. Any form of variation is allowed, from random to
 getting in programs from the outside. It should be able to change the
 whole from the OS level up based on the variation.

 What is your meaning of `intelligence'? I now see it as merely the
 efficiency of optimization process that drives the environment towards
 higher utility, according to whatever criterion (reinforcement, in
 your case). In this view, how does I'll do the same, but with
 intelligence differ from I'll do the same, but better?

 Terren's artificial chemistry as a whole could not be said to have a
 goal. Or to put it another way applying the intentional stance to it
 probably wouldn't help you predict what it did next. Applying the
 intentional stance to what my system does should help you predict what
 it does.

 What is 

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Ben Goertzel
I wrote a book about the emergence of spontaneous creativity from
underlying complex dynamics.  It was published in 1997 with the title
_From Complexity to Creativity_.  Some of the material is dated but I
still believe the basic ideas make sense.  Some of the main ideas were
reviewed in _The Hidden Pattern_ (2006).  I don't have time to review
the ideas right now (I'm in an airport during a flight change doing a
quick email check) but suffice to say that I did put a lot of thought
and analysis into how spontaneous creativity emerges from complex
cognitive systems.  So have others.  It is not a total mystery, as
mysterious as the experience can seem subjectively.

-- Ben G

On Mon, Jun 30, 2008 at 1:32 PM, Terren Suydam [EMAIL PROTECTED] wrote:

 Ben,

 I agree, an evolved design has limits too, but the key difference between a 
 contrived design and one that is allowed to evolve is that the evolved 
 critter's intelligence is grounded in the context of its own 'experience', 
 whereas the contrived one's intelligence is grounded in the experience of its 
 creator, and subject to the limitations built into that conception of 
 intelligence. For example, we really have no idea how we arrive at 
 spontaneous insights (in the shower, for example). A chess master suddenly 
 sees the game-winning move. We can be fairly certain that often, these 
 insights are not the product of logical analysis. So if our conception of 
 intelligence fails to explain these important aspects, our designs based on 
 those conceptions will fail to exhibit them. An evolved intelligence, on the 
 other hand, is not limited in this way, and has the potential to exhibit 
 intelligence in ways we're not capable of comprehending.

 [btw, I'm using the scare quotes around the word 'experience' as it applies to 
 AGI because it's a controversial word and I hope to convey the basic idea 
 about experience without getting into technical details about it. I can get 
 into that, if anyone thinks it necessary, just didn't want to get bogged 
 down.]

 Furthermore, there are deeper epistemological issues with the difference 
 between design and self-organization that get into the notion of autonomy as 
 well (i.e., designs lack autonomy to the degree they are specified), but I'll 
 save that for when I feel like putting everyone to sleep :-]

 Terren

 PS. As an aside, I believe spontaneous insight is likely to be an example of 
 self-organized criticality, which is a description of the behavior of 
 earthquakes, avalanches, and the punctuated equilibrium model of evolution. 
 Which is to say, a sudden insight is like an avalanche of mental 
 transformations, triggered by some minor event but the result of a build-up 
 of dynamic tension. Self-organized criticality is
 explained by the late Per Bak in _How Nature Works_, a short, excellent read 
 and a brilliant example of scientific and mathematical progress in the realm 
 of complexity.
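
[For concreteness, the canonical model Bak used to illustrate self-organized 
criticality can be sketched in a few lines of Python. This is the textbook 
Bak-Tang-Wiesenfeld sandpile, not anything specific to the simulations discussed 
in this thread; the grid size, threshold, and avalanche cutoff are arbitrary 
choices made for the sketch.]

import random

random.seed(0)

N = 20              # grid size (arbitrary)
THRESHOLD = 4       # a site topples when it holds 4 or more grains
grid = [[0] * N for _ in range(N)]

def topple():
    # Relax the grid after a grain is added; return the avalanche size
    # (the total number of topplings). Grains pushed off the edge are lost.
    size = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(N):
            for j in range(N):
                if grid[i][j] >= THRESHOLD:
                    grid[i][j] -= 4
                    size += 1
                    unstable = True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= i + di < N and 0 <= j + dj < N:
                            grid[i + di][j + dj] += 1
    return size

avalanches = []
for _ in range(20000):
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1                 # one small, local perturbation
    avalanches.append(topple())

big = sum(1 for a in avalanches if a > 50)
print("drops:", len(avalanches), "avalanches larger than 50 topplings:", big)

Most drops change almost nothing; occasionally a single grain triggers a large 
avalanche, which is the analogy drawn above between sudden insight and a slow 
build-up of tension.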

 --- On Mon, 6/30/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 I agree that all designed systems have limitations, but I
 also suggest
 that all evolved systems have limitations.

 This is just the no free lunch theorem -- in
 order to perform better
 than random search at certain optimization tasks, a system
 needs to
 have some biases built in, and these biases will cause it
 to work
 WORSE than random search on some other optimization tasks.

 No AGI based on finite resources will ever be **truly**
 general, be it
 an engineered or an evolved system.

 Evolved systems are far from being beyond running into dead
 ends ...
 their adaptability is far from infinite ... the
 evolutionary process
 itself may be endlessly creative, but in that sense so may
 be the
 self-modifying process of an engineered AGI ...

 -- Ben G

 On Mon, Jun 30, 2008 at 3:17 AM, Terren Suydam
 [EMAIL PROTECTED] wrote:
 
  --- On Mon, 6/30/08, Ben Goertzel
 [EMAIL PROTECTED] wrote:
  but I don't agree that predicting **which**
 AGI designs can lead
  to the emergent properties corresponding to
 general intelligence,
  is pragmatically impossible to do in an analytical
 and rational way ...
 
  OK, I grant you that you may be able to do that. I
 believe that we can be extremely clever in this regard. An
 example of that is an implementation of a Turing Machine
 within the Game of Life:
 
  http://rendell-attic.org/gol/tm.htm
 
  What a beautiful construction. But it's completely
 contrived. What you're suggesting is equivalent, because
 your design is contrived by your own intelligence. [I
 understand that within the Novamente idea is room for
 non-deterministic (for practical purposes) behavior, so it
 doesn't suffer from the usual complexity-inspired
 criticisms of purely logical systems.]
 
  But whatever achievement you make, it's just one
 particular design that may prove effective in some set of
 domains. And there's the rub - the fact that your
 design is at least partially static will limit its
 applicability in some set of domains. I 

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Terren Suydam

Hi Will,

--- On Mon, 6/30/08, William Pearson [EMAIL PROTECTED] wrote:
 The only way to talk coherently about purpose within
 the computation is to simulate self-organized, embodied
 systems.
 
 I don't think you are quite getting my system. If you
 had a bunch of
 programs that did the following
 
 1) created new programs, by trial and error and taking
 statistics of
 variables or getting arbitrary code from the outside.
 2) communicated with each other to try and find programs
 that perform
 services they need.
 3) Bid for computer resources, if a program loses its
 memory resources
 it is selected against, in a way.
 
 Would this be sufficiently self-organised? If not, why not?
 And the
 computer programs would be as embodied as your virtual
 creatures. They
 would just be embodied within a tacit economy, rather than
 an
 artificial chemistry.

It boils down to your answer to the question: how are the resources ultimately 
allocated to the programs?  If you're the one specifying it, via some heuristic 
or rule, then the purpose is driven by you. If resource allocation is handled 
by some self-organizing method (this wasn't clear in the article you provided), 
then I'd say that the system's purpose is self-defined.
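
[To make that distinction concrete, here is a minimal Python sketch in the 
spirit of the system Will describes above: programs are created by variation, 
earn credit when their services are (hypothetically) useful, and bid credit for 
memory; whatever fails to win memory is selected against. Every name and number 
here, including the 'usefulness' stand-in for being called by other programs, is 
an assumption made up for the sketch, not a feature of Will's actual design.]

import random

random.seed(0)

class Program:
    # A toy internal program: it earns credit when its (hypothetical)
    # service is useful to other programs, and spends credit bidding for memory.
    def __init__(self, name, usefulness):
        self.name = name
        self.usefulness = usefulness   # stand-in for how often others call it
        self.credit = 10.0

def mutate(parent, tag):
    # variation: copy an existing program and perturb it slightly
    u = min(1.0, max(0.0, parent.usefulness + random.gauss(0.0, 0.1)))
    return Program("%s.%d" % (parent.name, tag), u)

def round_of_life(population, memory_slots):
    # 1) new programs appear by trial-and-error variation
    population = population + [mutate(random.choice(population), i) for i in range(5)]
    # 2) programs earn credit in (noisy) proportion to how useful they were
    for p in population:
        p.credit += p.usefulness * random.random()
    # 3) each program bids 20% of its credit for a memory slot; the highest
    #    bidders keep their memory, everything else is selected against
    bids = {p: 0.2 * p.credit for p in population}
    survivors = sorted(population, key=lambda p: bids[p], reverse=True)[:memory_slots]
    for p in survivors:
        p.credit -= bids[p]
    return survivors

population = [Program("p%d" % i, random.random()) for i in range(30)]
for t in range(200):
    population = round_of_life(population, memory_slots=30)

print("mean usefulness after selection:",
      sum(p.usefulness for p in population) / len(population))

Nothing in the loop says what the system is "for"; usefulness to other programs 
is the only thing that keeps a program in memory, which is the sense in which 
the purpose would be self-defined rather than imposed from outside.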

As for embodiment, my question is, how do your programs receive input?  
Embodiment, as I define it, requires that inputs are merely reflections of 
state variables, and not even labeled in any way... i.e. we can't pre-define 
ontologies. The embodied entity starts from the most unstructured state 
possible and self-structures whatever inputs it receives.

That said, you may very well be doing that and be creating embodied programs in 
this way... if so, that's cool because I hadn't considered that possibility and 
I'll be interested to see how you fare.
 
  You are right that starting with bacteria is too
 indirect, if your goal is to achieve AGI in something like
 decades. It would certainly take an enormous amount of time
 and computation to get from there to human-level AI and
 beyond, perhaps a hundred years or more. But you're
 asking, aren't there shortcuts we can take that
 don't limit the field of potential intelligence in
 important ways.
 
 If you take this attitude you would have to ask yourself whether
 implementing your simulation on a classical computer is not cutting
 off the ability to create intelligence. Perhaps quantum effects are
 important in whether a system can produce intelligence. Protein
 folding probably wouldn't be the same.

Computation per se has little to do with the potential to create intelligent 
systems. Computation is only a framework that supports the simulation of 
virtual environments, in which intelligence may emerge. You could in principle 
build that computer out of tinker toys, or as an implementation of a Turing 
machine in Conway's Game of Life. The substrate doesn't matter, so long as it 
can compute.

As for quantum effects, it's possible there's something there with respect to 
protein folding, probable even. But I strongly distrust attempts to locate the 
non-deterministic behavior required of autonomous systems in the domain of 
quantum uncertainty. Every phenomenon above the scale of molecular dynamics is 
far too large to be impacted by anything but statistical behaviors. Individual 
quantal events lose all practical meaning at that level. Because intelligence, 
in my estimation, is at least partially dependent on global notions of 
emergence and complexity, quantum effects contribute absolutely nothing to my 
model. 

 You have to simplify at some point. I'm going to have my system have
 as many degrees of freedom to vary as a stored-program computer (or as
 near as I can make it), whilst having the internal programs
 self-organise and vary in ways that would make a normal stored-program
 computer become unstable.  Any simulations you do on a computer cannot
 have any more degrees of freedom.

I disagree, but would like to see your response to the above before diving into 
such esoterica.

  For example, starting with bacteria means we have to
 let multi-cellular organisms evolve on their own in a
 virtual geometry. That project alone is an enormous
 challenge. So let's skip it and go right to the
 multi-cellular design. The trouble is, our design of the
 multi-cellular organism is limiting. Alternative designs
 become impossible.
 
 What do you mean by design here? Do you mean an abstract multicellular
 cell model, or do you mean design as in what Tom Ray did with his first
 self-replicator in Tierra (you do know Tierra, right? I can use it as a
 common language), by creating an artificial genome? I can see problems
 with the first in restricting degrees of freedom, but with the second
 the degrees of freedom are still there to be acted on by the pressures
 of variation within the system. Even though Tom Ray built a certain
 type of replicator, they still managed to replicate in other ways; the
 one I can remember is stealing other 

RE: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread John G. Rose
Could you say that it takes a complex system to know a complex system? If an
AGI is going to try to, say, predict the weather, it doesn't have infinite CPU
cycles to simulate it, so it'll have to come up with something better. Sure, it
can build a probabilistic historical model, but that is kind of cheating. So
for it to emulate the weather, or to semi-understand it, I think there has
to be some complex-systems activity going on in its cognition. No?

I'm not sure that this is what Richard is talking about, but an AGI is going to
bump into complex systems all over the place. It will also encounter what
seems to be complex and later on may determine that it is not. And
perhaps, for the cognition engine to understand complexity differentials
between systems from a relationist standpoint, it would need some sort of
complexity of its own... not a comparator, but a sort of
harmonic leverage. I can't think of the right words.

Either way, this complexity thing is getting rather annoying, because on one
hand you think it can drastically enhance an AGI and is required, and on the
other hand you think it is unnecessary. I'm not talking about creativity or
thought emergence or the like, but about complexity as an integral component
of a computational cognition system.

John





Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Terren Suydam

Ben,

Be that as it may, spontaneous insight was just one example of an aspect of 
human intelligence that's not well understood. I'll give you another one that 
is more difficult to theorize about - I assume you've heard of the savant 
Daniel Tammet who is able to do amazing feats of computation and memory?  He 
appears to be doing these things in a way that does not involve algorithmic 
computation, as vetted by Vilayanur Ramachandran. Tammet is clearly one of the 
most intelligent people on the planet by many measures, and he does it in ways 
we don't understand.

This isn't an invitation to theorize about Tammet's cognitive machinery, as 
interesting as that exercise might be. It's to make the larger point that we 
may be so immersed in our own conceptualizations of intelligence - particularly 
because we live in our models and draw on our own experience and introspection 
to elaborate them - that we may have tunnel vision about the possibilities for 
better or different models. Or, we may take for granted huge swaths of what 
makes us so smart, because it's so familiar, or below the radar of our 
conscious awareness, that it doesn't even occur to us to reflect on it. A 
perfect example of that is how we acquire language (our first language). 
Introspection is not available to us there, so all we have is theory. And even 
when introspection *is* available to us, we may fall prey to the self-deception 
that is such an integral part of human psychology.

In short, claiming that your particular design is capable of AGI is quite a 
bold claim, because of all the possible pitfalls involved with theorizing about 
human-level intelligence. Given that the graveyard of AI's history is strewn 
with the bones of outrageous boasts and predictions, it's too tempting to see 
Novamente as just the latest in a long lineage. Why do we insist on shooting 
for the moon, when we still can't even explain the brain of a housefly?

One of the best reasons to go with an evolving-design approach is that we're 
not pretending we're going to get to human-level AI on the first shot. Instead, 
we gradually build up the complexity of our creations, building on prior 
successes and milestones. We see the evolution of intelligence as it becomes 
progressively more complicated. Instead of leaping off a cliff (like Icarus), 
we climb a mountain (like Sisyphus ;-). Progress is measurable and reflects the 
graduated spectrum of intelligence, a nuance that has never been fashionable.

Terren


--- On Mon, 6/30/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 I wrote a book about the emergence of spontaneous creativity
 from
 underlying complex dynamics.  It was published in 1997 with
 the title
 From Complexity to Creativity.  Some of the
 material is dated but I
 still believe the basic ideas make sense.  Some of the main
 ideas were
 reviewed in The Hidden Pattern (2006).  I
 don't have time to review
 the ideas right now (I'm in an airport during a flight
 change doing a
 quick email check) but suffice to say that I did put a lot
 of thought
 and analysis into how spontaneous creativity emerges from
 complex
 cognitive systems.  So have others.  It is not a total
 mystery, as
 mysterious as the experience can seem subjectively.
 
 -- Ben G
 
 On Mon, Jun 30, 2008 at 1:32 PM, Terren Suydam
 [EMAIL PROTECTED] wrote:
 
  Ben,
 
  I agree, an evolved design has limits too, but the key
 difference between a contrived design and one that is
 allowed to evolve is that the evolved critter's
 intelligence is grounded in the context of its own
 'experience', whereas the contrived one's
 intelligence is grounded in the experience of its creator,
 and subject to the limitations built into that conception
 of intelligence. For example, we really have no idea how we
 arrive at spontaneous insights (in the shower, for example).
 A chess master suddenly sees the game-winning move. We can
 be fairly certain that often, these insights are not the
 product of logical analysis. So if our conception of
 intelligence fails to explain these important aspects, our
 designs based on those conceptions will fail to exhibit
 them. An evolved intelligence, on the other hand, is not
 limited in this way, and has the potential to exhibit
 intelligence in ways we're not capable of
 comprehending.
 
  [btw, I'm using the scare quotes around the word
 experience as it applies to AGI because it's a
 controversial word and I hope to convey the basic idea
 about experience without getting into technical details
 about it. I can get into that, if anyone thinks it
 necessary, just didn't want to get bogged down.]
 
  Furthermore, there are deeper epistemological issues
 with the difference between design and self-organization
 that get into the notion of autonomy as well (i.e., designs
 lack autonomy to the degree they are specified), but
 I'll save that for when I feel like putting everyone to
 sleep :-]
 
  Terren
 
  PS. As an aside, I believe spontaneous insight is
 likely 

Savants and user-interfaces [was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Linas Vepstas
2008/6/30 Terren Suydam [EMAIL PROTECTED]:

 savant

I've always theorized that savants can do what they do because
they've been able to get direct access to, and train, a fairly
small number of neurons in their brain, to accomplish highly
specialized (and thus rather unusual) calculations.

I'm thinking specifically of Ramanujan, the Indian mathematician.
He appears to have had access to a multiply-add type circuit
in his brain, and could do symbolic long division and
multiplication as a result -- I base this on studying some of
the things he came up with -- after a while, it seems to be
clear  how he came up with it (even if the feat is clearly not
reproducible).

In a sense, similar feats are possible by using a modern
computer with a good algebra system.  Simon Plouffe seems
to be a modern-day example of this: he noodles around with
his systems, and finds various interesting relationships that
would otherwise be obscure/unknown.  He does this without
any particularly deep or expansive training in math (whence
some of his friction with real academics).  If Simon could
get a computer-algebra chip implanted in his brain (i.e.
with a very, very user-friendly user interface), so that he
could work the algebra system just by thinking about it,
I bet his output would resemble that of Ramanujan a whole
lot more than it already does -- as it were, he's hobbled by
a crappy user interface.

Thus, let me theorize: by studying savants with MRI and
what-not, we may find a way of getting a much better
man-machine interface.  That is, currently, electrodes
are always implanted in motor neurons (or visual cortex, etc)
i.e. in places of the brain with very low levels of abstraction
from the real word. It would be interesting to move up the
level of abstraction, and I think that studying how savants
access the magic circuits in thier brain will open up a
method for high-level interfaces to external computing
machinery.

--linas




Re: Savants and user-interfaces [was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Vladimir Nesov
Interesting: is it possible to train yourself to run a specially
designed nontrivial inference circuit based on low-base
transformations (e.g. binary)? You start by assigning unique symbols
to its nodes, train yourself to stably perform the associations
implementing its junctions, and then assemble it all together by
training yourself to present a problem as a temporal sequence
(a request), so that it can be handled by the overall circuit, and to
read out the answer and convert it to a sequence of e.g.
base-10 digits or base-100 words keying pairs of digits (as in
mnemonics). Has anyone heard of this being attempted? At least the initial
steps look straightforward enough; what kind of obstacles could this kind of
experiment run into?

On Tue, Jul 1, 2008 at 7:43 AM, Linas Vepstas [EMAIL PROTECTED] wrote:
 2008/6/30 Terren Suydam [EMAIL PROTECTED]:

 savant

 I've always theorized that savants can do what they do because
 they've been able to get direct access to, and train, a fairly
 small number of neurons in their brain, to accomplish highly
 specialized (and thus rather unusual) calculations.

 I'm thinking specifically of Ramanujan, the Hindi mathematician.
 He appears to have had access to a multiply-add type circuit
 in his brain, and could do symbolic long division and
 multiplication as a result -- I base this on studying some of
 the things he came up with -- after a while, it seems to be
 clear  how he came up with it (even if the feat is clearly not
 reproducible).

 In a sense, similar feats are possible by using a modern
 computer with a good algebra system.  Simon Plouffe seems
 to be a modern-day example of this: he noodles around with
 his systems, and finds various interesting relationships that
 would otherwise be obscure/unknown.  He does this without
 any particularly deep or expansive training in math (whence
 some of his friction with real academics).  If Simon could
 get a computer-algebra chip implanted in his brain, (i.e.
 with a very, very user-freindly user-interface) so that he
 could work the algebra system just by thinking about it,
 I bet his output would resemble that of Ramanujan a whole
 lot more than it already does -- as it were, he's hobbled by
 a crappy user interface.

 Thus, let me theorize: by studying savants with MRI and
 what-not, we may find a way of getting a much better
 man-machine interface.  That is, currently, electrodes
 are always implanted in motor neurons (or visual cortex, etc)
 i.e. in places of the brain with very low levels of abstraction
 from the real word. It would be interesting to move up the
 level of abstraction, and I think that studying how savants
 access the magic circuits in thier brain will open up a
 method for high-level interfaces to external computing
 machinery.

 --linas






-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: Savants and user-interfaces [was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Linas Vepstas
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
 Interesting: is it possible to train yourself to run a specially
 designed nontrivial inference circuit based on low-base
 transformations (e.g. binary)?

Why binary?

I once skimmed a biography of Ramanujan; he started
multiplying numbers in his head as a pre-teen. I suspect
it was grindingly boring, but given the surroundings, it might
have been the most fun thing he could think of.  If you're
autistic, then focusing obsessively on some task might
be a great way to pass the time, but if you're more or less
normal, I doubt you'll get very far with obsessive-compulsive
self-training -- and that's the problem, isn't it?

--linas




Re: Savants and user-interfaces [was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Vladimir Nesov
On Tue, Jul 1, 2008 at 8:31 AM, Linas Vepstas [EMAIL PROTECTED] wrote:

 Why binary?

 I once skimmed a biography of Ramanujan, he started
 multiplying numbers in his head as a pre-teen. I suspect
 it was grindingly boring, but given the surroundings, might
 have been the most fun thing he could think of.   If you're
 autistic, then focusing obsessively on some task might
 be a great way to pass the time, but if you're more or less
 normal, I doubt you'll get very far with obsessive-compulsive
 self-training -- and that's the problem, isn't it?


If the signals have properties of their own, I'm afraid they will
start interfering with each other, which won't allow the circuit to
execute in real time. Binary signals, on the other hand, can be
encoded by the activation of nodes of the circuit, active/inactive. If
you have an AND gate that leads from symbols S1 and S2 to S3, you
learn to recall S3 only when you see both S1 and S2 (you'll probably
still need complementary symbols to encode the negative, so you'll also
need -S1, -S2 and -S3, so that -S3 is activated (recalled) when you
see, say, S1 and -S2 -- the whole truth table). You'll also need
separate symbols for each node in each gate. Probably randomly generated
hieroglyph-like symbols are a good way to create new categories in the
mind for new nodes in the circuit, and also to train yourself to recall
the right answers at the gates, by drawing them together.
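
[A minimal Python sketch of that memorized-gate idea, with the S1/S2/S3 and
-S1/-S2/-S3 symbols written out as strings; the particular two-gate circuit and
its wiring are invented purely for illustration.]

def make_and_gate(a, b, out):
    # The full truth table for one AND junction, written as the associations
    # one would drill into memory: pair of input symbols -> output symbol.
    return {
        (a, b): out,                 # S_a and S_b        -> S_out
        (a, "-" + b): "-" + out,     # S_a and (not S_b)  -> not S_out
        ("-" + a, b): "-" + out,
        ("-" + a, "-" + b): "-" + out,
    }

def recall(gate, x, y):
    # "recalling" the trained association for the two active symbols
    return gate[(x, y)]

# a two-gate circuit computing OUT = (A AND B) AND C
g1 = make_and_gate("A", "B", "N1")
g2 = make_and_gate("N1", "C", "OUT")

def run(bits):
    # encode the binary input as active symbols, then run the circuit
    a = "A" if bits[0] else "-A"
    b = "B" if bits[1] else "-B"
    c = "C" if bits[2] else "-C"
    n1 = recall(g1, a, b)
    return recall(g2, n1, c)

for bits in ((1, 1, 1), (1, 1, 0), (0, 1, 1)):
    print(bits, "->", run(bits))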

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




RE: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Ed Porter
 be modified to perform a lot of tasks for which many
people in AI still think there is no method for solving, such as complex
context-appropriate inferencing.  Hinton's paper shows that neural net
learning is suddenly much more powerful than it has been before.  And
Hecht-Nielsen's paper shows another powerful form of neural net-like
learning and computing that scales well.

The convergence of such much more sophisticated software approaches and the
much more powerful hardware necessary to actually build minds that use them
is much more than just a belief.  

Today, for $33K you can buy a system I talked about in my email which
started this thread.  It has 126Gbytes of RAM and roughly 160 million random
RAM accesses/second.  This is enough power to start building a small toy AGI
mind that could show limited generalized learning, perception, inferencing,
planning, behaviors, attention focusing, and behavior selection, i.e.,
something like Ben's pet brains.  The  $850K system would allow
substantially more sophisticated demonstrations of artificial minds to be
created. 

This combination of much more sophisticated understandings of how to build
AGIs, combined with much more powerful hardware, is something new.  And,
much, much more powerful hardware should be arriving in about 6 years, when
multi-level chips with mesh-networked, massively multi-cored processors, and
8 or more layers of memory connected to the processors with many thousands
of through-silicon vias, and with hundreds of high-speed channels to external
memory and other such multi-level chips, will hopefully become routinely
available.

Richard, a lot has changed since the '70s, '80s, '90s, and early '00s --- and
if you don't see it --- that's your problem.

Ed Porter




-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Saturday, June 28, 2008 4:14 PM
To: agi@v2.listbox.com
Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE
IN AGI

Ed Porter wrote:
 I do not claim the software architecture for AGI has been totally solved.
 But I believe that enough good AGI approaches exist (and I think Novamente
 is one) that when powerful hardware available to more people we will be
able
 to relatively quickly get systems up and running that demonstrate the
parts
 of the problems we have solved.  And that will provide valuable insights
and
 test beds for solving the parts of the problem that we have not yet
solved.

You are not getting my point.  What you just said was EXACTLY what was 
said in 1970, 1971, 1972, 1973 ..2003, 2004, 2005, 2006, 2007 ..

And every time it was said, the same justification for the claim was 
given:  I just have this belief that it will work.

Plus ca change, plus c'est la meme fubar.





 With regard to your statement the problem is understanding HOW TO DO IT
 ---
 WE DO UNDERSTAND HOW TO DO IT --- NOT ALL OF IT --- AND NOT HOW TO MAKE IT
 ALL WORK TOGETHER WELL AUTOMATICALLY --- BUT --- GIVEN THE TYPE OF
HARDWARE
 EXPECTED TO COST LESS THAN $3M IN 6 YEARS --- WE KNOW HOW TO BUILD MUCH OF
 IT --- ENOUGH THAT WE COULD PROVIDE EXTREMELY VALUABLE COMPUTERS WITH OUR
 CURRENT UNDERSTANDINGS.

You do *not* understand how to do it.  But I have to say that statements 
like your paragraph above are actually very good for my health, because 
their humor content is right up there in the top ten, along with Eddie 
Izzard's Death Star Canteen sketch and Stephen Colbert at the 2006 White 
House Correspondents' Association Dinner.

So long as the general response to the complex systems problem is not 
"This could be a serious issue, let's put our heads together to 
investigate it," but "My gut feeling is that this is just not going to 
be a problem," or "Quit rocking the boat!", you can bet that nobody 
really wants to ask any questions about whether the approaches are 
correct, they just want to be left alone to get on with their 
approaches.  History, I think, will have some interesting things to say 
about all this.

Good luck anyway.



Richard Loosemore







Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Mike Tintner
Ed: Another reason for optimism is Hinton's new work described in papers such 
as

"Modeling image patches with a directed hierarchy of Markov random fields"
by Simon Osindero and Geoffrey Hinton, and the Google Tech Talk at
http://www.youtube.com/watch?v=AyzOUbkUf3M.  Hinton has shown how to
automatically learn hierarchical neural nets that have 2000 hidden nodes in
one layer, 500 in the next, and 1000 in the top layer.

Comment from a pal on Hinton, who was similarly recommended on Slashdot (I'm 
ignorant here):


I also took a closer look at the Hinton stuff that the slashdot poster made 
reference to. To call this DBN stuff highly advanced over Hawkins is 
ridiculous. I looked at it already a couple of months ago. It took Hinton 
***17-years*** - by his own admission - to figure out how to build a 
connectionist net that could reliably identify variations of handwritten 
numbers 1-9. And it's gonna take him about a MILLION more years to do 
general AI with this approach. Gakk.
To me, the biggest problem with connectionist networks is all they ever 
solve are toy problems - and it's 20 years after connectionism became 
popular again.







RE: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Ed Porter
Mike,

None of the Hawkins papers I have read have given any results as impressive
as the Hinton papers I cited.  If you know of some that have, please send me
references to the most impressive among them.

Hinton says he believes his system could scale efficiently to much larger
nets.  If that is true, a system having multiples of his modules would
appear possibly able to learn how to handle a good chunk of sensory
perception. 

Like Ben, I am not wed to a totally connectionist approach, but rather one
that has attributes of both connectionist and symbolic approaches.  I
personally like to think in terms of systems where I have some idea what
things represent, so I can think in terms of what I want them to do.

But still I am impressed with what Hinton has shown, particularly if it can
be made to scale well to much larger systems.  
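
[For readers who want to see the shape of the technique being discussed, here
is a deliberately simplified Python sketch of greedy layer-wise training of
stacked RBMs with one step of contrastive divergence (CD-1), using the
2000/500/1000 layer sizes quoted above. The training data is random noise, the
hyperparameters are arbitrary, and nothing here reproduces Hinton and
Osindero's actual models or training schedule; it only illustrates how each
trained layer's hidden activities become the next layer's input.]

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=5, lr=0.01):
    # One restricted Boltzmann machine trained with CD-1 on binary data.
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)
    for _ in range(epochs):
        p_h = sigmoid(data @ W + b_h)                    # positive phase
        h = (rng.random(p_h.shape) < p_h).astype(float)  # sample hidden units
        p_v = sigmoid(h @ W.T + b_v)                     # one Gibbs step back down
        p_h2 = sigmoid(p_v @ W + b_h)
        W += lr * (data.T @ p_h - p_v.T @ p_h2) / len(data)
        b_v += lr * (data - p_v).mean(axis=0)
        b_h += lr * (p_h - p_h2).mean(axis=0)
    return W, b_h

def propagate(data, W, b_h):
    return sigmoid(data @ W + b_h)

# Greedy layer-wise stacking with the layer sizes mentioned above.
layer_sizes = [2000, 500, 1000]
x = (rng.random((100, 784)) < 0.1).astype(float)   # stand-in for image patches
stack = []
for n_hidden in layer_sizes:
    W, b_h = train_rbm(x, n_hidden)
    stack.append((W, b_h))
    x = propagate(x, W, b_h)       # hidden activities feed the next layer

print([w.shape for w, _ in stack])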

Ed Porter

-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Sunday, June 29, 2008 2:48 PM
To: agi@v2.listbox.com
Cc: [EMAIL PROTECTED]
Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE
IN AGI

Ed:Another reason for optimism is Hintons new work described in papers such 
as
Modeling image patches with a directed hierarchy of Markov random fields
by Simon Osindero and Geoffrey Hinton and the Google Tech Talk at
http://www.youtube.com/watch?v=AyzOUbkUf3M.  Hinton has shown how to
automatically learn hierarchical neural nets that have 2000 hidden nodes in
one layer, 500 in the next, and 1000 in the top layer

Comment from a pal on Hinton who was similarly recommended on slashdot:(I'm 
ignorant here):

I also took a closer look at the Hinton stuff that the slashdot poster made

reference to. To call this DBN stuff highly advanced over Hawkins is 
ridiculous. I looked at it already a couple of months ago. It took Hinton 
***17-years*** - by his own admission - to figure out how to build a 
connectionist net that could reliably identify variations of handwritten 
numbers 1-9. And it's gonna take him about a MILLION more years to do 
general AI with this approach. Gakk.
To me, the biggest problem with connectionist networks is all they ever 
solve are toy problems - and it's 20 years after connectionism become 
popular again.









RE: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Derek Zahn
I agree that the hardware advances are inspirational, and it seems possible 
that just having huge hardware around could change the way people think and 
encourage new ideas.
 
But what I'm really looking forward to is somebody producing a very impressive 
general intelligence result that was just really annoying because it took 10 
days of computing instead of an hour.
 
Seems to me that all the known AGI researchers are in theory, design, or system 
building phases; I don't think any of them are CPU-bound at present -- and no 
fair pointing to Goedel Machines or AIXI either, which will ALWAYS be 
resource-starved :)




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,


So long as the general response to the complex systems problem is not "This
could be a serious issue, let's put our heads together to investigate it,"
but "My gut feeling is that this is just not going to be a problem," or
"Quit rocking the boat!", you can bet that nobody really wants to ask any
questions about whether the approaches are correct, they just want to be
left alone to get on with their approaches.


Both Ed Porter and myself have given serious thought to the complex systems
problem as you call it, and have discussed it with you at length.  I
also read the
only formal paper you sent me dealing with it (albeit somewhat
indirectly) and also
your various online discourses on the topic.

Ed and I don't agree with you on the topic, but not because of lack of thinking
or attention.

Your argument FOR the existence of a complex systems problem with Novamente
or OpenCog, is not any more rigorous than our argument AGAINST it.


Oh, mere rhetoric.

You have never given an argument against it.  If you believe this is 
not correct, perhaps you could jog my memory by giving a brief summary 
of what you think is the argument against it?


In all of my discussions with you on the subject, you have introduced 
many red herrings, and we have discussed many topics that turned out to 
be just misunderstandings, but you have never addressed the actual core 
argument itself.


In fact, IIRC, on the one occasion that I persisted in trying to bring 
the discussion back to the core issue, you finally made only one 
argument against my core claim:  your argument against it was "I just 
don't think it is going to be a problem."


The argument itself is extremely rigorous:  on all the occasions on 
which someone has disputed the rigorousness of the argument, they have 
either addressed some other issue entirely or they have just waved their 
hands without showing any sign of understanding the argument, and then 
said "... it's not rigorous!".  It is almost comical to go back over the 
various responses to the argument:  not only do people go flying off in 
all sorts of bizarre directions, but they also get quite strenuous about 
it at the same time.


Not understanding an argument is not the same as the argument not being 
rigorous.




Richard Loosemore




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Bryan Bishop
On Friday 27 June 2008, Richard Loosemore wrote:
 Pardon my fury, but the problem is understanding HOW TO DO IT, and
 HOW TO BUILD THE TOOLS TO DO IT, not having expensive hardware.  So
 long as some people on this list repeat this mistake, this list will
 degenerate even further into obsolescence.

I am working on this issue, but it will not look like ai from your 
perspective. It is, in a sense, ai. Here's the tool approach:

http://heybryan.org/buildingbrains.html
http://heybryan.org/exp.html

Sort of.

- Bryan

http://heybryan.org/




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Ben Goertzel
 The argument itself is extremely rigorous:  on all the occasions on which
 someone has disputed the rigorousness of the argument, they have either
 addressed some other issue entirely or they have just waved their hands
 without showing any sign of understanding the argument, and then said ...
 it's not rigorous!.  It is almost comical to go back over the various
 responses to the argument:  not only do people go flying off in all sorts of
 bizarre directions, but they also get quite strenuous about it at the same
 time.

Richard, if your argument is so rigorous, why don't you do this: present
a brief, mathematical formalization of your argument, defining all terms
precisely and carrying out all inference steps exactly, at the level
of a textbook
mathematical proof.

I'll be on vacation for the next 2 weeks w/limited and infrequent email access,
so I'll look out for this when I return.

If you present your argument this way, then you can rest assured I will
understand it, as I'm capable of understanding math; then, our arguments can
be more neatly directed ... toward the appropriateness of your formal
definitions and assumptions...

-- Ben G




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Richard Loosemore
 virtually impossible to train a neural net with so many hidden
nodes, but Hinton's new method allows rapid, largely automatic training of
such large networks, enabling, in the example shown, surprisingly good
handwritten numeral recognition.


Don't have access to that paper right now, so can you tell me:  this 
goes beyond mere supervised learning, right?  And it solves the problem 
of representing multiple tokens?  And also the problem of encoding 
structured knowledge?  It doesn't represent structure with hard-coded 
templates, yes?  The technique scales well to full-scale thinking 
systems in which the domain is not restricted to, say, handwriting, but 
includes everything the system could ever want to recognize, yes?  Oh, 
and in case I forget, the images are not preselected, but are naturally 
occurring in context, so the system can recognize a letter A in a scene 
in which two people lean against one another and hold something 
horizontal between them at waist height?


I assume all the answers to the above were Yes!, so it sounds like a 
great leap forward:  I'll read the full paper tomorrow.


Pity that Hinton chose a title that implied all the answers were 'no'. 
Bit of an oversight on his part, but never mind.





Yet another example of the power of automatic learning is shown by the
impressive success of Hecht-Nielsen's confabulation system in generating a
second sentence that reasonably follows from the first, as if it had been
written by a human intelligence, without any attempt to teach the rules of
grammar or any explicit semantic knowledge.  The system learns from text
corpora.

You may say this is narrow AI.  But it all has general applicability.  For
example, the type of hierarchical memory with max-pooling shown in Serre's
paper shows is an extremely powerful paradigm that addresses some of the
most difficult problems in AI, including robust non-literal matching.  Such
hierarchical memory can be modified to perform a lot of tasks for which many
people in AI still think there is no method for solving, such as complex
context-appropriate inferencing.  Hinton's paper shows that neural net
learning is suddenly much more powerful than it has been before.  And
Hecht-Neilsen's paper shows another powerful form or neural net-like
learning and computing that scales well

The convergence of such much more sophisticated software approaches and the
much more powerful hardware necessary to actually build minds that use them
is much more than just a belief.  


Today, for $33K you can buy a system I talked about in my email which
started this thread.  It has 126Gbytes of RAM and roughly 160Million random
RAM access/second.  This is enough power to start building small toy AGI
mind that could show limited generalized learning, perception, inferencing,
planning, behaviors, attention focusing, and behavior selection, i.e.,
something like Ben's pet brains.  The  $850K system would allow
substantially more sophisticated demonstrations of artificial minds to be
created. 


This combination of much more sophisticated understandings for how to build
AGI's, combined with much more powerful hardware is something new.  And,
much, much more powerful hardware should be arriving in about 6 years when
multi-level chips with mesh-networked, massively-mutli-cored processors, and
8 or more layers of memory connected to the processors with many thousands
of though silicon vias, and with hundreds of high speed channels to external
memory and other such multi-level chips will hopefully become routinely
available.

Richard, a lot has changed since the '70s, '80s, '90s, and early '00s --- and
if you don't see it --- that's your problem.


Oh dear, Ed.  I just shouldn't get into discussions with you.  It's fun 
sometimes, but.



Back to work.




Richard Loosemore





Ed Porter




-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Saturday, June 28, 2008 4:14 PM

To: agi@v2.listbox.com
Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE
IN AGI

Ed Porter wrote:

I do not claim the software architecture for AGI has been totally solved.
But I believe that enough good AGI approaches exist (and I think Novamente
is one) that when powerful hardware available to more people we will be

able

to relatively quickly get systems up and running that demonstrate the

parts

of the problems we have solved.  And that will provide valuable insights

and

test beds for solving the parts of the problem that we have not yet

solved.

You are not getting my point.  What you just said was EXACTLY what was 
said in 1970, 1971, 1972, 1973 ..2003, 2004, 2005, 2006, 2007 ..


And every time it was said, the same justification for the claim was 
given:  I just have this belief that it will work.


Plus ca change, plus c'est la meme fubar.






With regard to your statement the problem is understanding HOW TO DO IT
---
WE DO UNDERSTAND HOW TO DO IT --- NOT ALL OF IT --- AND NOT HOW TO MAKE IT
ALL WORK

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Richard Loosemore

Ben Goertzel wrote:

The argument itself is extremely rigorous:  on all the occasions on which
someone has disputed the rigorousness of the argument, they have either
addressed some other issue entirely or they have just waved their hands
without showing any sign of understanding the argument, and then said ...
it's not rigorous!.  It is almost comical to go back over the various
responses to the argument:  not only do people go flying off in all sorts of
bizarre directions, but they also get quite strenuous about it at the same
time.


Richard, if your argument is so rigorous, why don't you do this: present
a brief, mathematical formalization of your argument, defining all terms
precisely and carrying out all inference steps exactly, at the level
of a textbook
mathematical proof.

I'll be on vacation for the next 2 weeks w/limited and infrequent email access,
so I'll look out for this when I return.

If you present your argument this way, then you can rest assured I will
understand it, as I'm capable to understand math; then, our arguments can
be more neatly directed ... toward the appropriateness of your formal
definitions and assumptions...


Mathematics is about formal systems.  The argument is not about formal 
systems, it is about real-world intelligent systems and their 
limitations, and about the very *question* of whether those intelligent 
systems are formal systems.  It is about whether scientific methodology 
(which is just the exercise of a particular subset of this thing we call 
'intelligence') is itself a formal system.  To formulate the argument in 
mathematical terms would, therefore, be to prejudge the answer to the 
question we are addressing - nothing could be more silly than to insist on 
a mathematical formulation of it.


Asking for a mathematical formulation of an argument that has nothing to 
do with formal systems is, therefore, a sign that you have no 
understanding of what the argument is actually about.


Now, if it were anyone else I would say that you really did not 
understand, and were just, well, ignorant.  But you actually do 
understand that point:  when you made the above request I think your 
goal was to engage in a piece of pure sophistry.  You cynically ask for 
something that you know has no relevance, and cannot be supplied, as an 
attempt at a put-down.  Nice try, Ben.



Or, then again ... perhaps I am wrong:  maybe you really *cannot* 
understand anything except math?  Perhaps you have no idea what the 
actual argument is, and that has been the problem all along?  I notice 
that you avoided answering my request that you summarize your argument 
against the complex systems problem ... perhaps you are just confused 
about what the argument actually is, and have been confused right from 
the beginning?







Richard Loosemore






Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Ben Goertzel
Richard,

I think that it would be possible to formalize your complex systems argument
mathematically, but I don't have time to do so right now.

 Or, then again . perhaps I am wrong:  maybe you really *cannot*
 understand anything except math?

It's not the case that I can only understand math -- however, I have a
lot of respect
for the power of math to clarify disagreements.  Without math, arguments often
proceed in a confused way because different people are defining terms
differently, and don't realize it.

But, I agree math is not the only kind of rigor.  I would be happy
with a very careful,
systematic exposition of your argument along the lines of Spinoza or the early
Wittgenstein.  Their arguments were not mathematical, but were very rigorous
and precisely drawn -- not slippery.

 Perhaps you have no idea what the actual
 argument is, and that has been the problem all along?  I notice that you
 avoided answering my request that you summarize your argument against the
 complex systems problem ... perhaps you are just confused about what the
 argument actually is, and have been confused right from the beginning?

In a nutshell, it seems you are arguing that general intelligence is
fundamentally founded
on emergent properties of complex systems, and that it's not possible for us to
figure out analytically how these emergent properties emerge from the
lower-level structures
and dynamics of the complex systems involved.   Evolution, you
suggest, figured out
some complex systems that give rise to the appropriate emergent
properties to produce
general intelligence.  But evolution did not do this figuring-out in
an analytical way, rather
via its own special sort of directed trial and error.   You suggest
that to create a generally
intelligent system, we should create a software framework that makes
it very easy to
experiment with  different sorts of complex systems, so that we can
then figure out
(via some combination of experiment, analysis, intuition, theory,
etc.) how to create a
complex system that gives rise to the emergent properties associated
with general
intelligence.

I'm sure the above is not exactly how you'd phrase your argument --
and it doesn't
capture all the nuances -- but I was trying to give a compact and approximate
formulation.   If you'd like to give an alternative, equally compact
formulation, that
would be great.

I think the flaw of your argument lies in your definition of
complexity, and that this
would be revealed if you formalized your argument more fully.  I think
you define
complexity as a kind of fundamental irreducibility that the human
brain does not possess,
and that engineered AGI systems need not possess.  I think that real
systems display
complexity which makes it **computationally difficult** to explain
their emergent properties
in terms of their lower-level structures and dynamics, but not as
fundamentally intractable
as you presume.

But because you don't formalize your notion of complexity adequately,
it's not possible
to engage you in rational argumentation regarding the deep flaw at the
center of your
argument.

However, I cannot prove rigorously that the brain is NOT complex in
the overly strong
sense you allude to ... and nor can I prove rigorously that a
design like Novamente Cognition
Engine or OpenCog Prime will give rise to the emergent properties
associated with
general intelligence.  So, in this sense, I don't have a rigorous
refutation of your argument,
and nor would I if you rigorously formalized your argument.

However, I think a rigorous formulation of your argument would make it
apparent to
nearly everyone reading it that your definition of complexity is
unreasonably strong.

-- Ben G




RE: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-28 Thread Ed Porter
 on massively parallel systems
efficiently.

If anything, the problem right now is the confusion of possible approaches
to many of the problems.  More cheap hardware will allow more of them to be
tested on systems of the necessary complexity, and the better ones to become
more widely accepted.

RICHARD LOOSEMORE
Frankly, looking at recent posts, I think this list is already dead.

ED PORTER
Richard, if the list is so dead of late, how come you have posted to it so
often recently?  





-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 27, 2008 4:30 PM
To: agi@v2.listbox.com
Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE
IN AGI


At a quick glance I would say you could do it cheaper by building it 
yourself rather than buying Dell servers (cf MicroWulf project that was 
discussed before: http://www.clustermonkey.net//content/view/211/33/).

Secondly:  if what you need to get done is spreading activation (which 
implies massive parallelism) you would probably be better off with a 
Celoxica system than COTS servers:  celoxica.com.  Hugo de Garis has a 
good deal of experience with using this hardware:  it is FPGA based, so 
the potential parallelism is huge.

Third:  the problem, in any case, is not the hardware.  AI researchers 
have been saying "if only we had better hardware, we could really get these 
algorithms to sing, and THEN we will have a real AI!" since the f***ing 
1970s, at least.  There is nothing on this earth more stupid than 
watching people repeat the same mistakes over and over again, for 
decades in a row.

Pardon my fury, but the problem is understanding HOW TO DO IT, and HOW 
TO BUILD THE TOOLS TO DO IT, not having expensive hardware.  So long as 
some people on this list repeat this mistake, this list will degenerate 
even further into obsolescence.

Frankly, looking at recent posts, I think this list is already dead.




Richard Loosemore






Ed Porter wrote:
 WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI
 
 On Wednesday, June 25, US East Coast time, I had an interesting phone
 conversation with Dave Hart, where we discussed just how much hardware
could
 you get for the current buck, for the amounts of money AGI research teams
 using OpenCog (THE LUCKY ONES) might have available to them.
 
 After our talk I checked out the cost of current servers at Dell (the
 easiest place I knew of to check out prices). I found that hardware, and
 particularly memory, was somewhat cheaper than Dave and I had thought.  But
 it is still sufficiently expensive that moderately funded projects are
 going to be greatly limited by the processor-memory and inter-processor
 bandwidth as to how much spreading activation and inferencing they will be
 capable of doing.
 
 A RACK MOUNTABLE SERVER WITH 4 QUAD-CORE XEONS, WITH EACH PROCESSOR HAVING
 8MB OF CACHE, AND THE WHOLE SERVER HAVING 128GBYTES OF RAM AND FOUR
300GBYTE
 HARD DRIVES WAS UNDER $30K.  The memory stayed roughly constant in price
per
 GByte going from 32 to 64 to 128 GBytes.  Of course you would probably
have
 to pay several extra grand for software and warranties.  SO LET US SAY
THE
 PRICE IS $33K PER SERVER.
 
 A 24 port 20Gbit/sec infiniband switch with cables and one 20Gbit/sec
 adapter card for each of 24 servers would be about $52K
 
 SO A TOTAL SYSTEM WITH 24 SERVERS, 96 PROCESSORS, 384 CORES, 768MBYTE OF
L2
 CACHE, 3 TBYTES OF RAM, AND 28.8TBYTES OF DISK, AND THE 24 PORT 20GBIT/SEC
 SWITCH WOULD BE ROUGHLY $850 GRAND.  
 
 That doesn't include air conditioning.  I am guessing each server probably
 draws about 400 watts, so 24 of them would be about 9600 watts--- about
the
 amount of heat of ten hair dryers running in one room, which obviously
would
 require some cooling, but I would not think would be that expensive to
 handle.
 
 With regard to performance, such systems are not even close to human brain
 level but they should allow some interesting proofs of concepts
 
 Performance
 ---
 AI spreading activation often involves a fair amount of non-locality of
 memory.  Unfortunately there is a real penalty for accessing RAM randomly.
 Without interleaving, one article I read recently implied about 50ns was a
 short latency for a memory access.  So we will assume 20M random RAM accesses
 (randomRamOpps) per second per channel, and that an average activation will
 take two, a read and a write, so roughly 10M activations/sec per memory
 channel.  
 
 Matt Mahoney has pointed out that spreading activation can be modeled by
 matrix methods that let you access RAM with much higher sequential memory
 accessing rates.  He claimed he could process about a gigabyte of matrix
 data a second.  If one assumes each element in the matrix is 8 bytes, that
 would be the equivalent of doing 125M activations a second, which is
 roughly
 12.5 times faster (if just 2 bytes, it would be 50 times faster, or 500M
 activations/sec).
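
[The back-of-the-envelope numbers above can be reproduced directly in a few
lines of Python; the 50ns latency and 1 GB/sec figures are simply the
assumptions quoted in the post, not measurements.]

ram_latency_s = 50e-9                            # ~50 ns per random RAM access
random_accesses_per_s = 1 / ram_latency_s        # 20M random accesses/sec per channel
activations_random = random_accesses_per_s / 2   # a read plus a write per activation

stream_bytes_per_s = 1e9                         # ~1 GB/sec of sequential matrix data
activations_8byte = stream_bytes_per_s / 8       # 8-byte elements
activations_2byte = stream_bytes_per_s / 2       # 2-byte elements

print("random-access activations/sec: %.0fM" % (activations_random / 1e6))
print("matrix (8-byte): %.0fM/sec, speedup %.1fx"
      % (activations_8byte / 1e6, activations_8byte / activations_random))
print("matrix (2-byte): %.0fM/sec, speedup %.0fx"
      % (activations_2byte / 1e6, activations_2byte / activations_random))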
 
 If one

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-28 Thread wannabe
There was one little line in this post that struck me, and I wanted to  
comment:



Quoting Ed Porter [EMAIL PROTECTED]:


With regard to performance, such systems are not even close to human brain
level, but they should allow some interesting proofs of concept.


Mentioning some huge system.  My thought was, wow, that just sounds
sad.  But I guess it depends on what you mean by performance.  One
thing in which computers now way exceed brain performance is the
reliability of their operations.  Sure, it's difficult to say what a
basic brain operation is (is a synapse reaction equivalent to a
multiply-accumulate?), but one thing that can be said about them is
that they aren't very reliable or precise.  They have a sort of a
range of operation, where they kind of will act in a certain way given
an input.  It's got to be really hard to get valuable behavior out of
this kind of a system, so the brain uses massive redundancy.  Now, it
might well be that in addition to just the reliability, this kind of a
system gets other value from it, like a nice probabilistic operation
that has additional value in itself.  Maybe the inherent
unpredictability is part of what we mean by intelligence.  Personally
I suspect that to be true.  But this all stands in great contrast to
how computers naturally work--obeying information processing
instructions with absolute precision (possibly error-free, depending
on how you look at it).
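
To make the redundancy point concrete, here is a tiny toy simulation (purely illustrative, and not a claim about how real neurons behave): each noisy "unit" gives the right binary answer only with some probability, but a majority vote over many redundant units is far more reliable than any single one.

# Toy illustration of reliability through redundancy: each noisy unit is
# correct with probability p; a majority vote over n copies is much better.
import random

def majority_vote(p_correct: float, n_units: int, trials: int = 20_000) -> float:
    """Fraction of trials in which the majority of noisy units is correct."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_correct for _ in range(n_units))
        if correct_votes > n_units / 2:
            wins += 1
    return wins / trials

for n in (1, 9, 99):
    print(f"p=0.7, {n:3d} units -> majority correct ~{majority_vote(0.7, n):.3f}")
# With p=0.7 per unit, 99 redundant units give the right answer essentially always.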


There is a sort of mismatch between good human brain behavior and good  
computer behavior.  It seems like the AGI project is about making a  
computer act like a good brain.  We can focus on how to get a computer  
to act in ways that are ideal for a brain to act intelligently.  And  
by this I mean something like having some basic operations and systems  
that can be used in all situations.  But I think it might also be good  
to try to think of it in terms of looking for the best ways for a  
computer to be intelligent.  I'm a patchwork AGI kind of guy, and  
while surely there must be some general mechanism, it seems to make  
sense that there could also be many very finely crafted modules.   
Unfortunately, if we are restricting modules to human written modules,  
then that's the basic problem.  A basic function of an AGI should be  
that it can write programs for itself to handle tasks.  Or I guess for  
other systems.  But if it can do that, then these programs don't need  
such huge amounts of computer power.

andi



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-28 Thread Richard Loosemore

Ed Porter wrote:

I do not claim the software architecture for AGI has been totally solved.
But I believe that enough good AGI approaches exist (and I think Novamente
is one) that when powerful hardware is available to more people we will be able
to relatively quickly get systems up and running that demonstrate the parts
of the problems we have solved.  And that will provide valuable insights and
test beds for solving the parts of the problem that we have not yet solved.


You are not getting my point.  What you just said was EXACTLY what was 
said in 1970, 1971, 1972, 1973 ..2003, 2004, 2005, 2006, 2007 ..


And every time it was said, the same justification for the claim was 
given:  "I just have this belief that it will work."


Plus ca change, plus c'est la meme fubar.






With regard to your statement "the problem is understanding HOW TO DO IT"
---
WE DO UNDERSTAND HOW TO DO IT --- NOT ALL OF IT --- AND NOT HOW TO MAKE IT
ALL WORK TOGETHER WELL AUTOMATICALLY --- BUT --- GIVEN THE TYPE OF HARDWARE
EXPECTED TO COST LESS THAN $3M IN 6 YEARS --- WE KNOW HOW TO BUILD MUCH OF
IT --- ENOUGH THAT WE COULD PROVIDE EXTREMELY VALUABLE COMPUTERS WITH OUR
CURRENT UNDERSTANDINGS.


You do *not* understand how to do it.  But I have to say that statements 
like your paragraph above are actually very good for my health, because 
their humor content is right up there in the top ten, along with Eddie 
Izzard's Death Star Canteen sketch and Stephen Colbert at the 2006 White 
House Correspondents' Association Dinner.


So long as the general response to the complex systems problem is not 
"This could be a serious issue, let's put our heads together to 
investigate it", but "My gut feeling is that this is just not going to 
be a problem", or "Quit rocking the boat!", you can bet that nobody 
really wants to ask any questions about whether the approaches are 
correct, they just want to be left alone to get on with their 
approaches.  History, I think, will have some interesting things to say 
about all this.


Good luck anyway.



Richard Loosemore


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


RE: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-28 Thread Ed Porter
Wannabe,

Your qualification is totally appropriate.  

When I say such systems are not even close to human brain level, I mean not
close to human level at the types of things at which human brains currently
outperform computers.  Obviously there are many ways in which even current
PCs outperform humans by thousands or millions of times.  But there are still
many tasks at which human minds greatly outperform computers, and that is
where a lot of the focus in AGI is.

By a human-level AGI, I mean a computer that can do almost all the things a
human brain does as fast as a human.  But such hardware will probably also be
capable of performing many of the things a PC can already do much faster than
a human --- and of doing them many times faster than a PC.  A machine that can
do all the types of things a human does as fast as a human, that can also do
many tasks millions of times faster than a human, and that can mix and match,
blend, and interface between these two different types of processes rapidly
will be extremely powerful.

For example, such a system could scan text at very high speeds (millions of
pages a second); where it found combinations of words that looked
interesting, it could slow down and read them at a fast skim (10s to 1000s of
times faster than a human); and it could then read the texts from the skim
that seemed interesting at roughly human speed.  It would thus be able to
find and understand relevant information in large volumes of text thousands
of times faster than a human.  And of course, once such texts had been read
they would be indexed and be much more rapidly available for future access
when relevant.
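
A very rough sketch of that kind of tiered reading is given below; the staging is the only point, and the function names, keyword filters, and scoring used here are hypothetical placeholders rather than part of any actual AGI design.

# Hypothetical three-stage reading pipeline: a very cheap scan discards most
# pages, a skim scores the survivors, and only the best few get a full "read".
from typing import Iterable, List, Tuple

KEYWORDS = {"spreading", "activation", "bandwidth"}   # placeholder filter terms

def quick_scan(page: str) -> bool:
    """Cheapest tier: keep a page only if any keyword appears at all."""
    text = page.lower()
    return any(k in text for k in KEYWORDS)

def skim_score(page: str) -> float:
    """Middle tier: crude relevance score based on keyword density."""
    text = page.lower()
    return sum(text.count(k) for k in KEYWORDS) / (len(text) + 1)

def full_read(page: str) -> str:
    """Slowest tier: stands in for careful, roughly human-speed reading."""
    return f"summary({page[:40]}...)"

def tiered_read(pages: Iterable[str], top_k: int = 10) -> List[Tuple[float, str]]:
    survivors = [p for p in pages if quick_scan(p)]           # high-speed scan
    ranked = sorted(survivors, key=skim_score, reverse=True)  # fast skim
    return [(skim_score(p), full_read(p)) for p in ranked[:top_k]]  # careful read

print(tiered_read(["Spreading activation needs memory bandwidth above all.",
                   "Unrelated text about cooking."], top_k=1))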


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Saturday, June 28, 2008 3:36 PM
To: agi@v2.listbox.com
Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE
IN AGI

There was one little line in this post that struck me, and I wanted to  
comment:


Quoting Ed Porter [EMAIL PROTECTED]:

 With regard to performance, such systems are not even close to human brain
 level, but they should allow some interesting proofs of concept.

Mentioning some huge system.  My thought was, wow, that just sounds
sad.  But I guess it depends on what you mean by performance.  One
thing in which computers now way exceed brain performance is the
reliability of their operations.  Sure, it's difficult to say what a
basic brain operation is (is a synapse reaction equivalent to a
multiply-accumulate?), but one thing that can be said about them is
that they aren't very reliable or precise.  They have a sort of a
range of operation, where they kind of will act in a certain way given
an input.  It's got to be really hard to get valuable behavior out of
this kind of a system, so the brain uses massive redundancy.  Now, it
might well be that in addition to just the reliability, this kind of a
system gets other value from it, like a nice probabilistic operation
that has additional value in itself.  Maybe the inherent
unpredictability is part of what we mean by intelligence.  Personally
I suspect that to be true.  But this all stands in great contrast to
how computers naturally work--obeying information processing
instructions with absolute precision (possibly error-free, depending
on how you look at it).

There is a sort of mismatch between good human brain behavior and good  
computer behavior.  It seems like the AGI project is about making a  
computer act like a good brain.  We can focus on how to get a computer  
to act in ways that are ideal for a brain to act intelligently.  And  
by this I mean something like having some basic operations and systems  
that can be used in all situations.  But I think it might also be good  
to try to think of it in terms of looking for the best ways for a  
computer to be intelligent.  I'm a patchwork AGI kind of guy, and  
while surely there must be some general mechanism, it seems to make  
sense that there could also be many very finely crafted modules.   
Unfortunately, if we are restricting modules to human written modules,  
then that's the basic problem.  A basic function of an AGI should be  
that it can write programs for itself to handle tasks.  Or I guess for  
other systems.  But if it can do that, then these programs don't need  
such huge amounts of computer power.
andi






---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-28 Thread Ben Goertzel
On Sat, Jun 28, 2008 at 4:13 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Ed Porter wrote:

 I do not claim the software architecture for AGI has been totally solved.
 But I believe that enough good AGI approaches exist (and I think Novamente
 is one) that when powerful hardware is available to more people we will be
 able to relatively quickly get systems up and running that demonstrate the
 parts of the problems we have solved.  And that will provide valuable
 insights and test beds for solving the parts of the problem that we have not
 yet solved.

 You are not getting my point.  What you just said was EXACTLY what was said
 in 1970, 1971, 1972, 1973 ..2003, 2004, 2005, 2006, 2007 ..

 And every time it was said, the same justification for the claim was given:
  "I just have this belief that it will work."


It is not the case that the reason I believe Novamente/OpenCog can work for AGI
is just a belief

Nor, however, is the reason an argument that can be summarized in an email.

I'm setting out on a 2-week vacation on Monday (June 30 - July 13), on
which I'll
be pretty much without email (in the wilds of Alaska ;-) ... so it's a bad time
for me to get involved in deep discussions

But I hope to release some docs on OpenCog Prime later this summer, which
will disclose a bit more of my reasons for thinking the approach can succeed.

Ed has seen much of this material before, but most others on this list
have not...

There is a broad range of qualities-of-justification, between a mere belief
on the one hand, and a rigorous proof on the other.

-- Ben G


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-28 Thread Brad Paulsen

Richard and Ed,

Insanity is doing the same thing over and over again and expecting different 
results. - Albert Einstein


Prelude to insanity: unintentionally doing the same thing over and over again 
and getting the same results. - Me


Cheers,

Brad

Richard Loosemore wrote:

Ed Porter wrote:

I do not claim the software architecture for AGI has been totally solved.
But I believe that enough good AGI approaches exist (and I think Novamente
is one) that when powerful hardware is available to more people we will be
able to relatively quickly get systems up and running that demonstrate the
parts of the problems we have solved.  And that will provide valuable
insights and test beds for solving the parts of the problem that we have not
yet solved.


You are not getting my point.  What you just said was EXACTLY what was 
said in 1970, 1971, 1972, 1973 ..2003, 2004, 2005, 2006, 2007 ..


And every time it was said, the same justification for the claim was 
given:  "I just have this belief that it will work."


Plus ca change, plus c'est la meme fubar.






With regard to your statement "the problem is understanding HOW TO DO IT"
---
WE DO UNDERSTAND HOW TO DO IT --- NOT ALL OF IT --- AND NOT HOW TO MAKE IT
ALL WORK TOGETHER WELL AUTOMATICALLY --- BUT --- GIVEN THE TYPE OF HARDWARE
EXPECTED TO COST LESS THAN $3M IN 6 YEARS --- WE KNOW HOW TO BUILD MUCH OF
IT --- ENOUGH THAT WE COULD PROVIDE EXTREMELY VALUABLE COMPUTERS WITH OUR
CURRENT UNDERSTANDINGS.


You do *not* understand how to do it.  But I have to say that statements 
like your paragraph above are actually very good for my health, because 
their humor content is right up there in the top ten, along with Eddie 
Izzard's Death Star Canteen sketch and Stephen Colbert at the 2006 White 
House Correspondents' Association Dinner.


So long as the general response to the complex systems problem is not 
"This could be a serious issue, let's put our heads together to 
investigate it", but "My gut feeling is that this is just not going to 
be a problem", or "Quit rocking the boat!", you can bet that nobody 
really wants to ask any questions about whether the approaches are 
correct, they just want to be left alone to get on with their 
approaches.  History, I think, will have some interesting things to say 
about all this.


Good luck anyway.



Richard Loosemore






---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-28 Thread Ben Goertzel
Richard,

 So long as the general response to the complex systems problem is not "This
 could be a serious issue, let's put our heads together to investigate it",
 but "My gut feeling is that this is just not going to be a problem", or
 "Quit rocking the boat!", you can bet that nobody really wants to ask any
 questions about whether the approaches are correct, they just want to be
 left alone to get on with their approaches.

Both Ed Porter and I have given serious thought to the "complex systems
problem", as you call it, and have discussed it with you at length.  I
also read the
only formal paper you sent me dealing with it (albeit somewhat
indirectly) and also
your various online discourses on the topic.

Ed and I don't agree with you on the topic, but not because of lack of thinking
or attention.

Your argument FOR the existence of a complex systems problem with Novamente
or OpenCog, is not any more rigorous than our argument AGAINST it.

Similarly, I have no rigorous argument that Novamente and OpenCog won't fail
because of the lack of a soul.   I can't prove this formally -- and
even if I did, those who
believe a soul is necessary for AI could always dispute the
mathematical assumptions
of my proof.  And those who do claim a soul is necessary have no
rigorous arguments
in their favor, except ones based transparently on assumptions I reject...

And so it goes...

Ben


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-27 Thread Richard Loosemore


At a quick glance I would say you could do it cheaper by building it 
yourself rather than buying Dell servers (cf. the MicroWulf project that was 
discussed before: http://www.clustermonkey.net//content/view/211/33/).


Secondly:  if what you need to get done is spreading activation (which 
implies massive parallelism) you would probably be better off with a 
Celoxica system than COTS servers:  celoxica.com.  Hugo de Garis has a 
good deal of experience with using this hardware:  it is FPGA based, so 
the potential parallelism is huge.


Third:  the problem, in any case, is not the hardware.  AI researchers 
have been saying "if only we had better hardware, we could really get these 
algorithms to sing, and THEN we will have a real AI!" since the f***ing 
1970s, at least.  There is nothing on this earth more stupid than 
watching people repeat the same mistakes over and over again, for 
decades in a row.


Pardon my fury, but the problem is understanding HOW TO DO IT, and HOW 
TO BUILD THE TOOLS TO DO IT, not having expensive hardware.  So long as 
some people on this list repeat this mistake, this list will degenerate 
even further into obsolescence.


Frankly, looking at recent posts, I think this list is already dead.




Richard Loosemore






Ed Porter wrote:

WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

On Wednesday, June 25, US East Coast time, I had an interesting phone
conversation with Dave Hart, where we discussed just how much hardware you
could get for the current buck, for the amounts of money AGI research teams
using OpenCog (THE LUCKY ONES) might have available to them.

After our talk I checked out the cost of current servers at Dell (the
easiest place I knew of to check out prices). I found hardware, and
particularly memory, was somewhat cheaper than Dave and I had thought.  But
it is still sufficiently expensive that moderately funded projects are
going to be greatly limited by the processor-memory and inter-processor
bandwidth as to how much spreading activation and inferencing they will be
capable of doing.

A RACK MOUNTABLE SERVER WITH 4 QUAD-CORE XEONS, WITH EACH PROCESSOR HAVING
8MB OF CACHE, AND THE WHOLE SERVER HAVING 128GBYTES OF RAM AND FOUR 300GBYTE
HARD DRIVES WAS UNDER $30K.  The memory stayed roughly constant in price per
GByte going from 32 to 64 to 128 GBytes.  Of course you would probably have
to pay several extra grand for software and warranties.  SO LET US SAY THE
PRICE IS $33K PER SERVER.

A 24 port 20Gbit/sec InfiniBand switch with cables and one 20Gbit/sec
adapter card for each of 24 servers would be about $52K.

SO A TOTAL SYSTEM WITH 24 SERVERS, 96 PROCESSORS, 384 CORES, 768MBYTE OF L2
CACHE, 3 TBYTES OF RAM, AND 28.8TBYTES OF DISK, AND THE 24 PORT 20GBIT/SEC
SWITCH WOULD BE ROUGHLY $850 GRAND.  


That doesn't include air conditioning.  I am guessing each server probably
draws about 400 watts, so 24 of them would be about 9600 watts--- about the
amount of heat of ten hair dryers running in one room, which obviously would
require some cooling, but which I would not think would be that expensive to
handle.

With regard to performance, such systems are not even close to human brain
level, but they should allow some interesting proofs of concept.

Performance
---
AI spreading activation often involves a fair amount of non-locality of
memory.  Unfortunately there is a real penalty for accessing RAM randomly.
Without interleaving, one article I read recently implied about 50ns was a
short latency for a memory access.  So we will assume 20M random RAM accesses
(randomRamOpps) per second per channel, and that an average activation will
take two, a read and a write, so roughly 10M activations/sec per memory
channel.  


Matt Mahoney has pointed out that spreading activation can be modeled by
matrix methods that let you access RAM with much higher sequential memory
accessing rates.  He claimed he could process about a gigabyte of matrix
data a second.  If one assumes each element in the matrix is 8 bytes, that
would be the equivalent of doing 125M activations a second, which is roughly
12.5 times faster (if just 2 bytes, it would be 50 times faster, or 500M
activations/sec).

If one assumes each of the 4 cores of each of the 4 processors could handle a
matrix at 1GByte/sec, and each element in the matrix was just 2 bytes, that
would be 8G 2-byte matrix activations/sec/server, and roughly 192G matrix
activations/sec/system.  It is not clear how well this could be made to work
with the type of interconnectivity of an AGI.  It is clear there would be
some penalty for sparseness, perhaps a large one.  If one used run-length
encoding in the matrix, which is read by rows, then a set of columns whose
values could fit in cache could be loaded into cache, and the portions of all
the rows relating to them could be read sequentially.  Once all the portions
of all the rows relating to the sub-set of columns had been processed, the
process could be repeated for another
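
A minimal sketch of the column-blocking idea described above: process the sparse activation matrix one block of columns at a time, so that the source activations for the block stay cache-resident while the stored entries are streamed sequentially. This uses a standard compressed-sparse-column layout rather than literal run-length encoding, and the sizes, density, and names here are illustrative assumptions, not part of the design being discussed.

# Sketch of column-blocked sparse spreading activation.
# The weight matrix is stored by column (CSC); columns are processed in blocks
# small enough that the corresponding source activations fit in cache, while
# each block's stored entries are streamed through sequentially.
import numpy as np
from scipy.sparse import random as sparse_random

def blocked_spread(weights_csc, activations, block_cols=4096):
    """One step of spreading activation, processed in column blocks."""
    n_rows, n_cols = weights_csc.shape
    out = np.zeros(n_rows)
    for start in range(0, n_cols, block_cols):
        stop = min(start + block_cols, n_cols)
        block = weights_csc[:, start:stop]       # sequential read of this block
        out += block @ activations[start:stop]   # block's sources are cache-sized
    return out

rng = np.random.default_rng(0)
W = sparse_random(10_000, 10_000, density=1e-3, format="csc", random_state=rng)
a = rng.random(10_000)
# The blocked result matches the unblocked sparse matrix-vector product.
assert np.allclose(blocked_spread(W, a), W @ a)
print("blocked spreading activation OK")

How much the blocking actually buys on real hardware would depend on how sparse and how clustered the connectivity is; the sketch only shows the traversal order, not a performance claim.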