Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
Sorry about the late reply.

[snip: some stuff already sorted out]

2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
 On Tue, Jul 1, 2008 at 2:02 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:

 If internals are programmed by humans, why do you need an automatic
 system to assess them? It would be useful if you needed to construct
 and test some kind of combination/setting automatically, but not if
 you just test manually-programmed systems. How does the assessment
 platform help in improving/accelerating the research?


 Because to be interesting the human-specified programs need to be
 autogenous, in Josh Storrs Hall's terminology, which means
 self-building: capable of altering the stuff they are made of, in this
 case the machine-code equivalent. So you need the human to assess the
 improvements the system makes, for whatever purpose the human wants
 the system to perform.


 Altering the stuff they are made of is instrumental to achieving the
 goal, and should be performed where necessary, but it doesn't happen,
 for example, with individual brains.

I think it happens at the level of neural structures. I.e. I think
neural structures control the development of other neural structures.

 (I was planning to do the next
 blog post on this theme, maybe tomorrow.) Do you mean to create
 a population of altered initial designs and somehow select from them (I
 hope not, it is orthogonal to what modification is for in the first
 place)? Otherwise, why do you still need automated testing? Could you
 present a more detailed use case?


I'll try and give a fuller explanation later on.


 This means he needs to use a bunch more resources to get a singular
 useful system. Also the system might not do what he wants, but I don't
 think he minds about that.

 I'm allowing humans to design everything, just allowing the very low
 level to vary. Is this clearer?

 What do you mean by varying low level, especially in human-designed systems?

 The machine code the program is written in. Or, in a Java VM, the Java
 bytecode.


 This still didn't make this point clearer. You can't vary the
 semantics of low-level elements from which software is built, and if
 you don't modify the semantics, any other modification is superficial
 and irrelevant. If it's not quite 'software' that you are running, and
 it is able to survive the modification of lower level, using the terms
 like 'machine code' and 'software' is misleading. And in any case,
 it's not clear what this modification of the low level achieves. You can't
 extract work from obfuscation and tinkering; the optimization comes
 from the lawful and consistent pressure in the same direction.


Okay, let us clear things up. There are two things that need to be
designed: a computer architecture or virtual machine, and the programs
that form the initial set of programs within the system. Let us call the
internal programs vmprograms to avoid confusion. The vmprograms should
do all the heavy lifting (reasoning, creating new programs); this is
where the lawful and consistent pressure would come from.

It is the source code of the vmprograms that needs to be changeable.

However, the pressure will have to be somewhat experimental to be
powerful: you don't know what bugs a new program will have (if you are
doing a non-tight proof search through the space of programs). So the
point of the VM is to provide a safety net. If an experiment goes
awry, the VM should allow each program to limit the bugged vmprogram's
ability to affect it, and eventually have it removed and its resources
reclaimed.

Here is a toy scenario where the system needs this ability. *Note it
is not anything like a full AI, but it illustrates a facet of
something a full AI needs, IMO.*

Consider a system trying to solve a task, e.g. navigate a maze, while
a number of different people out there give helpful hints on how to
solve it. These hints come in the form of patches to the vmprograms,
e.g. changing the representation to 6-dimensional, or supplying
another patch language that allows better patches. So the system
would make a copy of the part to be patched and then patch the copy.
Now you could add a patch-evaluation module to see which patch works
best, but what happens when the module that implements that
evaluation itself wants to be patched? My solution to the problem is
to let the patched and non-patched versions compete in the ad hoc
economic arena, and see which one wins.
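
(To make the arena concrete, here is a minimal Python sketch of the kind of
competition I have in mind. Everything in it, the scalar credit balance, the
noisy supervisor reward, the bid and payout numbers, is an illustrative
assumption, not a worked-out design.)

import random

# Toy model: a patched and an unpatched vmprogram compete for credit.
# Illustrative assumptions: each program holds a scalar credit balance,
# the human supervisor's feedback is a noisy number in [0, 1], and a
# program that cannot afford the bid simply sits out.

class VMProgram:
    def __init__(self, name, quality, credit=100.0):
        self.name = name          # label only
        self.quality = quality    # stand-in for how well it actually performs
        self.credit = credit

    def supervisor_reward(self):
        # Noisy reward centred on the program's real quality.
        return max(0.0, min(1.0, random.gauss(self.quality, 0.1)))

def compete(a, b, rounds=200, bid=1.0, payout=10.0):
    for _ in range(rounds):
        for prog in (a, b):
            if prog.credit < bid:
                continue                          # can't afford to run
            prog.credit -= bid                    # pay for the chance to run
            prog.credit += payout * prog.supervisor_reward()
    return a if a.credit >= b.credit else b

original = VMProgram("A",  quality=0.6)
patched  = VMProgram("A'", quality=0.5)           # the patch introduced a bug
print("surviving version:", compete(original, patched).name)

Run repeatedly, the buggier version loses credit relative to the other and
is the one whose resources would be reclaimed.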

Does this clear things up?

 Will Pearson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Mike Tintner

Terren,

This is going too far. We can reconstruct to a considerable extent how
humans think about problems - their conscious thoughts. Artists have been
doing this reasonably well for hundreds of years. Science has so far avoided
this, just as it avoided studying first the mind, with behaviourism, and then
consciousness. The main reason cognitive science and psychology have
avoided stream-of-thought studies (apart from v. odd scientists like Jerome
Singer) is that conscious thought about problems is v. different from the
highly ordered, rational thinking of programmed computers which cog. sci.
uses as its basic paradigm. In fact, human thinking is fundamentally
different - the conscious self has major difficulty concentrating on any
problem for any length of time - controlling the mind for more than a
relatively few seconds (as religious and humanistic thinkers have been
telling us for thousands of years). Computers of course have perfect
concentration forever. But that's because computers haven't had to deal with
the type of problems that we do - the problematic problems where you don't,
basically, know the answer, or how to find the answer, before you start.


For this kind of problem - which is actually what differentiates AGI from 
narrow AI - human thinking, creative as opposed to rational, stumbling, 
scatty, and freely associative, is actually IDEAL, for all its 
imperfections.


Yes, even if we extend our model of intelligence to include creative as well 
as rational thinking, it will still be an impoverished model, which may not 
include embodied thinking and perhaps other dimensions. But hey, we'll get 
there bit by bit (just not, as we both agree, all at once in one five-year 
leap).


Terren: My points about the pitfalls of theorizing about intelligence apply 
to any and all humans who would attempt it - meaning, it's not necessary to 
characterize AI folks in one way or another. There are any number of aspects 
of intelligence we could highlight that pose a challenge to orthodox models 
of intelligence, but the bigger point is that there are fundamental limits 
to the ability of an intelligence to observe itself, in exactly the same way 
that an eye cannot see itself.


Consciousness and intelligence are present in every possible act of 
contemplation, so it is impossible to gain a vantage point of intelligence 
from outside of it. And that's exactly what we pretend to do when we 
conceptualize it within an artificial construct. This is the principal 
conceit of AI, that we can understand intelligence in an objective way, 
and model it well enough to reproduce by design.


Terren

--- On Tue, 7/1/08, Mike Tintner [EMAIL PROTECTED] wrote:


Terren: It's to make the larger point that we may be so immersed in our own
conceptualizations of intelligence - particularly because we live in our
models and draw on our own experience and introspection to elaborate them -
that we may have tunnel vision about the possibilities for better or
different models. Or, we may take for granted huge swaths of what makes us
so smart, because it's so familiar, or below the radar of our conscious
awareness, that it doesn't even occur to us to reflect on it.

No 2 is more relevant - AI-ers don't seem to introspect much. It's an irony
that the way AI-ers think when creating a program bears v. little
resemblance to the way programmed computers think. (Matt started to broach
this when he talked a while back of computer programming as an art.) But
AI-ers seem to have no interest in the discrepancy - which again is ironic,
because analysing it would surely help them with their programming, as well
as with the small matter of understanding how general intelligence actually
works.

In fact - I just looked - there is a longstanding field on the psychology of
programming. But it seems to share the deficiency of psychology and
cognitive science generally, which is: no study of the
stream-of-conscious-thought, especially conscious problem-solving. The only
AI figure I know who did take some interest here was Herbert Simon, who
helped establish the use of verbal protocols.






Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Vladimir Nesov
On Wed, Jul 2, 2008 at 2:48 PM, William Pearson [EMAIL PROTECTED] wrote:

 Okay, let us clear things up. There are two things that need to be
 designed: a computer architecture or virtual machine, and the programs
 that form the initial set of programs within the system. Let us call the
 internal programs vmprograms to avoid confusion. The vmprograms should
 do all the heavy lifting (reasoning, creating new programs); this is
 where the lawful and consistent pressure would come from.

 It is the source code of the vmprograms that needs to be changeable.

 However, the pressure will have to be somewhat experimental to be
 powerful: you don't know what bugs a new program will have (if you are
 doing a non-tight proof search through the space of programs). So the
 point of the VM is to provide a safety net. If an experiment goes
 awry, the VM should allow each program to limit the bugged vmprogram's
 ability to affect it, and eventually have it removed and its resources
 reclaimed.

 Here is a toy scenario where the system needs this ability. *Note it
 is not anything like a full AI, but it illustrates a facet of
 something a full AI needs, IMO.*

 Consider a system trying to solve a task, e.g. navigate a maze, while
 a number of different people out there give helpful hints on how to
 solve it. These hints come in the form of patches to the vmprograms,
 e.g. changing the representation to 6-dimensional, or supplying
 another patch language that allows better patches. So the system
 would make a copy of the part to be patched and then patch the copy.
 Now you could add a patch-evaluation module to see which patch works
 best, but what happens when the module that implements that
 evaluation itself wants to be patched? My solution to the problem is
 to let the patched and non-patched versions compete in the ad hoc
 economic arena, and see which one wins.


What are the criteria that the VM applies to vmprograms? If the VM just
short-circuits the economic pressure of agents on one another, it in
itself doesn't specify the direction of the search. The human economy
works to efficiently satisfy the goals of human beings, who already
have their moral complexity. It propagates the decisions that
customers make, and fuels the allocation of resources based on these
decisions. The efficiency of the economy lies in how efficiently it
responds to information about human goals. If your VM just feeds the
decisions back on themselves, what stops the economy from focusing on
efficiently doing nothing?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-02 Thread Richard Loosemore

John G. Rose wrote:

[snip]
Building a complexity-based intelligence much different from the human brain
design, but still basically dependent on complexity, is not impossible, just
formidable. Working with software systems that have designed complexity and
getting predicted emergence, in this case cognition - well, that is
something that takes special talent. We have tools now that nature and
evolution didn't have. We understand things through collective knowledge
accumulated over time. It can be more than trial and error. And the existing
trial and error can be narrowed down.


Ah, but now you are stating the Standard Reply, and what you have to 
understand is that the Standard Reply boils down to this: "We are so 
smart that we will figure out a way around this limitation, without having 
to do anything so crass as just copying the human design."


The problem is that if you apply that logic to well-known cases of 
complex systems, it amounts to nothing more than baseless, stubborn 
optimism in the face of any intractable problem.  It is this baseless 
stubborn optimism that I am trying to bring to everyone's attention.


In all my efforts to get this issue onto people's mental agenda, my goal 
is to make them realize that they would NEVER say such a silly thing 
about the vast majority of complex systems (nobody has any idea how to 
build an analytical theory of the relationship between the local rules of 
the Game of Life and the patterns that emerge from them, for example, and 
that is one of the most trivial examples of a complex system that I can 
think of!).  But whereas most mathematicians would refuse to waste any 
time at all trying to make a global-to-local theory for complex systems in 
which there is really vicious self-organisation at work, AI researchers 
blithely walk in and say "We reckon we can just use our smarts and figure 
out some heuristics to get around it."
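
(For concreteness: the complete local rule of the Game of Life fits in a few
lines of Python, as in the sketch below; the glider example is only an
illustration of how small the local rule is, yet no one has an analytical
global-to-local theory of the patterns that rule produces.)

from collections import Counter

def step(live):
    """live: set of (x, y) cells that are alive; returns the next generation."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))   # after 4 steps the glider has moved one cell diagonally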


I'm just trying to get people to do a reality check.

Oh, and meanwhile (when I am not firing off occasional broadsides on 
this list) I *am* working on a solution.





Richard Loosemore




Re: [agi] the uncomputable

2008-07-02 Thread Abram Demski
So yes, I think there are perfectly fine, rather simple
definitions for computing machines that can (it seems
like) perform calculations that turing machines cannot.
It should really be noted that quantum computers fall
into this class.

This is very interesting. Previously, I had heard (but not from a
definitive source) that quantum computers could compute in principle
only what a Turing machine could compute, but could do it much more
efficiently (something like the square root of the effort a Turing
machine would need, at least for some tasks). Can you cite any source
on this?
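
(For what it is worth, the square-root figure presumably refers to Grover's
unstructured search algorithm, a standard result rather than anything
specific to this thread: the set of computable functions is unchanged, only
the query cost drops.)

% Unstructured search over N items (standard Grover bound, for reference):
T_{\text{classical}}(N) = \Theta(N), \qquad T_{\text{Grover}}(N) = O\left(\sqrt{N}\right)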

But I should emphasize that what I am really interested in is
computable approximation of uncomputable things. My stance is that an
AGI should be able to reason about uncomputable concepts in a coherent
manner (like we can), not that it needs to be able to actually compute
them (which we can't).

On Tue, Jul 1, 2008 at 2:35 AM, Linas Vepstas [EMAIL PROTECTED] wrote:
 2008/6/16 Abram Demski [EMAIL PROTECTED]:
 I previously posted here claiming that the human mind (and therefore
 an ideal AGI) entertains uncomputable models, counter to the
 AIXI/Solomonoff model. There was little enthusiasm about this idea. :)

 I missed your earlier posts. However, I believe that there
 are models of computation that can compute things that turing
 machines cannot, and that this is not arcane, just not widely
 known or studied.  Here is a quick sketch:

 Topological finite automata, or geometric finite automata,
 (of which the quantum finite automata is a special case)
 generalize the notion of non-deterministic finite automata
 by replacing its powerset of states with a general topological
 or geometric space (complex projective space in the quantum
 case). It is important to note that these general spaces are
 in general uncountable (have the cardinality of the continuum).

 It is well known that the languages accepted by quantum
 finite automata are not regular languages, they are bigger
 and more complex in some ways. I am not sure what is
 known about the languages accepted by quantum push-down
 automata, but intuitively these are clearly different (and bigger)
 than the class of context-free languages.

 I believe the concepts of topological finite automata extend
 just fine to a generalization of turing machines, but I also
 believe this is a poorly-explored area of mathematics.
 I believe such machines can compute things that turing
 machines can't ..  this should not be a surprise, since,
 after all, these systems have, in general, an uncountably
 infinite number of internal states (cardinality of the
 continuum!), and (as a side effect of the definition),
 perform infinite-precision addition and multiplication
 in finite time.

 So yes, I think there are perfectly fine, rather simple
 definitions for computing machines that can (it seems
 like) perform calculations that turing machines cannot.
 It should really be noted that quantum computers fall
 into this class.

 Considerably more confusing is the relationship of
 such machines (and the languages they accept) to
 lambda calculus, or first-order (or higher-order) logic.
 This is where the rubber hits the road, and even for
 the simplest examples, the systems are poorly
 understood, or not even studied.  So, yeah, I think
 there's plenty of room for the uncomputable in
 some rather simple mathematical models of generalized
 computation.

 --linas
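
(A toy sketch, in Python, of the quantum special case Linas describes above:
the state is a unit vector in C^2, i.e. a point of a continuous state space,
each input symbol applies a unitary, and acceptance is the probability of
measuring the accept state. The single-symbol alphabet and the rotation angle
are illustrative assumptions only.)

import numpy as np

theta = np.pi / 4                        # rotation applied per input symbol 'a'
U = {"a": np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])}

def accept_probability(word, start=(1.0, 0.0), accept_index=1):
    state = np.array(start)
    for symbol in word:
        state = U[symbol] @ state        # unitary evolution of the state vector
    return abs(state[accept_index]) ** 2 # probability of measuring "accept"

for w in ["", "a", "aa", "aaa", "aaaa"]:
    print(repr(w), round(accept_probability(w), 3))
# Acceptance probabilities run through 0, 0.5, 1, 0.5, 0: the automaton's
# configuration ranges over a continuum of unit vectors rather than a finite
# set of states, which is the point about geometric/topological automata above.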




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Terren Suydam

Mike, 

 This is going too far. We can reconstruct to a considerable
 extent how  humans think about problems - their conscious thoughts.

Why is it going too far?  I agree with you that we can reconstruct thinking, to 
a point. I notice you didn't say we can completely reconstruct how humans 
think about problems. Why not?

We have two primary means for understanding thought, and both are deeply flawed:

1. Introspection. Introspection allows us to analyze our mental life in a 
reflective way. This is possible because we are able to construct mental models 
of our mental models. There are three flaws with introspection. The first, 
least serious flaw is that we only have access to that which is present in our 
conscious awareness. We cannot introspect about unconscious processes, by 
definition.

This is a less serious objection because it's possible in practice to become 
conscious of phenomena that were previously unconscious, by developing our 
meta-mental-models. The question here becomes, is there any reason in principle 
that we cannot become conscious of *all* mental processes?

The second flaw is that, because introspection relies on the meta-models we 
need to make sense of our internal, mental life, the possibility is always 
present that our meta-models themselves are flawed. Worse, we have no way of 
knowing if they are wrong, because we often unconsciously, unwittingly deny 
evidence contrary to our conception of our own cognition, particularly when it 
runs counter to a positive account of our self-image.

Harvard's Project Implicit experiment 
(https://implicit.harvard.edu/implicit/) is a great way to demonstrate how we 
remain ignorant of deep, unconscious biases. Another example is how little we 
understand the contribution of emotion to our decision-making. Joseph LeDoux 
and others have shown fairly convincingly that emotion is a crucial part of 
human cognition, but most of us (particularly us men) deny the influence of 
emotion on our decision making.

The final flaw is the most serious. It says there is a fundamental limit to 
what introspection has access to. This is the "an eye cannot see itself" 
objection. "But I can see my eyes in the mirror," says the devil's advocate. Of 
course, a mirror lets us observe a reflected version of our eye, and this is 
what introspection is. But we cannot see inside our own eye, directly - it's a 
fundamental limitation of any observational apparatus. Likewise, we cannot see 
inside the very act of model-simulation that enables introspection. 
Introspection relies on meta-models, or models about models, which are 
activated/simulated *after the fact*. We might observe ourselves in the act of 
introspection, but that is nothing but a meta-meta-model. Each introspectional 
act by necessity is one step (at least) removed from the direct, in-the-present 
flow of cognition. This means that we can never observe the cognitive machinery 
that enables the act of introspection itself.

And if you don't believe that introspection relies on cognitive machinery 
(maybe you're a dualist, but then why are you on an AI list? :-), ask yourself 
why we can't introspect about ourselves before a certain point in our young 
lives. It relies on a sufficiently sophisticated toolset that requires a 
certain amount of development before it is even possible.

2. Theory. Our theories of cognition are another path to understanding, and 
much of theory is directly or indirectly informed by introspection. When 
introspection fails (as in language acquisition), we rely completely on theory. 
The flaw with theory should be obvious. We have no direct way of testing 
theories of cognition, since we don't understand the connection between the 
mental and the physical. At best, we can use clever indirect means for 
generating evidence, and we usually have to accept the limits of reliability of 
subjective reports. 

Terren

--- On Wed, 7/2/08, Mike Tintner [EMAIL PROTECTED] wrote:
 Terren,
 
 This is going too far. We can reconstruct to a considerable
 extent how 
 humans think about problems - their conscious thoughts.
 Artists have been 
 doing this reasonably well for hundreds of years. Science
 has so far avoided 
 this, just as it avoided studying first the mind, with
 behaviourism, and then 
 consciousness. The main reason cognitive science and
 psychology have 
 avoided stream-of-thought studies (apart from v. odd
 scientists like Jerome 
 Singer) is that conscious thought about problems is v.
 different from the 
 highly ordered, rational, thinking of programmed computers
 which cog. sci. 
 uses as its basic paradigm. In fact, human thinking is
 fundamentally 
 different - the conscious self has major difficulty
 concentrating on any 
 problem for any length of time -  controlling the mind for
 more than a 
 relatively few seconds, (as religious and humanistic
 thinkers have been 
 telling us for thousands of years). Computers of course
 have perfect 
 concentration forever. 

RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-02 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 
 Ah, but now you are stating the Standard Reply, and what you have to
 understand is that the Standard Reply boils down to this: "We are so
 smart that we will figure out a way around this limitation, without having
 to do anything so crass as just copying the human design."
 


Well, another reply could be: "OK everyone, AGI is impossible, so you can go
home now." That would work really well. Into the future, more and more
bodies (and brains) will be thrown at this no matter what. Satellite
technologies make it all more attractive and worthwhile and make it appear
that progress is being made, and it is. If everything else is figured out
and engineered and the last thing left is a CSP, that is still progress, EVEN
if some of the components need to be totally redesigned. Remember, even basic
stuff like, say, a primitive distributed graph software library is still in
the early stages of being built for AGI, amongst many other things. There are
protocols, standards, all kinds of stuff needed that is not yet there,
especially experience.

 The problem is that if you apply that logic to well-known cases of
 complex systems, it amounts to nothing more than baseless, stubborn
 optimism in the face of any intractable problem.  It is this baseless
 stubborn optimism that I am trying to bring to everyone's attention.
 

Sure. Yet how many resources are thrown at predicting the weather, and it is
usually still WRONG!! The utility of accurate prediction is so high that even
useless attempts have value, due to spin-off technologies and incidentals, and
there is psychological value.


 In all my efforts to get this issue onto people's mental agenda, my goal
 is to make them realize that they would NEVER say such a silly thing
 about the vast majority of complex systems (nobody has any idea how to
 build an analytical theory of the relationship between the local rules of
 the Game of Life and the patterns that emerge from them, for example, and
 that is one of the most trivial examples of a complex system that I can
 think of!).  But whereas most mathematicians would refuse to waste any
 time at all trying to make a global-to-local theory for complex systems in
 which there is really vicious self-organisation at work, AI researchers
 blithely walk in and say "We reckon we can just use our smarts and figure
 out some heuristics to get around it."
 

That's what makes engineers engineers. If it is not conquerable, it is
workaroundable. Still, though, I don't know how much proof there is that there
is a CSP. The CB example you gave reminds me of a dynamical system. Proving
that the CSP exists may turn heads more.


 I'm just trying to get people to do a reality check.
 
 Oh, and meanwhile (when I am not firing off occasional broadsides on
 this list) I *am* working on a solution.
 


Yes, and your solution attempt is :) Please feel free to present ideas to
the list for constructive criticism :)

John





Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Mike Tintner

Terren,

Obviously, as I indicated, I'm not suggesting that we can easily construct a 
total model of human cognition. But it ain't that hard to reconstruct 
reasonable and highly informative, if imperfect, models of how humans 
consciously think about problems. As I said, artists have been doing a 
reasonable job for centuries. Shakespeare, who really started the inner 
monologue, was arguably the first scientist of consciousness. The kind of 
standard argument you give below - the eye can't look at itself - is 
actually nonsense. Your conscious, inner thoughts are not that different 
from your public, recordable dialogue. (Any decent transcript of thought, 
BTW, will give a v. good indication of the emotions involved).


We're not v. far apart here - we agree about the many dimensions of 
cognition, most of which are probably NOT directly accessible to the 
conscious mind. I'm just insisting on the massive importance of studying 
conscious thought. It was, as Crick said, "ridiculous" for science not to 
study consciousness (it had a lot of rubbish arguments for not doing that, 
then); it is equally ridiculous, and in fact scientifically obscene, not to 
study conscious thought. The consequences both for humans generally and AGI 
are enormous.



Terren: Mike,



This is going too far. We can reconstruct to a considerable
extent how  humans think about problems - their conscious thoughts.


Why is it going too far?  I agree with you that we can reconstruct 
thinking, to a point. I notice you didn't say we can completely 
reconstruct how humans think about problems. Why not?


We have two primary means for understanding thought, and both are deeply 
flawed:


1. Introspection. Introspection allows us to analyze our mental life in a 
reflective way. This is possible because we are able to construct mental 
models of our mental models. There are three flaws with introspection. The 
first, least serious flaw is that we only have access to that which is 
present in our conscious awareness. We cannot introspect about unconscious 
processes, by definition.


This is a less serious objection because it's possible in practice to 
become conscious of phenomena that were previously unconscious, by 
developing our meta-mental-models. The question here becomes, is there any 
reason in principle that we cannot become conscious of *all* mental 
processes?


The second flaw is that, because introspection relies on the meta-models 
we need to make sense of our internal, mental life, the possibility is 
always present that our meta-models themselves are flawed. Worse, we have 
no way of knowing if they are wrong, because we often unconsciously, 
unwittingly deny evidence contrary to our conception of our own cognition, 
particularly when it runs counter to a positive account of our self-image.


Harvard's Project Implicit experiment 
(https://implicit.harvard.edu/implicit/) is a great way to demonstrate how 
we remain ignorant of deep, unconscious biases. Another example is how 
little we understand the contribution of emotion to our decision-making. 
Joseph Ledoux and others have shown fairly convincingly that emotion is a 
crucial part of human cognition, but most of us (particularly us men) deny 
the influence of emotion on our decision making.


The final flaw is the most serious. It says there is a fundamental limit 
to what introspection has access to. This is the "an eye cannot see 
itself" objection. "But I can see my eyes in the mirror," says the devil's 
advocate. Of course, a mirror lets us observe a reflected version of our 
eye, and this is what introspection is. But we cannot see inside our own 
eye, directly - it's a fundamental limitation of any observational 
apparatus. Likewise, we cannot see inside the very act of model-simulation 
that enables introspection. Introspection relies on meta-models, or 
models about models, which are activated/simulated *after the fact*. We 
might observe ourselves in the act of introspection, but that is nothing 
but a meta-meta-model. Each introspectional act by necessity is one step 
(at least) removed from the direct, in-the-present flow of cognition. This 
means that we can never observe the cognitive machinery that enables the 
act of introspection itself.


And if you don't believe that introspection relies on cognitive machinery 
(maybe you're a dualist, but then why are you on an AI list? :-), ask 
yourself why we can't introspect about ourselves before a certain point in 
our young lives. It relies on a sufficiently sophisticated toolset that 
requires a certain amount of development before it is even possible.


2. Theory. Our theories of cognition are another path to understanding, 
and much of theory is directly or indirectly informed by introspection. 
When introspection fails (as in language acquisition), we rely completely 
on theory. The flaw with theory should be obvious. We have no direct way 
of testing theories of cognition, since we don't understand the connection 
between the mental and the physical.

Re: [agi] the uncomputable

2008-07-02 Thread Hector Zenil
The standard model of quantum computation as defined by Feynman and
Deutsch is Turing computable (it is based on the concept of qubits). As
proven by Deutsch, quantum computers compute the same set of functions as
Turing machines, but faster (if they are feasible).

Non-standard models of quantum computation are not widely accepted,
and even if they could hypercompute, many doubt that we could harness
continuum entanglement to perform computations. Non-standard
quantum computers have not yet been well defined (and that is one of
the many issues of hypercomputation: each time one comes up with a
standard model of hypercomputation, there is always another,
non-equivalent model of hypercomputation that computes a different set of
functions; i.e. there is no convergence among models, unlike what happened
when digital computation was characterized).

Hypercomputational models basically claim to take advantage of
either infinite time or infinite space (including models assuming
infinite resources, Zeno machines, the Omega-rule, real computation,
etc.), that is, of the continuum. Depending on the density of that space/time
continuum, one can think of several models taking advantage of it at several
levels of the arithmetical hierarchy. But even if there is infinite
space or time, another issue is how to verify a hypercomputation. One
would need another hypercomputer to verify the first, and then simply
trust one of them.

Whatever you think of hypercomputation, the following paper is a must-read
for those interested in the topic. Martin Davis articulates several
criticisms:
"The Myth of Hypercomputation", in: C. Teuscher (Ed.), Alan Turing: Life
and Legacy of a Great Thinker (2004)

Serious work on analog computation can be found in papers by
Felix Costa et al.:
http://fgc.math.ist.utl.pt/jfc.htm

My master's thesis was on the subject so if you are interested in
getting an electronic copy just let me know. It is in French though.




On Wed, Jul 2, 2008 at 11:15 AM, Abram Demski [EMAIL PROTECTED] wrote:
 So yes, I think there are perfectly fine, rather simple
 definitions for computing machines that can (it seems
 like) perform calculations that turing machines cannot.
 It should really be noted that quantum computers fall
 into this class.

 This is very interesting. Previously, I had heard (but not from a
 definitive source) that quantum computers could compute in principle
 only what a Turing machine could compute, but could do it much more
 efficiently (something like the square root of the effort a Turing
 machine would need, at least for some tasks). Can you cite any source
 on this?

 But I should emphasize that what I am really interested in is
 computable approximation of uncomputable things. My stance is that an
 AGI should be able to reason about uncomputable concepts in a coherent
 manner (like we can), not that it needs to be able to actually compute
 them (which we can't).

 On Tue, Jul 1, 2008 at 2:35 AM, Linas Vepstas [EMAIL PROTECTED] wrote:
 2008/6/16 Abram Demski [EMAIL PROTECTED]:
 I previously posted here claiming that the human mind (and therefore
 an ideal AGI) entertains uncomputable models, counter to the
 AIXI/Solomonoff model. There was little enthusiasm about this idea. :)

 I missed your earlier posts. However, I believe that there
 are models of computation that can compute things that turing
 machines cannot, and that this is not arcane, just not widely
 known or studied.  Here is a quick sketch:

 Topological finite automata, or geometric finite automata,
 (of which the quantum finite automata is a special case)
 generalize the notion of non-deterministic finite automata
 by replacing its powerset of states with a general topological
 or geometric space (complex projective space in the quantum
 case). It is important to note that these general spaces are
 in general uncountable (have the cardinality of the continuum).

 It is well known that the languages accepted by quantum
 finite automata are not regular languages, they are bigger
 and more complex in some ways. I am not sure what is
 known about the languages accepted by quantum push-down
 automata, but intuitively these are clearly different (and bigger)
 than the class of context-free languages.

 I believe the concepts of topological finite automata extend
 just fine to a generalization of turing machines, but I also
 believe this is a poorly-explored area of mathematics.
 I believe such machines can compute things that turing
 machines can't ..  this should not be a surprise, since,
 after all, these systems have, in general, an uncountably
 infinite number of internal states (cardinality of the
 continuum!), and (as a side effect of the definition),
 perform infinite-precision addition and multiplication
 in finite time.

 So yes, I think there are perfectly fine, rather simple
 definitions for computing machines that can (it seems
 like) perform calculations that turing machines cannot.
 It should really be noted that quantum computers fall
 into this class.

 

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Terren Suydam [EMAIL PROTECTED]:

 Mike,

 This is going too far. We can reconstruct to a considerable
 extent how  humans think about problems - their conscious thoughts.

 Why is it going too far?  I agree with you that we can reconstruct thinking, 
 to a point. I notice you didn't say we can completely reconstruct how humans 
 think about problems. Why not?

 We have two primary means for understanding thought, and both are deeply 
 flawed:

 1. Introspection. Introspection allows us to analyze our mental life in a 
 reflective way. This is possible because we are able to construct mental 
 models of our mental models. There are three flaws with introspection. The 
 first, least serious flaw is that we only have access to that which is 
 present in our conscious awareness. We cannot introspect about unconscious 
 processes, by definition.

 This is a less serious objection because it's possible in practice to become 
 conscious of phenomena that were previously unconscious, by developing our 
 meta-mental-models. The question here becomes, is there any reason in 
 principle that we cannot become conscious of *all* mental processes?

 The second flaw is that, because introspection relies on the meta-models we 
 need to make sense of our internal, mental life, the possibility is always 
 present that our meta-models themselves are flawed. Worse, we have no way of 
 knowing if they are wrong, because we often unconsciously, unwittingly deny 
 evidence contrary to our conception of our own cognition, particularly when 
 it runs counter to a positive account of our self-image.

 Harvard's Project Implicit experiment 
 (https://implicit.harvard.edu/implicit/) is a great way to demonstrate how we 
 remain ignorant of deep, unconscious biases. Another example is how little we 
 understand the contribution of emotion to our decision-making. Joseph Ledoux 
 and others have shown fairly convincingly that emotion is a crucial part of 
 human cognition, but most of us (particularly us men) deny the influence of 
 emotion on our decision making.

 The final flaw is the most serious. It says there is a fundamental limit to 
 what introspection has access to. This is the "an eye cannot see itself" 
 objection. "But I can see my eyes in the mirror," says the devil's advocate. Of 
 course, a mirror lets us observe a reflected version of our eye, and this is 
 what introspection is. But we cannot see inside our own eye, directly - it's 
 a fundamental limitation of any observational apparatus. Likewise, we cannot 
 see inside the very act of model-simulation that enables introspection. 
 Introspection relies on meta-models, or models about models, which are 
 activated/simulated *after the fact*. We might observe ourselves in the act 
 of introspection, but that is nothing but a meta-meta-model. Each 
 introspectional act by necessity is one step (at least) removed from the 
 direct, in-the-present flow of cognition. This means that we can never 
 observe the cognitive machinery that enables the act of introspection itself.

 And if you don't believe that introspection relies on cognitive machinery 
 (maybe you're a dualist, but then why are you on an AI list? :-), ask 
 yourself why we can't introspect about ourselves before a certain point in 
 our young lives. It relies on a sufficiently sophisticated toolset that 
 requires a certain amount of development before it is even possible.

 2. Theory. Our theories of cognition are another path to understanding, and 
 much of theory is directly or indirectly informed by introspection. When 
 introspection fails (as in language acquisition), we rely completely on 
 theory. The flaw with theory should be obvious. We have no direct way of 
 testing theories of cognition, since we don't understand the connection 
 between the mental and the physical. At best, we can use clever indirect 
 means for generating evidence, and we usually have to accept the limits of 
 reliability of subjective reports.


My plan is to go for 3) Usefulness. Cognition is useful from an
evolutionary point of view; if we try to create systems that are
useful in the same situations (social, building world models), then we
might one day stumble upon cognition.

To expand on usefulness in social contexts, you have to ask yourself
what the point of language is, and why it is useful in an evolutionary
setting. One thing the point of language is not is fooling humans
into thinking that you are human, which makes me annoyed at all the
chatbots that get coverage as AI.

I'll write more on this later.

This, by the way, is why I don't self-organise purpose. I am pretty sure
a specified purpose (not the same thing as a goal, at all) is needed
for an intelligence.

  Will



Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Terren Suydam

Mike,

That's a rather weak reply. I'm open to the possibility that my ideas are 
incorrect or need improvement, but calling what I said "nonsense" without 
further justification is just hand-waving.

Unless you mean this as your justification:
"Your conscious, inner thoughts are not that different from your public, 
recordable dialogue."

How this amounts to an objection to my points about introspection is beyond 
me... care to elaborate?

Terren

--- On Wed, 7/2/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Terren,
 
 Obviously, as I indicated, I'm not suggesting that we
 can easily construct a 
 total model of human cognition. But it ain't that hard
 to reconstruct 
 reasonable and highly informative, if imperfect,  models of
 how humans 
 consciously think about problems. As I said, artists have
 been doing a 
 reasonable job for centuries. Shakespeare, who really
 started the inner 
 monologue, was arguably the first scientist of
 consciousness. The kind of 
 standard argument you give below - the eye can't look
 at itself - is 
 actually nonsense. Your conscious, inner thoughts are not
 that different 
 from your public, recordable dialogue. (Any decent
 transcript of thought, 
 BTW, will give a v. good indication of the emotions
 involved).
 
 We're not v. far apart here - we agree about the many
 dimensions of 
 cognition, most of which are probably NOT directly
 accessible to the 
 conscious mind. I'm just insisting on the massive
 importance of studying 
 conscious thought. It was, as Crick said,
 ridiculous for science not to 
 study consciousness - (it had a lot of rubbish arguments
 for not doing that, 
 then) - it is equally ridiculous and in fact scientifically
 obscene not to 
 study conscious thought. The consequences both for humans
 generally and AGI 
 are enormous.
 
 
 Terren: Mike,
 
  This is going too far. We can reconstruct to a
 considerable
  extent how  humans think about problems - their
 conscious thoughts.
 
  Why is it going too far?  I agree with you that we can
 reconstruct 
  thinking, to a point. I notice you didn't say
 we can completely 
  reconstruct how humans think about problems. Why
 not?
 
  We have two primary means for understanding thought,
 and both are deeply 
  flawed:
 
  1. Introspection. Introspection allows us to analyze
 our mental life in a 
  reflective way. This is possible because we are able
 to construct mental 
  models of our mental models. There are three flaws
 with introspection. The 
  first, least serious flaw is that we only have access
 to that which is 
  present in our conscious awareness. We cannot
 introspect about unconscious 
  processes, by definition.
 
  This is a less serious objection because it's
 possible in practice to 
  become conscious of phenomena that were previously
 unconscious, by 
  developing our meta-mental-models. The question here
 becomes, is there any 
  reason in principle that we cannot become conscious of
 *all* mental 
  processes?
 
  The second flaw is that, because introspection relies
 on the meta-models 
  we need to make sense of our internal, mental life,
 the possibility is 
  always present that our meta-models themselves are
 flawed. Worse, we have 
  no way of knowing if they are wrong, because we often
 unconsciously, 
  unwittingly deny evidence contrary to our conception
 of our own cognition, 
  particularly when it runs counter to a positive
 account of our self-image.
 
  Harvard's Project Implicit experiment 
  (https://implicit.harvard.edu/implicit/) is a great
 way to demonstrate how 
  we remain ignorant of deep, unconscious biases.
 Another example is how 
  little we understand the contribution of emotion to
 our decision-making. 
  Joseph Ledoux and others have shown fairly
 convincingly that emotion is a 
  crucial part of human cognition, but most of us
 (particularly us men) deny 
  the influence of emotion on our decision making.
 
  The final flaw is the most serious. It says there is a
 fundamental limit 
  to what introspection has access to. This is the
 an eye cannot see 
  itself objection. But I can see my eyes in the
 mirror, says the devil's 
  advocate. Of course, a mirror lets us observe a
 reflected version of our 
  eye, and this is what introspection is. But we cannot
 see inside our own 
  eye, directly - it's a fundamental limitation of
 any observational 
  apparatus. Likewise, we cannot see inside the very act
 of model-simulation 
  that enables introspection. Introspection relies on
 meta-models, or 
  models about models, which are
 activated/simulated *after the fact*. We 
  might observe ourselves in the act of introspection,
 but that is nothing 
  but a meta-meta-model. Each introspectional act by
 necessity is one step 
  (at least) removed from the direct, in-the-present
 flow of cognition. This 
  means that we can never observe the cognitive
 machinery that enables the 
  act of introspection itself.
 
  And if you don't believe that introspection relies on cognitive machinery 
  (maybe you're a dualist, but then why are you on an AI list? :-), ask 
  yourself why we can't introspect about ourselves before a certain point in 
  our young lives. It relies on a sufficiently sophisticated toolset that 
  requires a certain amount of development before it is even possible.

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Terren Suydam

Will,

 My plan is go for 3) Usefulness. Cognition is useful from
 an
 evolutionary point of view, if we try to create systems
 that are
 useful in the same situations (social, building world
 models), then we
 might one day stumble upon cognition.

Sure, that's a valid approach for creating something we might call intelligent. 
My diatribe there was about human thought (the only kind we know of), not 
cognition in general.
 
 This by the way is why I don't self-organise purpose. I
 am pretty sure
 a specified purpose (not the same thing as a goal, at all)
 is needed
 for an intelligence.
 
   Will

OK, then who or what specified the purpose of the first life forms? It's that 
intuition of yours that leads directly to Intelligent Design. As an aside, I 
love the irony that AI researchers who try to design intelligence are 
unwittingly giving ammunition to Intelligent Design arguments. 

Terren


  




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
 On Wed, Jul 2, 2008 at 2:48 PM, William Pearson [EMAIL PROTECTED] wrote:

 Okay, let us clear things up. There are two things that need to be
 designed: a computer architecture or virtual machine, and the programs
 that form the initial set of programs within the system. Let us call the
 internal programs vmprograms to avoid confusion. The vmprograms should
 do all the heavy lifting (reasoning, creating new programs); this is
 where the lawful and consistent pressure would come from.

 It is the source code of the vmprograms that needs to be changeable.

 However, the pressure will have to be somewhat experimental to be
 powerful: you don't know what bugs a new program will have (if you are
 doing a non-tight proof search through the space of programs). So the
 point of the VM is to provide a safety net. If an experiment goes
 awry, the VM should allow each program to limit the bugged vmprogram's
 ability to affect it, and eventually have it removed and its resources
 reclaimed.

 Here is a toy scenario where the system needs this ability. *Note it
 is not anything like a full AI, but it illustrates a facet of
 something a full AI needs, IMO.*

 Consider a system trying to solve a task, e.g. navigate a maze, while
 a number of different people out there give helpful hints on how to
 solve it. These hints come in the form of patches to the vmprograms,
 e.g. changing the representation to 6-dimensional, or supplying
 another patch language that allows better patches. So the system
 would make a copy of the part to be patched and then patch the copy.
 Now you could add a patch-evaluation module to see which patch works
 best, but what happens when the module that implements that
 evaluation itself wants to be patched? My solution to the problem is
 to let the patched and non-patched versions compete in the ad hoc
 economic arena, and see which one wins.


 What are the criteria that the VM applies to vmprograms? If the VM just
 short-circuits the economic pressure of agents on one another, it in
 itself doesn't specify the direction of the search. The human economy
 works to efficiently satisfy the goals of human beings, who already
 have their moral complexity. It propagates the decisions that
 customers make, and fuels the allocation of resources based on these
 decisions. The efficiency of the economy lies in how efficiently it
 responds to information about human goals. If your VM just feeds the
 decisions back on themselves, what stops the economy from focusing on
 efficiently doing nothing?

They would get less credit from the human supervisor. Let me expand on
what I meant about the economic competition. Let us say vmprogram A
makes a copy of itself, called A', with some purposeful tweaks, trying
to make itself more efficient.

A' has some bugs, such that the human notices something wrong with the
system: she gives less credit on average each time A' is helping out
rather than A.

Now A and A' both have to bid for the chance to help program B, which
is closer to the output stage (due to the programming of B), and B pays
back a proportion of the credit it receives. The credit B gets will be
lower when A' is helping than when A is helping, so A' will in general
get less than A. There are a few scenarios, ordered from quickest
acting to slowest.

1) B keeps records of who helps it and sees that A' is not helping
as well as the average, so it no longer lets A' bid. The resources of A'
get used by others when it can't keep up the bidding for them.
2) A' continues bidding a lot, to outbid A. However, the average amount
A' pays in bids is more than it gets back from B. A' bankrupts itself and
other programs use its resources.
3) A' doesn't manage to outbid A after a fair few trials, so it meets the
same fate as in scenario 1).

If you start with a bunch of stupid vmprograms, you won't get
anywhere; the system can just dwindle to nothing. You do have to design
them fairly well, just in such a way that the design can change later.
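
(Under the same illustrative assumptions as before, scalar credit, a noisy
supervisor reward, and B paying its helper a fixed share, the Python sketch
below plays out scenario 2: A' overbids, earns back less than it pays, and
bankrupts itself. None of the numbers are meant as a real design.)

import random

credit  = {"A": 100.0, "A'": 100.0, "B": 0.0}
quality = {"A": 0.8, "A'": 0.5}    # A' is the buggy patched copy of A
bids    = {"A": 3.0, "A'": 4.0}    # A' overbids to win the right to help B
PAYBACK = 0.5                      # share of B's reward passed back to its helper

for _ in range(500):
    # B accepts the highest bid it can actually collect.
    live = [p for p in ("A", "A'") if credit[p] >= bids[p]]
    if not live:
        break
    helper = max(live, key=lambda p: bids[p])
    credit[helper] -= bids[helper]
    credit["B"]    += bids[helper]

    # The human supervisor's credit to B depends on how well the helper did.
    reward = 10.0 * max(0.0, random.gauss(quality[helper], 0.1))
    credit["B"]    += reward * (1 - PAYBACK)
    credit[helper] += reward * PAYBACK

print(credit)   # A' ends up nearly broke; A and B accumulate credit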

  Will




Re: [agi] the uncomputable

2008-07-02 Thread Abram Demski
Hector Zenil said:
and that is one of the many issues of hypercomputation: each time one
comes up with a standard model of hypercomputation, there is always
another, non-equivalent model of hypercomputation that computes a
different set of functions; i.e. there is no convergence among models,
unlike what happened when digital computation was characterized

This is not entirely true. Turing's oracle machines turn out to
correspond to infinite-time machines, and both correspond to the
arithmetical hierarchy.

On Wed, Jul 2, 2008 at 12:38 PM, Hector Zenil [EMAIL PROTECTED] wrote:
 The standard model of quantum computation as defined by Feynman and
 Deutsch is Turing computable (it is based on the concept of qubits). As
 proven by Deutsch, quantum computers compute the same set of functions as
 Turing machines, but faster (if they are feasible).

 Non-standard models of quantum computation are not widely accepted,
 and even if they could hypercompute, many doubt that we could harness
 continuum entanglement to perform computations. Non-standard
 quantum computers have not yet been well defined (and that is one of
 the many issues of hypercomputation: each time one comes up with a
 standard model of hypercomputation, there is always another,
 non-equivalent model of hypercomputation that computes a different set of
 functions; i.e. there is no convergence among models, unlike what happened
 when digital computation was characterized).

 Hypercomputational models basically claim to take advantage of
 either infinite time or infinite space (including models assuming
 infinite resources, Zeno machines, the Omega-rule, real computation,
 etc.), that is, of the continuum. Depending on the density of that space/time
 continuum, one can think of several models taking advantage of it at several
 levels of the arithmetical hierarchy. But even if there is infinite
 space or time, another issue is how to verify a hypercomputation. One
 would need another hypercomputer to verify the first, and then simply
 trust one of them.

 Whatever you think of hypercomputation, the following paper is a must-read
 for those interested in the topic. Martin Davis articulates several
 criticisms:
 "The Myth of Hypercomputation", in: C. Teuscher (Ed.), Alan Turing: Life
 and Legacy of a Great Thinker (2004)

 Serious work on analog computation can be found in papers by
 Felix Costa et al.:
 http://fgc.math.ist.utl.pt/jfc.htm

 My master's thesis was on the subject so if you are interested in
 getting an electronic copy just let me know. It is in French though.




 On Wed, Jul 2, 2008 at 11:15 AM, Abram Demski [EMAIL PROTECTED] wrote:
 So yes, I think there are perfectly fine, rather simple
 definitions for computing machines that can (it seems
 like) perform calculations that turing machines cannot.
 It should really be noted that quantum computers fall
 into this class.

 This is very interesting. Previously, I had heard (but not from a
 definitive source) that quantum computers could compute in principle
 only what a Turing machine could compute, but could do it much more
 efficiently (something like the square root of the effort a Turing
 machine would need, at least for some tasks). Can you cite any source
 on this?

 But I should emphasize that what I am really interested in is
 computable approximation of uncomputable things. My stance is that an
 AGI should be able to reason about uncomputable concepts in a coherent
 manner (like we can), not that it needs to be able to actually compute
 them (which we can't).

 On Tue, Jul 1, 2008 at 2:35 AM, Linas Vepstas [EMAIL PROTECTED] wrote:
 2008/6/16 Abram Demski [EMAIL PROTECTED]:
 I previously posted here claiming that the human mind (and therefore
 an ideal AGI) entertains uncomputable models, counter to the
 AIXI/Solomonoff model. There was little enthusiasm about this idea. :)

 I missed your earlier posts. However, I believe that there
 are models of computation that can compute things that Turing
 machines cannot, and that this is not arcane, just not widely
 known or studied.  Here is a quick sketch:

 Topological finite automata, or geometric finite automata
 (of which quantum finite automata are a special case),
 generalize the notion of non-deterministic finite automata
 by replacing the powerset of states with a general topological
 or geometric space (complex projective space in the quantum
 case). It is important to note that these spaces are in
 general uncountable (they have the cardinality of the continuum).
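
To make the "geometric space of states" concrete, here is a minimal
sketch of a two-state, one-letter measure-once quantum finite automaton
(the rotation angle and the use of numpy are illustrative assumptions,
not anything taken from the literature):

import numpy as np

theta = np.pi / 8                  # rotation applied per input symbol (arbitrary choice)
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # unitary for the single letter 'a'
start = np.array([1.0, 0.0])       # initial state |q0>
P_accept = np.diag([1.0, 0.0])     # projector onto the accepting state |q0>

def accept_probability(n):
    """Probability that the automaton accepts the word a^n (measured once at the end)."""
    state = start
    for _ in range(n):
        state = U @ state          # the state is a point in a continuous space,
    return float(state @ P_accept @ state)   # not an element of a finite powerset

for n in range(9):
    print(n, round(accept_probability(n), 3))   # cos^2(n*pi/8): 1.0, 0.854, 0.5, 0.146, 0.0, ...

The acceptance probabilities trace out cos^2(n*theta), which is the
sense in which the "state set" here is the unit circle rather than a
finite collection of subsets.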

 It is well known that the languages accepted by quantum
 finite automata are not regular languages; they are bigger
 and more complex in some ways. I am not sure what is
 known about the languages accepted by quantum push-down
 automata, but intuitively these are clearly different from (and
 bigger than) the class of context-free languages.

 I believe the concepts of topological finite automata extend
 just fine to a generalization of Turing machines, but I also
 believe this is a poorly-explored area of 

Re: [agi] the uncomputable

2008-07-02 Thread Hector Zenil
On Wed, Jul 2, 2008 at 1:30 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Hector Zenil said:
 and that is one of the many issues of hypercomputation: each time one
 comes up with a standard model of hypercomputation there is always
 another not equivalent model of hypercomputation that computes a
 different set of functions, i.e. there is no convergence in models
 unlike what happened when digital computation was characterized

 This is not entirely true. Turing's oracle machines turn out to
 correspond to infinite-time machines, and both correspond to the
 arithmetical hierarchy.

At each level of the arithmetical hierarchy there is a universal
oracle machine (a hypercomputer), so there is no standard model of
hypercomputation unless you make strong assumptions, unlike in digital
computation. There are even hyperarithmetical machines and, as shown by
the solution to Post's problem, intermediate non-comparable degrees at
each level of the arithmetical and hyperarithmetical hierarchies (that
is why the Turing universe does not form a total order).
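
For reference, the standard facts behind this picture (Post's theorem
relating the hierarchy to iterated Turing jumps, and the
Friedberg-Muchnik solution to Post's problem) can be stated roughly as

\[
\emptyset \;<_T\; \emptyset' \;<_T\; \emptyset'' \;<_T\; \cdots, \qquad
A \in \Sigma^0_{n+1} \iff A \text{ is computably enumerable in } \emptyset^{(n)},
\]

and there are computably enumerable degrees \(\mathbf{a}, \mathbf{b}\) with
\(\mathbf{0} < \mathbf{a}, \mathbf{b} < \mathbf{0}'\) such that neither
\(\mathbf{a} \le_T \mathbf{b}\) nor \(\mathbf{b} \le_T \mathbf{a}\), which is
one way the Turing degrees fail to be totally ordered.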



Re: [agi] the uncomputable

2008-07-02 Thread Abram Demski
Yes, I was not claiming that there was just one type of hypercomputer,
merely that some initially very different-looking types do turn out to
be equivalent.

You seem quite knowledgeable about the subject. Can you recommend any
books or papers?

On Wed, Jul 2, 2008 at 1:42 PM, Hector Zenil [EMAIL PROTECTED] wrote:
 On Wed, Jul 2, 2008 at 1:30 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Hector Zenil said:
 and that is one of the many issues of hypercomputation: each time one
 comes up with a standard model of hypercomputation there is always
 another not equivalent model of hypercomputation that computes a
 different set of functions, i.e. there is no convergence in models
 unlike what happened when digital computation was characterized

 This is not entirely true. Turing's oracle machines turn out to
 correspond to infinite-time machines, and both correspond to the
 arithmetical hierarchy.

 At each level of the arithmetical hierarchy there is a universal
 oracle machine (a hypercomputer), so there is no standard model of
 hypercomputation unless you make strong assumptions, unlike in digital
 computation. There are even hyperarithmetical machines and, as shown by
 the solution to Post's problem, intermediate non-comparable degrees at
 each level of the arithmetical and hyperarithmetical hierarchies (that
 is why the Turing universe does not form a total order).



Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Abram Demski
How do you assign credit to programs that are good at generating good
children? In particular, could a program specialize in this, so that it
does nothing useful directly, but contributes only through making highly
useful children?

On Wed, Jul 2, 2008 at 1:09 PM, William Pearson [EMAIL PROTECTED] wrote:
 2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
 On Wed, Jul 2, 2008 at 2:48 PM, William Pearson [EMAIL PROTECTED] wrote:

 Okay let us clear things up. There are two things that need to be
 designed, a computer architecture or virtual machine and programs that
 form the initial set of programs within the system. Let us call the
 internal programs vmprograms to avoid confusion. The vmprograms should
 do all the heavy lifting (reasoning, creating new programs); this is
 where the lawful and consistent pressure would come from.

 It is the source code of the vmprograms that needs to be changeable.

 However the pressure will have to be somewhat experimental to be
 powerful, you don't know what bugs a new program will have (if you are
 doing a non-tight proof search through the space of programs). So the
 point of the VM is to provide a safety net. If an experiment goes
 awry, then the VM should allow each program to limit the bugged
 vmprogram's ability to affect it and eventually have it removed and the
 resources reassigned.

 Here is a toy scenario where the system needs this ability. *Note: it
 is not anything like a full AI, but it illustrates a facet of
 something a full AI needs, IMO.*

 Consider a system trying to solve a task, e.g. navigate a maze, that
 also has a number of different people out there giving helpful hints
 on how to solve the maze. These hints are in the form of patches to
 the vmprograms, e.g. changing the representation to a 6-dimensional
 one, or providing another patch language that allows better patches.
 So the system would make copies of the part of itself to be patched
 and then patch it. Now you could add a patch-evaluation module to see
 which patch works best, but what would happen if the module that
 implemented that evaluation wanted to be patched itself? My solution to
 the problem is to let the patched and non-patched versions compete in
 the ad hoc economic arena, and see which one wins.


 What are the criteria that the VM applies to vmprograms? If the VM just
 short-circuits the economic pressure of agents onto one another, it
 doesn't in itself specify the direction of the search. The human
 economy works to efficiently satisfy the goals of human beings, who
 already have their moral complexity. It propagates the decisions that
 customers make, and fuels the allocation of resources based on these
 decisions. The efficiency of an economy lies in how efficiently it
 responds to information about human goals. If your VM just feeds the
 decisions back on themselves, what stops the economy from focusing on
 efficiently doing nothing?

 They would get less credit from the human supervisor. Let me expand on
 what I meant about the economic competition. Let us say vmprogram A
 makes a copy of itself, called A', with some purposeful tweaks, trying
 to make itself more efficient.

 A' has some bugs, such that the human notices something wrong with the
 system, so she gives less credit on average each time A' is helping out
 rather than A.

 Now A and A' both have to bid for the chance to help program B, which
 is closer to the output (due to the programming of B); B pays back a
 proportion of the credit it gets. Now the credit B gets will be lower
 when A' is helping than when A is helping, so A' will get less back in
 general than A. There are a few scenarios, ordered from quickest acting
 to slowest.

 1) B keeps records of who helps it and sees that A' is not helping
 as well as the average, so it no longer lets A' bid. A''s resources get
 reused when it can't keep up the bidding for them.
 2) A' continues bidding a lot, to outbid A. However, the average amount
 A' gets back from B is less than what it spends on bids. A' bankrupts
 itself and other programs use its resources.
 3) A' doesn't manage to outbid A after a fair few trials, so it gets
 the same fate as in scenario 1).

 If you start with a bunch of stupid vmprograms, you won't get anywhere;
 the system can simply decay to nothing. You do have to design them
 fairly well, just in such a way that that design can change later.
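
A minimal sketch of the kind of bidding economy described above, purely
for concreteness (the class names, the flat random bids, the 50% payback
and the drop-a-laggard rule are illustrative assumptions, not a finished
design):

import random

random.seed(1)

PAYBACK = 0.5    # proportion of the credit B earns that it passes back to its helper
REWARD = 20.0    # credit the human supervisor grants when the output looks right

class VMProgram:
    def __init__(self, name, reliability, credit=100.0):
        self.name = name                 # e.g. "A" or the tweaked copy "A'"
        self.reliability = reliability   # chance its help leads to a good output
        self.credit = credit

    def bid(self):
        # naive bidding policy: a small random bid, capped by what it can afford
        return random.uniform(0.0, min(5.0, max(self.credit, 0.0)))

class ProgramB:
    """B auctions the right to help it, keeps records, and drops poor helpers."""
    def __init__(self):
        self.history = {}    # helper name -> B's income in rounds that helper won

    def accepts(self, helper):
        seen = self.history.get(helper.name, [])
        if len(seen) < 30:
            return True      # give every helper a fair few trials first
        best = max(sum(v) / len(v) for v in self.history.values())
        return sum(seen) / len(seen) > 0.75 * best   # scenario 1: stop taking its bids

    def run_round(self, helpers):
        active = [h for h in helpers if self.accepts(h)]
        if not active:
            return
        bids = {h.name: h.bid() for h in active}
        winner = max(active, key=lambda h: bids[h.name])
        winner.credit -= bids[winner.name]            # the winning bid is paid to B
        earned = REWARD if random.random() < winner.reliability else 0.0
        winner.credit += PAYBACK * earned             # B returns a share of what it earned
        self.history.setdefault(winner.name, []).append(bids[winner.name] + earned)

A  = VMProgram("A",  reliability=0.9)
A2 = VMProgram("A'", reliability=0.4)    # the buggier tweak
B = ProgramB()
for _ in range(500):
    B.run_round([A, A2])

for p in (A, A2):
    print(p.name, "credit:", round(p.credit, 1),
          "rounds it won:", len(B.history.get(p.name, [])))

With numbers like these A' survives its trial period but is then refused
further bids (scenario 1); a variant that overbids to keep winning would
instead tend to drain its own credit (scenario 2).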

  Will




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Abram Demski [EMAIL PROTECTED]:
 How do you assign credit to programs that are good at generating good
 children?

I never directly assign credit, apart from at the first stage. The rest
of the credit assignment is handled by the vmprograms' own programming.


 In particular, could a program specialize in this, so that it
 does nothing useful directly, but contributes only through making highly
 useful children?

As the parent controls the code of its offspring, it could embed code
in its offspring to pass a small portion of the credit they get back
to it. It would have to be careful how much to skim off, so that the
offspring could still thrive.
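
A tiny sketch of that arrangement, continuing the illustrative economy
above (the 10% skim rate is an arbitrary assumption):

class ParentProgram:
    def __init__(self):
        self.credit = 0.0

class ChildProgram:
    def __init__(self, name, parent, skim=0.10):
        self.name = name
        self.parent = parent   # the vmprogram that wrote this one
        self.skim = skim       # fraction of incoming credit the parent claims
        self.credit = 0.0

    def receive_credit(self, amount):
        cut = self.skim * amount      # code the parent planted in its offspring:
        self.parent.credit += cut     # forward a small cut before keeping the rest
        self.credit += amount - cut

parent = ParentProgram()
child = ChildProgram("A''", parent)
child.receive_credit(50.0)
print(parent.credit, child.credit)    # 5.0 45.0: the parent profits only if its children earn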

  Will




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread William Pearson
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
 On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote:
 They would get less credit from the human supervisor. Let me expand on
 what I meant about the economic competition. Let us say vmprogram A
 makes a copy of itself, called A', with some purposeful tweaks, trying
 to make itself more efficient.

 So, this process performs optimization: A has a goal that it tries to
 express in the form of A'. What is the problem with the algorithm that
 A uses? If this algorithm is stupid (in a technical sense), A' is worse
 than A and we can detect that. But this means that, in fact, A' doesn't
 do its job and all the search pressure comes from program B, which
 ranks the performance of A or A'. This
 generate-blindly-or-even-stupidly-and-check is a very inefficient
 algorithm. If, on the other hand, A happens to be a good program, then
 A' has a good chance of being better than A; and anyway, if A has some
 understanding of what 'better' means, then what is the role of B? B
 adds almost no additional pressure; almost everything is done by A.

 How do you distribute the optimization pressure between generating
 programs (A) and checking programs (B)? Why do you need to do that at
 all? What is the benefit of generating and checking separately,
 compared to reliably generating from the same point (A alone)? If
 generation is not reliable enough, it probably won't be useful as
 optimization pressure anyway.


The point of A and A' is that A', if better, may one day completely
replace A. And what counts as very good? Is a 1-in-100 chance of making
a mistake when generating its successor very good? If you want A' to be
able to replace A, that is only about 100 generations before you have
made a bad mistake, and then where do you go? You have a bugged program
and nothing to act as a watchdog.
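
Putting a rough number on that (treating each generation as an
independent 1-in-100 gamble, which is of course a simplification):

p_mistake = 0.01
generations = 100
p_at_least_one_bad = 1 - (1 - p_mistake) ** generations
print(round(p_at_least_one_bad, 2))   # about 0.63: better-than-even odds of one bad successor
print(1 / p_mistake)                  # 100.0: the expected wait for the first bad mistake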

Also, if A' is better than A at time t, there is no guarantee that it
will stay that way. Changes in the environment might favour one
optimisation over another. If they both do things well, but different
things, then A and A' might both survive in different niches.

I would also be interested in why you think we have programmers and
system testers in the real world.

Also worth noting: most optimisation will be done inside the
vmprograms; this process is only for very fundamental code changes,
e.g. changing representations, biases, or ways of creating offspring,
things that cannot easily be tested any other way. I'm quite happy for
it to be slow, because this process is not where the majority of the
system's quickness will rest. But this process is needed for
intelligence, else you will be stuck with certain ways of doing things
when they are not useful.

  Will Pearson




Re: [agi] the uncomputable

2008-07-02 Thread Hector Zenil
On Wed, Jul 2, 2008 at 3:39 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Yes, I was not claiming that there was just one type of hypercomputer,
 merely that some initially very different-looking types do turn out to
 be equivalent.

 You seem quite knowledgeable about the subject. Can you recommend any
 books or papers?

Sure. Are you interested in hypercomputation itself, or in arguments
against hypercomputation? For the latter, I already gave the reference
to Martin Davis' paper 'The Myth of Hypercomputation'.

For serious work on hypercomputation I would recommend the people
working on real and analogue computation, such as the groups of Cris
Moore and Felix Costa. E.g.:

Cris Moore, Recursion Theory on the Reals and Continuous-time
Computation. Theoretical Computer Science 162 (1996) 23-44.

Bruno Loff, Jerzy Mycka and Felix Costa, The new promise of analog
computation, invited paper, in S. Barry Cooper, Benedikt Löwe, and
Andrea Sorbi (eds.), Third Conference on Computability in Europe,
CiE2007, Siena, Italy, June 18--23, 2007, Computation and Logic in the
Real World, Lecture Notes in Computer Science 4497: 189--195,
Springer, 2007.

Bruno Loff and Felix Costa, Five views of hypercomputation,
International Journal of Unconventional Computing, Special Issue on
Hypercomputation, to appear.

Bruno Loff, Jerzy Mycka, and Felix Costa, Computability on reals,
infinite limits and differential equations, Applied Mathematics and
Computation, 191(2):353–371, Elsevier, 2007.




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-02 Thread Vladimir Nesov
On Thu, Jul 3, 2008 at 12:59 AM, William Pearson [EMAIL PROTECTED] wrote:
 The point of A and A' is that A', if better, may one day completely
 replace A. And what counts as very good? Is a 1-in-100 chance of making
 a mistake when generating its successor very good? If you want A' to be
 able to replace A, that is only about 100 generations before you have
 made a bad mistake, and then where do you go? You have a bugged program
 and nothing to act as a watchdog.

 Also, if A' is better than A at time t, there is no guarantee that it
 will stay that way. Changes in the environment might favour one
 optimisation over another. If they both do things well, but different
 things, then A and A' might both survive in different niches.


I suggest you read ( http://sl4.org/wiki/KnowabilityOfFAI ).
If your program is a faulty optimizer that can't pump the reliability
out of its optimization, you are doomed. I assume you argue that you
don't want to include B in A because a descendant of A may start to
fail unexpectedly. But if you reliably copy B inside each of A's
descendants, this particular problem won't appear. The main question
is: what is the difference between just trying to build a
self-improving program A and doing so inside your testing environment?
If there is no difference, your framework adds nothing. If there is,
it would be good to find out what it is.


 I would also be interested in why you think we have programmers and
 system testers in the real world.


Testing that doesn't even depend on the program's internal structure
and only checks its output (as in your economy setup) isn't nearly good
enough. The testing that you're referring to in this post (an activity
performed by humans, based on the specific implementation and an
understanding of a high-level specification that says what the
algorithm should do) has very little to do with the testing that you
propose in the framework (a fixed program B). Anyway, you should answer
that question yourself: what is the essence of the useful activity that
is performed by software testing and that you capture in your
framework? Arguing that there must be some such essence and that it
must transfer to your setting isn't reliable.


 Also worth noting: most optimisation will be done inside the
 vmprograms; this process is only for very fundamental code changes,
 e.g. changing representations, biases, or ways of creating offspring,
 things that cannot easily be tested any other way. I'm quite happy for
 it to be slow, because this process is not where the majority of the
 system's quickness will rest. But this process is needed for
 intelligence, else you will be stuck with certain ways of doing things
 when they are not useful.


Being stuck in development is a problem of the search process; it can
just as well be a problem of process A, to be resolved from within A.


-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




[agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-02 Thread Ed Porter
WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

 

Here is an important practical, conceptual problem I am having trouble with.


 

In an article entitled "Are Cortical Models Really Bound by the 'Binding
Problem'?", Tomaso Poggio's group at MIT takes the position that there is no
need for special mechanisms to deal with the famous binding problem --- at
least in certain contexts, such as 150 msec feed-forward visual object
recognition.  The article implies that a properly designed hierarchy of
patterns that has both compositional and max-pooling layers (I call them
gen/comp hierarchies) automatically handles the problem of which
sub-elements are connected with which others, removing the need for
techniques like synchrony to handle this problem.
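
As a toy illustration of the compose-then-max-pool idea (this is not
Poggio's actual model; the 1-D "scenes" and templates below are made up
purely to show the mechanism):

import numpy as np

def s_layer(signal, template):
    """Compositional ('simple') layer: template match at every position."""
    k = len(template)
    return np.array([signal[i:i + k] @ template for i in range(len(signal) - k + 1)])

def c_layer(responses):
    """Max-pooling ('complex') layer: keep the best match, discard where it occurred."""
    return float(responses.max())

templates = [np.array([1.0, -1.0, 1.0]), np.array([1.0, 1.0, -1.0])]   # made-up local features

scene_a = np.zeros(16); scene_a[2:5]  = [1, -1, 1]    # a feature near the left edge
scene_b = np.zeros(16); scene_b[9:12] = [1, -1, 1]    # the same feature, shifted right

code_a = [c_layer(s_layer(scene_a, t)) for t in templates]
code_b = [c_layer(s_layer(scene_b, t)) for t in templates]
print(code_a)   # [3.0, 2.0]
print(code_b)   # [3.0, 2.0]: the pooled code keeps feature identity but drops position

Stacking such layers, with the next compositional layer looking at local
arrangements of the pooled features, is the gen/comp structure the
question below is about.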

 

Poggio's group has achieved impressive results without the need for special
mechanisms to deal with binding in this type of visual recognition, as is
indicated by the two papers below by Serre (the latter of which summarizes
much of what is in the first, an excellent, detailed PhD thesis).

 

The two works by Geoffrey Hinton cited below are descriptions of Hinton's
hierarchical feed-forward neural net recognition system (which, when run
backwards, generates patterns similar to those it has been trained on).
These two works by Hinton show impressive results in handwritten digit
recognition without any explicit mechanism for binding.  In particular,
watch the portion of the Hinton YouTube video from 21:35 to 26:39,
where Hinton shows his system alternating between recognizing a pattern and
then generating a similar pattern stochastically from the higher-level
activations that resulted from the previous recognition.  See how
amazingly well his system seems to capture the many varied forms in which
the various parts and sub-shapes of handwritten digits are
related.

 

So my question is this: HOW BROADLY DOES THE IMPLICATION THAT THE BINDING
PROBLEM CAN BE AUTOMATICALLY HANDLED BY A GEN/COMP HIERARCHY OR A
HINTON-LIKE HIERARCHY APPLY TO THE MANY TYPES OF PROBLEMS A BRAIN LEVEL
ARTIFICIAL GENERAL INTELLIGENCE WOULD BE EXPECTED TO HANDLE?  In particular
HOW APPLICABLE IS IT TO SEMANTIC PATTERN RECOGNITION AND GENERATION --- WITH
ITS COMPLEX AND HIGHLY VARIED RELATIONS --- SUCH AS IS COMMONLY INVOLVED IN
HUMAN LEVEL NATURAL LANGUAGE UNDERSTANDING AND GENERATION?

 

The paper "Are Cortical Models Really Bound by the 'Binding Problem'?"
suggests, in the first full paragraph on its second page, that gen/comp
hierarchies avoid the binding problem by

 

"coding an object through a set of intermediate features made up of local
arrangements of simpler features [that] sufficiently constrain the
representation to uniquely code complex objects without retaining global
positional information."

 

For example, in the context of speech recognition,

 

"...rather than using individual letters to code words, letter pairs or
higher-order combinations of letters can be used --- i.e., although the word
'tomaso' might be confused with the word 'somato' if both were coded by the
sets of letters they are made up of, this ambiguity is resolved if both are
represented through letter pairs."
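
A quick check of the tomaso/somato example (illustrative only):

def letters(word):
    return set(word)

def letter_pairs(word):
    return {word[i:i + 2] for i in range(len(word) - 1)}

print(letters("tomaso") == letters("somato"))             # True: the letter sets collide
print(sorted(letter_pairs("tomaso")))                     # ['as', 'ma', 'om', 'so', 'to']
print(sorted(letter_pairs("somato")))                     # ['at', 'ma', 'om', 'so', 'to']
print(letter_pairs("tomaso") == letter_pairs("somato"))   # False: the pair codes differ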

 

The issue then becomes: WHAT SUBSET OF THE TYPES OF TASKS THE HUMAN
BRAIN HAS TO PERFORM CAN BE PERFORMED IN A MANNER THAT AVOIDS THE BINDING
PROBLEM JUST BY USING A GEN/COMP HIERARCHY WITH SUCH A SET OF SIMPLER
FEATURES [THAT] SUFFICIENTLY CONSTRAIN THE REPRESENTATION TO UNIQUELY CODE
THE TYPE OF PATTERNS SUCH TASKS REQUIRE?

 

There is substantial evidence that the brain does require synchrony for some
of its tasks --- as has been indicated by the work of people like Wolf
Singer --- suggesting that binding may well be a problem that cannot be
handled alone by the specificity of the brain's gen/comp hierarchies for all
mental tasks.

 

The table at the top of page 75 of Serre's impressive PhD thesis suggests
that his system --- which performs very quick feed-forward object recognition
roughly as well as a human --- has an input of 160 x 160 pixels and
requires 23 million pattern models.  Such a large number of patterns helps
provide the "simpler features [that] sufficiently constrain the
representation to uniquely code complex objects without retaining global
positional information."

 

But, it should be noted --- as is recognized in Serre's paper --- that the
very rapid 150 msec feed-forward recognition described in that paper is far
from all of human vision.  Such rapid recognition --- although surprisingly
accurate given how fast it is --- is normally supplemented by more top-down
vision processes to confirm its best guesses.  For example, if a human is
shown a photograph of a face, his eyes will normally saccade over it, with
multiple fixation points, often on key features such as the eyes, nose,
corners of the mouth, and points on the outline of the face, all indicating
that the recognition of the face is normally much more than one rapid feed-forward