Colin,

 

Here are my comments on the following parts of your post below:

 

=======Colin said======>

I merely point out that there are fundamental limits as to how computer
science (CS) can inform/validate basic/physical science - (in an AGI
context, brain science). Take the Baars/Franklin "IDA" project. It predicts
nothing neuroscience can poke a stick at.

 

===ED's reply===>

Different AGI models can have different degrees of correspondence to, and
different explanatory relevance to, what is believed to take place in the
brain.  For example, Thomas Serre's PhD thesis, "Learning a Dictionary of
Shape-Components in Visual Cortex: Comparison with Neurons, Humans and
Machines"
(http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf),
describes a computer simulation that is rather similar to my concept of how
a Novamente-like AGI could perform certain tasks in visual perception, and
yet it is designed to model the human visual system to a considerable
degree.  It shows that a model of how Serre and Poggio think a certain
aspect of the human brain works does, in fact, work surprisingly well when
simulated on a computer.

 

A surprisingly large number of brain science papers are based on computer
simulations.  Many of these are substantially simplified models, but they
do give neuroscientists a way to poke a stick at various theories they
might have for how the brain operates at various levels of organization.
Some of these papers are directly relevant to AGI, and some AGI papers are
directly relevant to answering certain brain science questions.

 

=======Colin said======>

I agree with your:

"At the other end of things, physicists are increasingly viewing physical
reality as a computation, and thus the science of computation (and
communication which is a part of it), such as information theory, have begun
to play an increasingly important role in the most basic of all sciences."


===ED's reply===>

We are largely on the same page here.

 

=======Colin said======>

I disagree with:

"But the brain is not part of an eternal verity.  It is the result of the
engineering of evolution. "

Unless I've missed something ... The natural evolutionary 'engineering' that
has been going on has not been the creation of a MODEL (aboutness) of things
- the 'engineering' has evolved the construction of the actual things. The
two are not the same. The brain is indeed 'part of an eternal verity' - it
is made of natural components operating in a fashion we attempt to model as
'laws of nature'...

 

===ED's reply===>

If you define engineering as a process that involves designing something in
the abstract --- i.e., in your "MODEL (aboutness) of things" --- before
physically building it, you could claim evolution is not engineering.

 

But if you define engineering as the designing of things (by a process of
whatever method, with or without intelligence) to solve a set of problems
or constraints, then evolution does perform engineering, and the brain was
formed by such engineering.

 

How can you claim the human brain is an eternal verity, when it is believed
to have existed in anything close to its current form for only the last 30
to 100 thousand years, and there is no guarantee how much longer it will
continue to exist?  Compared to much of what the natural sciences study,
its existence appears quite fleeting.

 

=======Colin said======>

Anyway, for these reasons, folks who use computer models to study human
brains/consciousness will encounter some difficulty justifying, to the basic
physical sciences, claims made as to the equivalence of the model and
reality. That difficulty is fundamental and cannot be 'believed away'. 

 

===ED's reply===>

If you attend brain science lectures and read the brain science literature,
you will find that computer modeling is playing an ever-increasing role in
brain science --- so this basic difficulty that you describe largely does
not exist.

 

=======Colin said======>

The intelligence originates in the brain. AGI and brain science must be
literally joined at the hip or the AGI enterprise is arguably scientifically
impoverished wishful thinking. 

===ED's reply===>

I don't know what you mean by "joined at the hip," but I think it is being
overly anthropomorphic to think an artificial mind has to slavishly model a
human brain to have great power and worth.  

 

But I do think it would probably have to accomplish some of the same general
functions, such as automatic pattern learning, credit assignment, attention
control, etc.

 

Ed Porter

 

-----Original Message-----
From: Colin Hales [mailto:c.ha...@pgrad.unimelb.edu.au] 
Sent: Monday, December 22, 2008 11:36 PM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a
machine that can learn from experience

 

Ed,
I wasn't trying to justify or promote a 'divide'. The two worlds must be
better off in collaboration, surely? I merely point out that there are
fundamental limits as to how computer science (CS) can inform/validate
basic/physical science - (in an AGI context, brain science). Take the
Baars/Franklin "IDA" project. Baars invents 'Global Workspace' = a metaphor
of apparent brain operation. Franklin writes one. Afterwards, you're
standing next to it, wondering as to its performance. What part of its
behaviour has any direct bearing on how a brain works? It predicts nothing
neuroscience can poke a stick at. All you can say is that the computer is
manipulating abstractions according to a model of brain material. At best
you get to be quite right and prove nothing. If the beastie also
underperforms then you have seeds for doubt that also prove nothing.

CS as 'science' has always had this problem. AGI merely inherits its
implications in a particular context/speciality. There's nothing bad or good
- merely justified limits as to how CS and AGI may interact via brain
science. 
----------------
I agree with your:

"At the other end of things, physicists are increasingly viewing physical
reality as a computation, and thus the science of computation (and
communication which is a part of it), such as information theory, have begun
to play an increasingly important role in the most basic of all sciences."

I would advocate physical reality (all of it) as literally computation in
the sense of information processing. Hold a pencil up in front of your face
and take a look at it... realise that the universe is 'computing a pencil'.
Take a look at the computer in front of you: the universe is 'computing a
computer'. The universe is literally computing YOU, too. The computation is
not 'about' a pencil, a computer, a human. The computation IS those things.
In exactly this same sense I want the universe to 'compute' an AGI
(inorganic general intelligence). To me, then, this is not manipulating
abstractions ('aboutnesses') - which is the sense meant by CS generally and
what actually happens in reality in CS. 

So despite some agreement as to words - it is in the details we are likely
to differ. The information processing in the natural world is not that which
is going on in a model of it. As Edelman said (1), "A theory to account for a
hurricane is not a hurricane". In exactly this way a
computational-algorithmic process "about" intelligence cannot a-priori be
claimed to be the intelligence of that which was modelled. Or - put yet
another way: That {THING behaves 'abstract- RULE-ly'} does not entail that
{anything manipulated according to abstract-RULE will become THING}. The
only perfect algorithmic (100% complete information content) description of
a thing is the actual thing, which includes all 'information' at all
hierarchical descriptive levels, simultaneously.
--------------------
I disagree with:

"But the brain is not part of an eternal verity.  It is the result of the
engineering of evolution. "

Unless I've missed something ... The natural evolutionary 'engineering' that
has been going on has not been the creation of a MODEL (aboutness) of things
- the 'engineering' has evolved the construction of the actual things. The
two are not the same. The brain is indeed 'part of an eternal verity' - it
is made of natural components operating in a fashion we attempt to model as
'laws of nature'. Those models, abstracted and shoehorned into a computer -
are not the same as the original. To believe that they are is one of those
Occam's Razor violations I pointed out before my xmas shopping spree (see
previous-1 post). 
-----------------------

Anyway, for these reasons, folks who use computer models to study human
brains/consciousness will encounter some difficulty justifying, to the basic
physical sciences, claims made as to the equivalence of the model and
reality. That difficulty is fundamental and cannot be 'believed away'. At
the same time it's not a show-stopper; merely something to be aware of as we
go about our duties. This will remain an issue - the only real, certain,
known example of a general intelligence is the human. The intelligence
originates in the brain. AGI and brain science must be literally joined at
the hip or the AGI enterprise is arguably scientifically impoverished
wishful thinking. Which is pretty much what Ben said...although as usual I
have used too many damned words!

I expect we'll just have to agree to disagree... but there you have it :-)

colin hales
(1) Edelman, G. (2003). Naturalizing consciousness: A theoretical framework.
Proc Natl Acad Sci U S A, 100(9), 5520-24.


Ed Porter wrote: 

Colin,

 

From a quick read, the gist of what you are saying seems to be that AGI is
just "engineering", i.e., the study of what man can make and the properties
thereof, whereas "science" relates to the eternal verities of reality.

 

But the brain is not part of an eternal verity.  It is the result of the
engineering of evolution.  

 

At the other end of things, physicists are increasingly viewing physical
reality as a computation, and thus the science of computation (and
communication which is a part of it), such as information theory, have begun
to play an increasingly important role in the most basic of all sciences.

 

And to the extent that the study of the human mind is a "science", then the
study of the types of computation that are done in the mind is part of that
science, and AGI is the study of many of the same functions.

 

So your post might explain the reason for a current cultural divide, but it
does not really provide a justification for it.  In addition, if you attend
events at either MIT's brain study center or its AI center, you will find
that many of the people there are from the other of these two centers, and
that there is a considerable degree of cross-fertilization, whose benefits
I have heard people at such events describe.

 

Ed Porter

 

 

-----Original Message-----
From: Colin Hales [mailto:c.ha...@pgrad.unimelb.edu.au] 
Sent: Monday, December 22, 2008 6:19 PM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a
machine that can learn from experience

 

Ben Goertzel wrote: 

 

On Mon, Dec 22, 2008 at 11:05 AM, Ed Porter <ewpor...@msn.com> wrote:

Ben,

 

Thanks for the reply.

 

It is a shame the brain science people aren't more interested in AGI.  It
seems to me there is a lot of potential for cross-fertilization.



I don't think many of these folks have a principled or deep-seated
**aversion** to AGI work or anything like that -- it's just that they're
busy people and need to prioritize, like all working scientists

There's a more fundamental reason: Software engineering is not 'science' in
the sense understood in the basic physical sciences. Science works to
acquire models of empirically provable critical dependencies (apparent
causal necessities). Software engineering never delivers this. The result of
the work, however interesting and powerful, is a model that is, at best,
merely a correlate of some a-priori 'designed' behaviour. Testing to your
own specification is a normal behaviour in computer science. This is not the
testing done in the basic physical sciences - they 'test' (empirically
examine) whatever is naturally there - which is, by definition, a-priori
unknown. 

No matter how interesting it may be, software tells us nothing about the
actual causal dependencies. The computer's physical hardware (semiconductor
charge manipulation), configured as per the software, is the actual and
ultimate causal necessitator of all the natural behaviour of hot rocks
inside your computer. Software is MANY:1 redundantly/degenerately related to
the physical (natural world) outcomes. The brilliantly useful
'hardware-independence' achieved by software engineering and essentially
analogue electrical machines behaving 'as-if' they were digital - so
powerful and elegant - actually places the status of the software activities
outside the realm of any claims as causal.

This is the fundamental problem that the basic physical sciences have with
computer 'science'. It's not, in a formal sense, a 'science'. That doesn't
mean CS is bad or irrelevant - it just means that its value as a revealer
of the properties of the natural world must be accepted with appropriate
caution.

I've spent tens of thousands of hours testing software that drove all manner
of physical world equipment - some of it the size of a 10 storey building. I
was testing to my own/others specification. Throughout all of it I knew I
was not doing science in the sense that scientists know it to be. The mantra
is "correlation is not causation" and it's beaten into scientist pups from
an early age. Software is a correlate only - it 'causes' nothing. In
critical argument revolving around claims in respect of software as
causality - it would be defeated in review every time. A scientist,
standing there with an algorithm/model of a natural world behaviour, knows
that the model does not cause the behaviour. However, the scientist's model
represents a route to predictive efficacy in respect of a unique natural
phenomenon. Computer software does not predict the causal origination of the
natural world behaviours driven by it. 10 compilers could produce 10
different causalities on the same computer. 10 different computers running
the same software would produce 10 different lots of causality.

That's my take on why the basic physical sciences may be under-motivated to
use AGI as a route to the outcomes demanded of their field of interest =
'Laws/regularities of Nature'. It may be that computer 'science' generally
needs to train people better in their understanding of science. As an
engineer with a foot in both camps it's not so hard for me to see this. 

Randall Beer called software "tautologous" as a law of nature... I think it
was here:
Beer, R. D. (1995). A Dynamical-Systems Perspective on Agent Environment
Interaction. Artificial Intelligence, 72(1-2), 173-215.
I have a .PDF if anyone's interested...it's 3.6MB though.
 
cheers
colin hales

  _____  


agi |  <https://www.listbox.com/member/archive/303/=now> Archives
<https://www.listbox.com/member/archive/rss/303/> |
<https://www.listbox.com/member/?&;> Modify Your Subscription

 <http://www.listbox.com> 

 



 



