Colin,

 

From a quick read, the gist of what you are saying seems to be that AGI is
just "engineering", i.e., the study of what man can make and the properties
thereof, whereas "science" relates to the eternal verities of reality.

 

But the brain is not part of an eternal verity.  It is the product of
evolution's engineering.

 

At the other end of things, physicists are increasingly viewing physical
reality as a computation, and thus the sciences of computation and
communication, such as information theory, have begun to play an
increasingly important role in the most basic of all sciences.

 

And to the extent that the study of the human mind is a "science", the study
of the types of computation done in the mind is part of that science, and
AGI is the study of many of the same functions.

 

So your post might explain the reason for the current cultural divide, but
it does not really justify it.  In addition, if you attend events at either
MIT's brain-research center or its AI center, you will find that many of the
people there come from the other of these two centers, and that there is a
considerable degree of cross-fertilization, whose benefits I have heard
people at such events describe.

 

Ed Porter

 

 

-----Original Message-----
From: Colin Hales [mailto:c.ha...@pgrad.unimelb.edu.au] 
Sent: Monday, December 22, 2008 6:19 PM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a
machine that can learn from experience

 

Ben Goertzel wrote: 

 

On Mon, Dec 22, 2008 at 11:05 AM, Ed Porter <ewpor...@msn.com> wrote:

Ben,

 

Thanks for the reply.

 

It is a shame the brain science people aren't more interested in AGI.  It
seems to me there is a lot of potential for cross-fertilization.



I don't think many of these folks have a principled or deep-seated
**aversion** to AGI work or anything like that -- it's just that they're
busy people and need to prioritize, like all working scientists.

There's a more fundamental reason: software engineering is not 'science' in
the sense understood in the basic physical sciences. Science works to
acquire models of empirically provable critical dependencies (apparent
causal necessities). Software engineering never delivers this. The result of
the work, however interesting and powerful, is a model that is, at best,
merely a correlate of some a priori 'designed' behaviour. Testing to your
own specification is normal behaviour in computer science. It is not the
testing done in the basic physical sciences - there you 'test' (empirically
examine) whatever is naturally present, which is, by definition, a priori
unknown.

No matter how interesting it may be, software tells us nothing about the
actual causal dependencies. The computer's physical hardware (semiconductor
charge manipulation), configured as per the software, is the actual and
ultimate causal necessitator of all the natural behaviour of the hot rocks
inside your computer. Software is MANY:1, redundantly/degenerately, related
to the physical (natural-world) outcomes. The brilliantly useful
'hardware independence' achieved by software engineering - essentially
analogue electrical machines behaving 'as if' they were digital, so powerful
and elegant - actually places software activity outside the realm of any
causal claims.

This is the fundamental problem that the basic physical sciences have with
computer 'science'. It is not, in a formal sense, a 'science'. That doesn't
mean CS is bad or irrelevant - it just means that its value as a revealer of
the properties of the natural world must be accepted with appropriate
caution.

I've spent tens of thousands of hours testing software that drove all manner
of physical-world equipment - some of it the size of a 10-storey building. I
was testing to my own/others' specification. Throughout all of it I knew I
was not doing science in the sense that scientists know it to be. The mantra
is "correlation is not causation", and it's beaten into scientist pups from
an early age. Software is a correlate only - it 'causes' nothing. In any
critical argument over claims of software as causation, it would be defeated
in review every time. A scientist, standing there with an algorithm/model of
a natural-world behaviour, knows that the model does not cause the
behaviour. However, the scientist's model represents a route to predictive
efficacy in respect of a unique natural phenomenon. Computer software does
not predict the causal origination of the natural-world behaviours driven by
it. Ten compilers could produce ten different causalities on the same
computer. Ten different computers running the same software would produce
ten different lots of causality.

That's my take on why the basic physical sciences may be under-motivated to
use AGI as a route to the outcomes demanded of their field of interest -
'laws/regularities of nature'. It may be that computer 'science' generally
needs to train people better in their understanding of science. As an
engineer with a foot in both camps, it's not so hard for me to see this.

Randall Beer called software "tautologous" as a law of nature... I think it
was here:
Beer, R. D. (1995). A dynamical systems perspective on agent-environment
interaction. Artificial Intelligence, 72(1-2), 173-215.
I have a PDF if anyone's interested... it's 3.6 MB, though.
 
cheers
colin hales
