=======Colin said======>

The tacit assumption is that the models thus implemented on a computer
will/can 'behave' indistinguishably from the real thing, when what you are
observing is a model of the real thing, not the real thing.

===ED's reply===>

I was making no assumption that the model would behave indistinguishably
from the real thing, but instead only that there were meaningful --- and,
from a cross-fertilization standpoint, informative --- levels of description
at which the computer model and the corresponding brain behavior were
similar.

 


=======Colin said======>

There's a boundary to cross - when you claim to have access to human-level
intellect - then you are demanding an equivalence with a real human, not a
model of a human. 

===ED's reply===>

When I, and presumably many other AGIers, say "human-level AGI", we do not
mean an exact functional replica of the human brain or mind.  Rather we mean
an AGI that can do things like speak and understand natural language, see
and understand the meaning of its visual surroundings, reason from the rough
equivalent of human-level world knowledge, have common sense, do creative
problem solving, and other mental tasks --- substantially as well as most
people.  Its methods of computation do not have to be exactly like those
used in the mind; the major issue is that its competencies be at least
roughly as good over a range of talents. 

 

=======Colin said======>

I don't think there's any real issue here. Mostly semantics being mixed a
bit.

Gotta get back to xmas! Yuletide stuff to you. 

===ED's reply===>

Agreed.

 

Ed Porter

 

 

-----Original Message-----
From: Colin Hales [mailto:[email protected]] 
Sent: Tuesday, December 23, 2008 7:55 PM
To: [email protected]
Subject: Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a
machine that can learn from experience

 

Ed,
Comments interspersed below:

Ed Porter wrote: 

Colin,

 

Here are my comments re  the following parts of your below post:

 

=======Colin said======>

I merely point out that there are fundamental limits as to how computer
science (CS) can inform/validate basic/physical science (in an AGI
context, brain science). Take the Baars/Franklin "IDA" project. It predicts
nothing neuroscience can poke a stick at.

 

===ED's reply===>

Different AGI models can have different degrees of correspondence to, and
different explanatory relevance to, what is believed to take place in the
brain.  For example, Thomas Serre's PhD thesis "Learning a Dictionary of
Shape-Components in Visual Cortex: Comparison with Neurons, Humans and
Machines," at
http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf
, is a computer simulation which is rather similar to my concept of how a
Novamente-like AGI could perform certain tasks in visual perception, and yet
it is designed to model the human visual system to a considerable degree.
It shows that a certain model of how Serre and Poggio think a certain aspect
of the human brain works does in fact work surprisingly well when simulated
in a computer.
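[For readers unfamiliar with that architecture, here is a minimal toy sketch
of its core idea --- this is my own illustrative code, not Serre's: alternating
template-matching ("S") layers and max-pooling ("C") layers, loosely
mirroring simple and complex cells in visual cortex. Template count, patch
sizes, and pooling width are arbitrary choices for the sketch.]

```python
import numpy as np

def s_layer(image, templates):
    """S-layer: respond strongly where an image patch resembles a template
    (Gaussian radial-basis tuning)."""
    h, w = image.shape
    th, tw = templates[0].shape
    maps = []
    for t in templates:
        out = np.zeros((h - th + 1, w - tw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = image[i:i + th, j:j + tw]
                out[i, j] = np.exp(-np.sum((patch - t) ** 2))
        maps.append(out)
    return maps

def c_layer(maps, pool=2):
    """C-layer: gain tolerance to position by taking local maxima."""
    pooled = []
    for m in maps:
        h, w = m.shape
        p = m[:h - h % pool, :w - w % pool]
        p = p.reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
        pooled.append(p)
    return pooled

rng = np.random.default_rng(0)
image = rng.random((16, 16))
templates = [rng.random((3, 3)) for _ in range(4)]
features = c_layer(s_layer(image, templates))
print(len(features), features[0].shape)
```

[Stacking more such S/C pairs, with learned rather than random templates, is
the gist of the hierarchy the thesis compares against neurons and humans.]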

 

A surprisingly large number of brain science papers are based on computer
simulations, many of which are substantially simplified models, but they do
give neuroscientists a way to poke a stick at various theories they might
have for how the brain operates at various levels of organization.  Some of
these papers are directly relevant to AGI.  And some AGI papers are directly
relevant to providing answers to certain brain science questions.

You are quite right! Realistic models can be quite informative and feed back
- suggesting new empirical approaches. There can be great
cross-fertilisation.

However the point is irrelevant to the discussion at hand.

 The phrase "does in fact work surprisingly well when simulated in a
computer" illustrates the confusion. 'Work'? According to whom?
"Surprisingly well"? By what criteria? The tacit assumption is that the
models thus implemented on a computer will/can 'behave' indistinguishably
from the real thing, when what you are observing is a model of the real
thing, not the real thing.

<HERE> If you are targeting AGI with a benchmark/target of human intellect
or problem solving skills, then the claim made on any/all models is that
models can attain that goal. A computer implements a model. To make a claim
that a model completely captures the reality upon which it was based, you
need to have a solid theory of the relationships between models and reality
that is not wishful thinking or assumption, but solid science. Here's where
you run into the problematic issue that basic physical sciences have with
models.

There's a boundary to cross - when you claim to have access to human-level
intellect - then you are demanding an equivalence with a real human, not a
model of a human. 

 

=======Colin said======>

I agree with your:

"At the other end of things, physicists are increasingly viewing physical
reality as a computation, and thus the science of computation (and
communication which is a part of it), such as information theory, have begun
to play an increasingly important role in the most basic of all sciences."


===ED's reply===>

We are largely on the same page here.

 

=======Colin said======>

I disagree with:

"But the brain is not part of an eternal verity.  It is the result of the
engineering of evolution. "

Unless I've missed something ... The natural evolutionary 'engineering' that
has been going on has not been the creation of a MODEL (aboutness) of things
- the 'engineering' has evolved the construction of the actual things. The
two are not the same. The brain is indeed 'part of an eternal verity' - it
is made of natural components operating in a fashion we attempt to model as
'laws of nature'...

 

===ED's reply===>

If you define engineering as a process that involves designing something in
the abstract --- i.e., in your "a MODEL (aboutness of things)" --- before
physically building it, you could claim evolution is not engineering.  

 

But if you define engineering as the designing of things --- by whatever
method, intelligent or not --- to solve a set of problems or constraints,
then evolution does perform engineering, and the brain was formed by such
engineering.

 

How can you claim the human brain is an eternal verity, when it is believed
to have existed in anything close to its current form for only the last 30
to 100 thousand years, and there is no guarantee how much longer it will
continue to exist?  Compared to much of what the natural sciences study, its
existence appears quite fleeting.

I think this is just a terminology issue. The 'laws of nature' are the
eternal verity, to me. The dynamical output they represent - of course that
does whatever it does. The universe is an intrinsically dynamic entity at
all levels. Even the persistent expression of total randomness is an
'eternal verity'. No real issue here.



 

=======Colin said======>

Anyway, for these reasons, folks who use computer models to study human
brains/consciousness will encounter some difficulty justifying, to the basic
physical sciences, claims made as to the equivalence of the model and
reality. That difficulty is fundamental and cannot be 'believed away'. 

 

===ED's reply===>

If you attend brain science lectures and read brain science literature, you
will find that computer modeling is playing an ever increasing role in brain
science --- so this basic difficulty that you describe largely does not
exist.

 

I think you've missed the actual point at hand for the reasons detailed
<HERE>. 



=======Colin said======>

The intelligence originates in the brain. AGI and brain science must be
literally joined at the hip or the AGI enterprise is arguably scientifically
impoverished wishful thinking. 

===ED's reply===>

I don't know what you mean by "joined at the hip," but I think it is being
overly anthropomorphic to think an artificial mind has to slavishly model a
human brain to have great power and worth.  

 

But I do think it would probably have to accomplish some of the same general
functions, such as automatic pattern learning, credit assignment, attention
control, etc.
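[To make those three general functions concrete, here is a purely
illustrative toy sketch --- the class, constants, and reward setup are my own
invention, not from Novamente or any other AGI system: value estimates stand
in for learned patterns, an eligibility trace does crude credit assignment,
and greedy selection with occasional exploration stands in for attention
control.]

```python
import random

class TinyAgent:
    def __init__(self, actions):
        self.value = {a: 0.0 for a in actions}  # learned "pattern": action -> estimated worth
        self.trace = {a: 0.0 for a in actions}  # eligibility trace for credit assignment

    def act(self):
        # Attention control: mostly focus on the currently best-valued
        # action, with a 10% chance of exploring something else.
        if random.random() < 0.1:
            a = random.choice(list(self.value))
        else:
            a = max(self.value, key=self.value.get)
        for k in self.trace:
            self.trace[k] *= 0.9          # decay old eligibility
        self.trace[a] = 1.0               # mark the chosen action as eligible
        return a

    def learn(self, reward):
        # Credit assignment: move each action's value toward the reward,
        # in proportion to how recently it was chosen.
        for a, e in self.trace.items():
            self.value[a] += 0.1 * e * (reward - self.value[a])

random.seed(1)
agent = TinyAgent(["a", "b", "c"])
for _ in range(200):
    choice = agent.act()
    agent.learn(1.0 if choice == "b" else 0.0)  # "b" is secretly rewarding
print(agent.value)
```

[The point of the sketch is only that these functions can be specified
abstractly, without reference to how neurons implement them.]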

 

Ed Porter

We are all enthusiastically intent on creating artificial entities with some
kind of usefulness (= great power and worth).  However, AGI --- Artificial
General Intelligence --- seeks to create power and worth through a claim
that 'general intelligence' has been delivered. This is not merely the "same
general functions"; it is actual general intelligence. The statement "a
model of general intelligence" is oxymoronic: if you can deliver general
intelligence then you are not delivering a model of it, you are delivering
actual general intelligence. To use models as a basis for it you need a
scientific basis for the claim that the models used to implement the AGI can
(in theory) deliver identical behaviour = general intelligence. Models of a
human brain could be involved. Models of outward human behaviour could be
involved. In any case, each AGI-er needs a cogent, scientifically based
claim in respect of the models as deliverers of the claimed outcomes --- or
the beliefs underlying the AGI-er's approach have a critical weakness in the
eyes of science. 

I don't think there's any real issue here. Mostly semantics being mixed a
bit.

Gotta get back to xmas! Yuletide stuff to you. 

Colin

 




-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com
