Hi Mike,
I can give the highly abridged flow of the argument:
1) It refutes COMP, where COMP = Turing-machine-style abstract symbol
manipulation; in particular, the 'digital computer' as we know it.
2) The refutation happens in one highly specific circumstance. Being
false in that circumstance, it is false as a general claim.
3) The circumstance: if COMP is true then it should be able to
implement an artificial scientist with the following faculties:
(a) scientific behaviour (goal-delivery of a 'law of nature', an
abstraction BEHIND the appearances of the distal natural world, not
merely the report of what is there),
(b) scientific observation based on the visual scene,
(c) scientific behaviour in an encounter with radical novelty. (This
is what humans do)
The argument's empirical knowledge is:
1) The visual scene is visual phenomenal consciousness. A highly
specified occipital lobe deliverable.
2) In the context of a scientific act, scientific evidence is 'contents
of phenomenal consciousness'. You can't do science without it. In the
context of this scientific act, visual P-consciousness and scientific
evidence are identities. P-consciousness is necessary but on its own is
not sufficient. Extra behaviours are needed, but these are a secondary
consideration here.
NOTE: Do not confuse "scientific observation" with "scientific
measurement", which is a collection of causality located in the distal
external natural world. (Scientific measurement is not the same thing as
scientific evidence in this context.) The necessary feature of a visual
scene is that it operate whilst faithfully inheriting the actual
causality of the distal natural world. You cannot acquire a law of
nature without this basic need being met.
3) Basic physics says that it is impossible for a brain to create a
visual scene using only the inputs acquired by the peripheral stimulus
received at the retina. This is due to fundamentals of quantum
degeneracy. Basically there are an infinite number of distal external
worlds that can deliver the exact same photon impact. The transduction
that occurs in the retinal rods/cones is entirely a result of protein
isomerisation. All information about distal origins is irretrievably
gone. An impacting photon could have come from across the room or from
across the galaxy. There is no information about origins in the
transduced data in the retina.
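The degeneracy in (3) can be illustrated with a toy sketch (my own
illustration, not from the papers): under a simple pinhole-projection
model of the eye, every distal point lying on the same ray projects to
the identical retinal coordinate, so the transduced data alone carry no
information about distal depth or origin. The function and point values
below are hypothetical, chosen only to make the degeneracy visible.

```python
# Toy pinhole-projection model (illustrative assumption, not the
# author's formalism): a 3D point (x, y, z) lands on the image plane
# at (f*x/z, f*y/z). Scaling a point by any factor lambda > 0 leaves
# its image unchanged, so infinitely many distal worlds produce the
# exact same retinal impact.

def project(point, focal=1.0):
    """Pinhole projection of a 3D point onto the retinal image plane."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

near = (0.2, 0.1, 1.0)        # a point 1 unit away
far = (200.0, 100.0, 1000.0)  # a point 1000 units away, on the same ray

print(project(near))  # (0.2, 0.1)
print(project(far))   # (0.2, 0.1) -- identical retinal coordinate
```

The two points differ by three orders of magnitude in distance, yet the
retina receives exactly the same stimulus from either.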
That established, you are then faced with a paradox:
(i) (3) says a visual scene is impossible.
(ii) Yet the brain makes one.
(iii) To make the scene some kind of access to distal spatial relations
must be acquired as input data in addition to that from the retina.
(iv) There are only two places that data can come from...
(a) via matter (which we already have - retinal impact at the
boundary that is the agent periphery)
(b) via space (at the boundary of the matter of the brain with
space, the biggest boundary by far).
So, the conclusion is that the brain MUST acquire the necessary data via
the spatial boundary route. You don't have to know how. You just have no
other choice. There is no third party in there to add the necessary data
and the distal world is unknown. There is literally nowhere else for the
data to come from. Matter and space exhaust the list of options. (There
is always magical intervention ... but I leave that to the space cadets.)
That's probably the main novelty for the reader to encounter. But we
are not done yet.
Next empirical fact:
(v) When you create a Turing-COMP substrate the interface with space is
completely destroyed and replaced with the randomised machinations of
the matter of the computer manipulating a model of the distal world. All
actual relationships with the real distal external world are destroyed.
In that circumstance the COMP substrate is implementing the science of
an encounter with a model, not an encounter with the actual distal
natural world.
No amount of computation can make up for that loss, because you are in a
circumstance of an intrinsically unknown distal natural world, (the
novelty of an act of scientific observation).
=> COMP is false.
======
OK. There are subtleties here.
The refutation is, in effect, a result of saying you can't do it
(replace a scientist with a computer) because you can't simulate inputs.
It is just that the nature of 'inputs' has traditionally been
impoverished by assumptions born merely of cross-disciplinary blindness.
Not enough quantum mechanics or electrodynamics is done by those exposed
to 'COMP' principles.
This result, at first appearance, says "you can't simulate a scientist".
But you can! If you already know what is out there in the natural world
then you can simulate a scientific act. But you don't - by definition -
you are doing science to find out! So it's not that you can't simulate a
scientist; it is just that in order to do it you would already have to
know everything, so you don't want to ... it's useless. So the words
'refutation of COMP by an attempted COMP implementation of a scientist'
have to be carefully contrasted with the words "you can't simulate a
scientist".
The self referential use of scientific behaviour as scientific evidence
has cut logical swathes through all sorts of issues. COMP is only one of
them. My AGI benchmark and design aim is "the artificial scientist".
Note also that this result does not imply that real AGI can only be
organic like us. It means that real AGI must have new chips that fully
capture all the inputs and make use of them to acquire knowledge the way
humans do. A separate matter altogether. COMP, as an AGI designer's
option, is out of the picture.
I think this just about covers the basics. The papers are dozens of
pages. I can't condense it any more than this. I have debated this so
much it's way past its use-by date. Most of the arguments go like this:
"But you CAN!...". I am unable to defend such 'arguments from
under-informed-authority' ... I defer to the empirical reality of the
situation and would prefer that it be left to justify itself. I did not
make any of it up. I merely observed ... and so if you don't mind I'd
rather leave the issue there.
regards,
Colin Hales
Mike Tintner wrote:
Colin:
1) Empirical refutation of computationalism...
.. interesting because the implication is that if anyone
doing AGI lifts their finger over a keyboard thinking they can be
directly involved in programming anything to do with the eventual
knowledge of the creature...they have already failed. I don't know
whether the community has internalised this yet.
Colin,
I'm sure Ben is right, but I'd be interested to hear the essence of
your empirical refutation. Please externalise it so we can internalise
it :)
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com