Richard Loosemore wrote:
Colin Hales wrote:
Dear Richard,
I have an issue with the 'falsifiable predictions' being used as
evidence of your theory.
The problem is that right or wrong...I have a working physical model
for consciousness. Predictions 1-3 are something that my hardware can
do easily. In fact that kind of experimentation is in my downstream
implementation plan. These predictions have nothing whatsoever to do
with your theory or mine or anyone's. I'm not sure about prediction 4.
It's not something I have thought about, so I'll leave it aside for
now. In my case, in the second stage of testing of my chips, one of
the things I want to do is literally 'Mind Meld', forming a bridge of
4 sets of compared, independently generated qualia. Ultimately the
chips may be implantable, which means a human could experience what
they generate in the first person...but I digress....
Your statement "This theory of consciousness can be used to make some
falsifiable predictions" could be replaced by "ANY theory of
consciousness can be used to make falsifiable predictions 1-4 as
follows." That basically says they are not predictions that falsify
anything at all, in which case the predictions cannot be claimed to
support your theory. The problem is that the evidence of predictions
1-4 acts merely as a correlate. It does not test any particular
critical dependency (causality origins). The predictions are merely
correlates of any theory of consciousness. They do not test the
causal necessities. In any empirical science paper such evidence could
not be held in support of the claim and would be discounted as
evidence of your mechanism. I could cite 10 different
computationalist AGI knowledge metaphors in the sections preceding
the 'predictions' and the result would be the same.
So... if I were a reviewer I'd be unable to accept the claim that your
'predictions' actually said anything about the theory preceding them.
This would seem to be the problematic issue of the paper. You might
want to take a deeper look at this issue and try to isolate something
unique to your particular solution - which has a real critical
dependency in it. Then you'll have an evidence base of your own that
people can use independently. In this way your proposal could be
seen to be scientific in the dry empirical sense.
By way of example... a computer program is not scientific evidence
of anything. The computer materials, as configured by the program,
actually causally necessitate the behaviour. The program is a
correlate. A correlate has the formal evidentiary status of
'hearsay'. This is the sense in which I invoke the term 'correlate'
above.
BTW I have fallen foul of this problem myself... I had to look
elsewhere for a real critical dependency, like I suggested above. You
never know, you might find one in there someplace! I found one after
a lot of investigation. You might, too.
Regards,
Colin Hales
Okay, let me phrase it like this: I specifically say (or rather I
should have done... this is another thing I need to make more
explicit!) that the predictions are about making alterations at
EXACTLY the boundary of the "analysis mechanisms".
So, when we test the predictions, we must first understand the
mechanics of human (or AGI) cognition well enough to be able to locate
the exact scope of the analysis mechanisms.
Then, we make the tests by changing things around just outside the
reach of those mechanisms.
Then we ask subjects (human or AGI) what happened to their subjective
experiences. If the subjects are ourselves - which I strongly suggest
must be the case - then we can ask ourselves what happened to our
subjective experiences.
My prediction is that if the swaps are made at that boundary, then
things will be as I state. But if changes are made within the scope
of the analysis mechanisms, then we will not see those changes in the
qualia.
So the theory could be falsified if changes in the qualia are NOT
consistent with the theory, when changes are made at different points
in the system. The theory is all about the analysis mechanisms being
the culprit, so in that sense it is extremely falsifiable.
Now, correct me if I am wrong, but is there anywhere else in the
literature where you have seen anyone make a prediction that the
qualia will be changed by the alteration of a specific mechanism, but
not by other, fairly similar alterations?
Richard Loosemore
At the risk of lecturing the already-informed: qualia generation has
already been highly localised to specific regions of *cranial* brain
material. Qualia are not in the periphery. Qualia are not in the spinal
CNS. Qualia are not in the cranial periphery, e.g. the eyes or lips.
Qualia are generated in specific CNS cortical and basal regions. So
anyone who thinks they have a mechanism consistent with physiological
knowledge could conceive of alterations reconnecting periphery and
cranial CNS processes, or merging brains, and thereby swap/alter/share
qualia or make new qualia in the manner you describe. One 'swap' of the
kind you speak of is the surgical cross-over of your right index and
middle finger nerves: you touch your middle finger... you feel as if
your index finger was touched. This is very old physiology.
The kinds of qualia-swapping and merging effects you use are old news,
in the sense that there is a plethora of such thought experiments in
the literature... The uniqueness of your proposal is that you are at
least attempting to
connect real physical phenomena in brain material to an AGI outcome. You
are also directly forcing the AGI discipline to deal with qualia
properly and explicitly. All this is good.
BUT...
If you are going to do this then you need to make sure it makes sense.
The predictions are blurry in that it is not clear where alterations to
a real brain are made and then examined under some claimed equivalence
in an AGI. You start with "they will have to await the development of
the kind of nanotechnology that lets us rewire our brains on the
fly"... where we are clearly talking about brain material... then in
the predictions underneath you speak of 'concept atoms', which cannot
be in a real brain... so I assume that someplace in the testing we are
now actually testing an AGI based on the 'concept atom' metaphor?
Exactly what is being tested, when, and who is judging what?
Can you clarify? Then I might be able to make sense of it. The more I
look the more confused I get.
cheers,
colin hales