> I completed the first draft of a technical paper on consciousness the
> other day. It is intended for the AGI-09 conference, and it can be
> found at:
Hi Richard,
I don't have any comments yet on what you've written, because I'm not
sure I fully understand what you're trying to say. I hope your answers
to these questions will help clarify things.
It seems to me that your core argument goes something like this:
1. There are many concepts for which an introspective analysis can
only return the concept itself.
2. This recursion blocks any possible explanation.
3. Consciousness is one of these concepts, because "self" is inherently
recursive.
4. Therefore, consciousness is explicitly blocked from having any kind
of explanation.
Is this correct? If not, how have I misinterpreted you?
I have a thought experiment that might help me understand your ideas:
If we have a robot designed according to your molecular model, and we
then ask the robot "what exactly is the nature of red" or "what is it
like to experience the subjective essence of red", the robot may analyze
this concept, ultimately bottoming out on an "incoming signal line".
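To make sure I'm picturing this correctly, here is a toy sketch of the
"bottoming out" I have in mind. The concept graph and function names are
entirely my own illustrative inventions, not anything from your paper:
introspective analysis keeps unpacking a concept into its parts until it
hits a primitive with no further decomposition.

```python
# Hypothetical concept graph: each concept maps to its constituent concepts.
# "incoming signal line" is the primitive where analysis bottoms out.
CONCEPT_GRAPH = {
    "red": ["color-sensation"],
    "color-sensation": ["incoming signal line"],
    "incoming signal line": [],  # primitive: no further decomposition
}

def analyze(concept, trace=None):
    """Recursively unpack a concept, recording the chain of analysis."""
    if trace is None:
        trace = [concept]
    for part in CONCEPT_GRAPH.get(concept, []):
        trace.append(part)
        analyze(part, trace)
    return trace

# The analysis of "red" terminates at the primitive signal line.
print(analyze("red"))
```

On this toy picture, asking "what is red?" can only ever return the
chain down to the primitive, and nothing beyond it.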
But what if this robot is intelligent and can study other robots? It
might examine another robot and see what actually happens when that
robot's analysis bottoms out on an "incoming signal line": the signal
line is activated by electromagnetic energy of a certain frequency; the
object recognition routines identify patterns in the "signal lines";
when an object is identified, it gets annotated with texture and color
information from its sensations; and a particular software module
injects all that information into the foreground memory. It might
conclude that "experiencing red", for the other robot, consists of
having sensors inject atoms into foreground memory. It could then
explain how the current context of that robot's foreground memory
interacts with the changing sensations injected into it, making that
experience 'meaningful' to the robot.
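The third-person pipeline I'm describing could be sketched roughly like
this. Again, every name and threshold here is an assumption of mine for
illustration (including the ~620-750 nm band for "red"), not a claim
about your molecular model:

```python
def activate_signal_line(wavelength_nm):
    # Electromagnetic energy of a certain frequency activates the line;
    # roughly 620-750 nm is what we'd label "red" (illustrative band).
    return "red" if 620 <= wavelength_nm <= 750 else "not-red"

def recognize_object(signal_lines):
    # Stand-in for the object recognition routines: identify an object
    # and annotate it with color information from its sensations.
    return {"object": "apple", "color": signal_lines[0]}

def inject_into_foreground(memory, percept):
    # The hypothetical software module that injects the annotated
    # percept into foreground memory.
    memory.append(percept)
    return memory

# An outside observer can trace the whole causal story:
foreground_memory = []
color = activate_signal_line(680)
percept = recognize_object([color])
inject_into_foreground(foreground_memory, percept)
print(foreground_memory)
```

From the outside, "experiencing red" just is this chain of events ending
in an injection into foreground memory, which is what makes the
self-inspection question below interesting to me.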
What if this robot then turns its inspection abilities onto itself? Can
it then analyze "red" any further? How does your theory interpret that
situation?
-Ben
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/