On 05.04.2012 20:07 meekerdb said the following:
On 4/4/2012 11:58 AM, Evgenii Rudnyi wrote:
The term late error detection as such could indeed be employed without
consciousness. Yet, Jeffrey Gray gives it a special meaning that I
will try to describe briefly below.

Jeffrey Gray in his book speaks about conscious experience, that is,
exactly about qualia. Self, mind, and intellect as such are not there.

He first tried hard to put conscious experience into the framework
of normal science (I guess that he means physicalism here), but
then he shows that conscious experience cannot be explained by the
theories within normal science (functionalism, neural correlates of
consciousness, etc.).

According to him, conscious experience is some multipurpose display.
How Nature produces it is yet to be found, but at the moment this is
not that important.

Display to whom? The homunculus?

No, he creates an interesting scheme to escape the homunculus:

p. 110. “(1) the unconscious brain constructs a display in a medium, that of conscious perception, fundamentally different from its usual medium of electrochemical activity in and between nerve cells;

(2) it inspects the conscious constructed display;

(3) it uses the results of the display to change the working of its usual electrochemical medium.”

Hence the unconscious brain does the job. I should say that this does not answer my personal inquiry into how I perceive a three-dimensional world, but that is another problem. In his book, Jeffrey Gray offers quite a plausible scheme.
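Gray's three-step scheme can be sketched in code. This is a purely illustrative toy (all names and values are hypothetical, not from the book): the point is that a single system, with no second observer inside it, constructs the display, inspects it, and feeds the result back into its own ordinary workings.

```python
# Toy sketch of the quoted three-step scheme. One class plays all three
# roles, which is how the scheme avoids a homunculus: there is no
# separate inspector, only the system acting on itself.

class UnconsciousBrain:
    def __init__(self):
        # The "usual medium" of electrochemical activity, modeled as a dict.
        self.electrochemical_state = {"activity": 0.0}

    def construct_display(self) -> dict:
        # (1) Construct a display in a different medium (here, a new dict).
        return {"percept": self.electrochemical_state["activity"]}

    def inspect(self, display: dict) -> float:
        # (2) Inspect the constructed display.
        return display["percept"]

    def update(self, inspected: float) -> None:
        # (3) Use the result to change the usual medium.
        self.electrochemical_state["activity"] = inspected + 1.0


brain = UnconsciousBrain()
brain.update(brain.inspect(brain.construct_display()))
print(brain.electrochemical_state["activity"])  # → 1.0
```

Note that `construct_display`, `inspect`, and `update` are all methods of the same object: the loop closes on itself rather than terminating in an inner viewer.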


He considers an organism from a cybernetic viewpoint, as a bunch of
feedback mechanisms (servomechanisms). A servomechanism needs a goal
to be set and a comparator that compares the goal with reality. This
might function okay at the unconscious level, but conscious
experience binds everything together in its display.
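The servomechanism picture above can be sketched minimally. This is my own illustrative toy (function names, the gain value, and the goal are assumptions, not Gray's): a goal, a comparator producing an error signal, and a feedback step that reduces the error.

```python
# Minimal servomechanism sketch: goal + comparator + corrective feedback.

def comparator(goal: float, actual: float) -> float:
    """Error signal: how far reality deviates from the goal."""
    return goal - actual

def servo_step(goal: float, actual: float, gain: float = 0.5) -> float:
    """One feedback step: move the state a fraction of the error."""
    return actual + gain * comparator(goal, actual)

# Run one servomechanism toward its goal of 1.0.
state = 0.0
for _ in range(10):
    state = servo_step(goal=1.0, actual=state)

print(round(state, 4))  # → 0.999
```

Each servo of this kind can run on its own, unconsciously; Gray's claim concerns what additionally happens when many such error signals are bound together in one display.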

But why is the binding together conscious?

There is no answer to this question yet. This is just his hypothesis based on experimental research. In a way, this is a description of experiments. The question of why requires a theory, and that is not there yet.

This binding happens not only between different senses (multimodal
binding) but also within a single sense (intramodal binding). For
example, we consciously experience a red kite as a whole, although in
the brain lines, colors, and surfaces are processed independently.
Yet we cannot consciously experience a red kite other than as a
whole; just try it.

Actually I can. It takes some practice, but if, for example, you are a
painter, you learn to see things as separate patches of color. As an
engineer I can see a kite as structural and aerodynamic elements.

If you indeed visually experience this, it might be good to take an MRI test to see the difference from others. This way you would help to develop the theory of consciousness.

I understand what you say, and I can imagine a kite as a bunch of masses, springs and dampers, but I cannot visually experience this when I observe the kite. I can visually experience this only when I draw it on paper.


Hence the conscious display gives a new opportunity to compare
expectations with reality, and Jeffrey Gray refers to this as late
error detection.

But none of that explains why it is necessarily conscious. Is he
contending that any comparison of expectations with reality
instantiates consciousness? So if a Mars Rover uses some predictive
program about what's over the hill, and then later compares that with
what is actually over the hill, will it be conscious?

He just describes experimental results. He has conscious experience, he has a brain, MRI shows activities in the brain; then another person in similar circumstances shows similar activities in the brain and states that he has conscious experience. Hence it is logical to suppose that the brain produces conscious experience.

There is no discussion in his book of whether this is necessarily conscious. There are no experimental results to discuss that. As for the Mars Rover, in his book there is a statement that ascribing consciousness to robots is not grounded scientifically. There are no experimental results in this respect to discuss.

That is, there is a bunch of servomechanisms running on their own,
but conscious experience then allows the brain to synchronize
everything together. This is a clear advantage from the evolutionary
viewpoint.

It's easy to say consciousness does this and that and to argue that
since these things are evolutionarily useful that's why consciousness
developed. But what is needed is saying why doing this and that rather
than something else instantiates consciousness.

This remains the Hard Problem. There is no solution to it in the book.

It seems that Gray is following my idea that the question of qualia,
Chalmers's 'hard problem', will simply be bypassed. We will learn how
to make robots that act conscious, and we will just say consciousness
is an operational attribute.

No, his statement is that this phenomenon does not fit in normal science. He considers current theories of consciousness, including epiphenomenalism, functionalism, and neural correlates of consciousness, and his conclusion is that these theories cannot describe the observations; that is, the Hard Problem remains.

Evgenii

Brent


Evgenii


--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.
