On 4/4/2012 11:58 AM, Evgenii Rudnyi wrote:
The term "late error detection" as such could indeed be employed without consciousness. Yet Jeffrey Gray gives it a special meaning that I will try to describe briefly below.

Jeffrey Gray in his book speaks about conscious experience, that is, exactly about qualia. Self, mind, and intellect as such are not discussed there.

He first tried hard to place conscious experience within the framework of normal science (I guess that he means physicalism here), but then he shows that conscious experience cannot be explained by theories within normal science (functionalism, neural correlates of consciousness, etc.).

According to him, conscious experience is a kind of multipurpose display. It remains to be discovered how Nature produces it, but at the moment this is not that important.

Display to whom? The homunculus?

He considers an organism from a cybernetic viewpoint, as a bunch of feedback mechanisms (servomechanisms). A servomechanism needs a goal and a comparator that compares the goal with reality. This can function perfectly well at the unconscious level, but conscious experience binds everything together in its display.
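The cybernetic picture described above (a goal, a comparator, and corrective action driven by the error signal) can be sketched in a few lines. The function names and the simple proportional-control scheme are my own illustrative choices, not Gray's; the point is only that such a loop runs fine with no "display" anywhere:

```python
# Minimal sketch of a servomechanism: goal, comparator, corrective action.
# All names here are illustrative, not from Gray's book.

def comparator(goal, reality):
    """Return the error signal: the gap between goal and reality."""
    return goal - reality

def servo_step(goal, reality, gain=0.5):
    """One feedback step: move reality toward the goal by a fraction
    (gain) of the current error, as in a proportional controller."""
    error = comparator(goal, reality)
    return reality + gain * error

# Run the loop: the state converges on the goal, entirely "unconsciously".
state = 0.0
for _ in range(20):
    state = servo_step(goal=10.0, reality=state)

print(round(state, 3))  # prints 10.0
```

Nothing in this loop requires, or hints at, a display to which anything is shown; that is exactly where Gray's further claim has to do the work.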

But why is the binding together conscious?

This binding happens not only between different senses (multimodal binding) but also within a single sense (intramodal binding). For example, we consciously experience a red kite as a whole, although in the brain lines, colors, and surfaces are processed independently. Yet we cannot consciously experience a red kite other than as a whole; just try it.

Actually I can. It takes some practice, but if, for example, you are a painter, you learn to see things as separate patches of color. As an engineer, I can see a kite as structural and aerodynamic elements.

Hence the conscious display gives a new opportunity to compare expectations with reality, and Jeffrey Gray refers to this as late error detection.

But none of that explains why it is necessarily conscious. Is he contending that any comparison of expectations with reality instantiates consciousness? So if a Mars rover uses a predictive program about what's over the hill and then later compares that prediction with what is actually over the hill, it will be conscious?
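The rover objection can itself be written down as ordinary code, which makes its force vivid: storing an expectation and comparing it with a later observation is trivially mechanical. This is a sketch of the objection's scenario, with all names invented for illustration:

```python
# Illustrative sketch of "late error detection" in the rover sense:
# record a prediction now, compare it against the observation later.
# Nothing here is from Gray's book; the names are hypothetical.

predictions = {}

def predict(location, expected):
    """Record what the rover expects to find at a location."""
    predictions[location] = expected

def late_error_check(location, observed):
    """Later, compare the stored expectation with the observation.
    Returns True when the prediction failed, i.e. an error is detected."""
    return predictions.get(location) != observed

predict("over_the_hill", "flat plain")
print(late_error_check("over_the_hill", "crater"))      # prints True
print(late_error_check("over_the_hill", "flat plain"))  # prints False
```

If late error detection alone sufficed for consciousness, this dozen-line script would qualify, which is presumably not what Gray intends.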

That is, there is a bunch of servomechanisms running on their own, but conscious experience then allows the brain to synchronize everything together. This is a clear advantage from an evolutionary viewpoint.

It's easy to say that consciousness does this and that, and to argue that since these things are evolutionarily useful, that is why consciousness developed. But what is needed is an explanation of why doing this and that, rather than something else, instantiates consciousness.

It seems that Gray is following my idea that the question of qualia, Chalmers's "hard problem", will simply be bypassed. We will learn how to make robots that act conscious, and we will just say that consciousness is an operational attribute.



You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.