Bruno,

It may well be that you are right. The problem on my side is that my knowledge of mathematics is not sufficient to understand you. For example:

> What do you mean? Some robot, running some program, can be conscious,
> like us (assuming mechanism).

I do not know what 'conscious' means in this context. It may well be that the meaning of 'conscious' in your statement is different from that in Jeffrey Gray's book, or it could be the same; I do not know. I also do not understand what a Löbian machine is. It is above my current knowledge, hence I cannot comment on this.
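
(From the little I have read, a Löbian machine seems to be a universal machine that can prove Löb's formula about its own provability, []([]p -> p) -> []p, where []p reads 'p is provable by the machine'; but I may well have this wrong, so please correct me.)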

In my view, to spread your knowledge it would be good for you to write a book.

Evgenii

On 14.08.2011 14:55 Bruno Marchal said the following:

On 13 Aug 2011, at 16:47, Evgenii Rudnyi wrote:

On 13.08.2011 14:08 Stathis Papaioannou said the following:
On Sat, Aug 13, 2011 at 9:45 PM, Evgenii
Rudnyi <use...@rudnyi.ru> wrote:
If your visual cortex is replaced by an electronic device
that produces the appropriate outputs at its borders, the
rest of your brain will respond normally.

This is just an assumption. I guess that at present one cannot
prove or disprove it. Let me quote an opposite assumption from
Jeffrey Gray (p. 232, section 15.5, 'Consciousness in a brain
slice?').

How could the rest of your brain possibly respond differently if
it receives exactly the same stimulation? Perhaps you mean that
it would be able to tell that there is an artificial device there
due to electric fields and so on; but in that case the artificial
device is not appropriately reproducing the I/O behaviour of the
original tissue.

The question is what 'the same stimulation' means. I guess
that you now mean only electrical signals. However, it may well
be that qualia play a role as well.

If I understand you correctly, you presume that the problem of
conscious experience could be resolved within 'normal science'
(that there is no Hard Problem).

The hard problem can be formulated in 'normal science'. But it is a
taboo subject.



Jeffrey Gray, on the other hand, acknowledges the Hard Problem, and
he believes that a new scientific theory will be needed to solve it.

The theory exists. Computer science and mathematical logic can be
used to formulate the hard problem precisely, and to solve it,
including a 'meta-solution' of the hard part of it.

But the result might contradict the prejudices of the Aristotelians
(the believers in substances, whether or not they are aware that such
a belief is of a 'theological' type).

We don't need a new science; we need only to get back to science.
This means always making explicit the ontological assumptions when
applying a theory to an idea of what reality can be. But most
materialist scientists hate the idea of making explicit that they
*assume* a basic, ontologically real (existing) physical universe.
From a computationalist perspective, with respect to the mind-body
problem, this is a god-of-the-gaps type of use of the notion of a
physical universe. Comp transforms the "hard" problem of
consciousness into an "easy" problem of matter, easy because it is
soluble in computer science (even in number theory).

I think few people realize the impact of the discovery of the
universal machine (or, if you prefer, the discovery of the
Post-Church-Kleene-Turing-Markov thesis).
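
To illustrate what the thesis means concretely (a toy sketch only, with Python's exec standing in for the universal interpreter; the details are inessential): there is one fixed program u such that u(p, x) computes whatever the program described by p computes on the input x.

  # One fixed program u that can run the description of any other program.
  def u(program_text, x):
      env = {}
      exec(program_text, env)   # load the described program f
      return env["f"](x)        # simulate f on the input x

  # u itself never changes; only the description passed to it does.
  print(u("def f(x): return x * x", 7))   # prints 49

The same u can run any program you can describe; that is the whole content of universality.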




"Might it be the case that, if one put a slice of V4 in a dish
in this way, it could continue to sustain colour qualia?
Functionalists have a clear answer to this question: no,
because a slice of V4, disconnected from its normal visual
inputs and motor outputs, cannot discharge the functions
associated with the experience of colour. But, if we had a
theory that started, not from function, but from brain tissue,
maybe it would give a different answer. Alas, no such theory is
to hand. Worse, even one had been proposed, there is no known
way of detecting qualia in a brain slice!".

It's not clear that an isolated piece of brain tissue would have
normal qualia, since producing qualia may require the whole brain,
or at least a large part of it. A neuron in the language centre
won't have an understanding of a small part of the letter "a".

We do not know this now. It was just an idea in the book (among
many other ideas). It seems to me, though, that such an idea is
on the same level as supposing that a robot will automatically
have conscious experience.

What do you mean? Some robot, running some program, can be conscious,
like us (assuming mechanism). This does not mean that *any* robot
would be able to think.

Bruno


http://iridia.ulb.ac.be/~marchal/



