On 6/23/2011 10:29 AM, Rex Allen wrote:
On Tue, Jun 21, 2011 at 1:44 PM, meekerdb <meeke...@verizon.net> wrote:
On 6/21/2011 8:17 AM, Bruno Marchal wrote:
But comp denies that "we can prove that a machine can think". Of course we
can prove that some machine has this or that competence. But for
intelligence/consciousness, this is not possible. (Unless we are not
machines. Some non-machine could prove that some machines are intelligent,
but this is purely academic until we find something which is both a person
and a non-machine.)
But of course we can prove that a machine can think to the same degree we
can prove other people think.  That we cannot prove it from some
self-evident set of axioms is completely unsurprising.  This comports with
my idea that with the development of AI the "question of consciousness" will
come to be seen as archaic, like "What is life?".
Actually, I think you may have a point.  The question of "what is
life" is really not a scientific question.  Yet, nevertheless, I am alive.

In the same way, the question of "what is consciousness" is not a
scientific question either.  And yet, I am conscious.  Consciousness exists.

Science is just not applicable to these questions, because these
questions have nothing to do with the core purposes of science:

"Instrumentalism is the view that a scientific theory is a useful
instrument in understanding the world. A concept or theory should be
evaluated by how effectively it explains and predicts phenomena, as
opposed to how accurately it describes objective reality."

So science is about formulating frameworks for understanding
observations in a way that allows for accurate prediction.  To ascribe
"metaphysical truth" to any of these frameworks is to take a leap of
faith *beyond* science.

Taking the view of "instrumentalism with a pinch of common sense",
there is no reason to believe that every question that can be asked
can be answered scientifically.  Is there?

Unless and until consciousness can be made useful for prediction, it
will remain invisible and irrelevant to science.


But I think it will be useful for prediction, as it is already useful in predicting what other people will do; what Dennett calls "the intentional stance". We already do it for machines: we know what their sensors detect, and so we attribute "awareness" of some signals to them. We anthropomorphize them, and this has some value in predicting their behavior. As AI becomes an engineering discipline, we will develop fine distinctions between different kinds of awareness and how they can be implemented. "Where is its consciousness?" will seem as archaic a question as looking at an automobile and asking "Where is its animation?"

As Bruno points out, one will never be able to prove, in the mathematical sense of proof, that a given entity is conscious. But as we do with other people, we will prove, in the scientific sense, that they are. In Bruno's logical hierarchy, as I understand it, there are not many different kinds of consciousness, only awareness and self-awareness. But I think there are other useful distinctions that Bruno would lump together as mere competences.


You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.