[EMAIL PROTECTED] wrote:
Jeanne Houston wrote:

I am a layperson who reads these discussions out of avid interest, and I hope that someone will answer a question that I would like to ask in order to enhance my own understanding. There is an emphasis on AI running through these discussions, yet you seem to delve into very philosophical questions. Are the philosophical discussions applicable to the development of AI (i.e., trying to grasp all aspects of the mind of man if you are trying to develop a true copy), or are they only interesting diversions that pop up from time to time? My thanks to anyone who wishes to respond.

Jeanne Houston

My answer is probably too short, but I want to take the risk of being misinterpreted in order to be plain:

We can't JUST DO things (like AI). Whenever we DO things, we are THINKING ABOUT them. I'd venture to say that HOW WE THINK ABOUT THINGS (e.g. philosophy, epistemology, etc.) is even MORE important than DOING THINGS (engineering, sales, etc.). That is one way of looking at the advantage that we humans have over machines. We have the capability not just to do things, but to know why we are doing them. This runs counter to the whole PHILOSOPHY (mind you) of modern science, that we are simply machines, and that there is no WHY. This modern philosophy, if taken to its extreme, is the death of humanness.

Tom Caylor

I think you've got it the wrong way 'round. The view of modern science is that we are machines and machines can do philosophy and know they are doing it and can have reasons why. It is the death of human hubris - which may eventually succumb to the wounds it has received since Copernicus.

Brent Meeker
