At the very least could it be said the AI is conscious of the question?
Would this awareness of even a single piece of information be sufficient to
make it conscious?
Jason
On 6/2/07, Hal Finney [EMAIL PROTECTED] wrote:
Various projects exist today aiming at building a true Artificial
Consciousness is a cognitive system capable of reflecting on other
cognitive systems, by enabling switching and integration between
differing representations of knowledge in different domains. It's a
higher-level summary of knowledge in which there is a degree of coarse
graining sufficient to
On 03/06/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
The third type of consciousness mentioned above is synonymous with
'reflective intelligence'. That is, any system successfully engaged
in reflective decision theory would automatically be conscious.
Incidentally, such a system would also be
On Jun 3, 9:20 pm, Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 03/06/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
The third type of consciousness mentioned above is synonymous with
'reflective intelligence'. That is, any system successfully engaged
in reflective decision theory would
On 03/06/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
How do you derive (a) ethics and (b) human-friendly ethics from reflective
intelligence? I don't see why an AI should decide to destroy the world,
save the world, or do anything at all to the world, unless it started
off
with axioms
What do others on this list think about Max Tegmark's definition of
consciousness:
I believe that consciousness is, essentially, the way information
feels when being processed. Since matter can be arranged to process
information in numerous ways of vastly varying complexity, this
implies a rich
Part of what I wanted to get at in my thought experiment is the
bafflement and confusion an AI should feel when exposed to human ideas
about consciousness. Various people here have proffered their own
ideas, and we might assume that the AI would read these suggestions,
along with many other
Why would we have a word that intuitively everybody can grasp for himself
without it being linked to a real phenomenon?
Not only do we have one word, we have plenty of words that try to grasp the
idea. Denying consciousness phenomena like this is playing a vocabulary
game... not denying the
Hal Finney wrote:
Part of what I wanted to get at in my thought experiment is the
bafflement and confusion an AI should feel when exposed to human ideas
about consciousness. Various people here have proffered their own
ideas, and we might assume that the AI would read these suggestions,
Sorry about the previous post... I sent it from the Google
list and something weird happened.
---
Hi folks,
Re: How would a computer know if it were conscious?
Easy.
The computer would be able to go head to head with a human in a competition.
The
I don't see that you've made your point. If you achieve this, you have
created an artificial creative process, a sort of holy grail of
AI/ALife. However, it seems far from obvious that consciousness should
be necessary. Biological evolution is widely considered to be creative
(even exponentially
If it feels bafflement and confusion, then surely it is conscious :)
An AI that takes information from books might experience qualia similar to
those we can experience. The AI will be programmed to do certain tasks, and it
must thus have a notion of whether what it is doing is OK, not OK, or completely wrong.