According to the dictionary of Philosophy of Mind, self-awareness is synonymous with consciousness.
> > Would an artificial self-aware entity emerging from human technology
> > represent "mind"?
> (depends on YOUR definition of mind, of course) - but....
> self-aware? does that mean that if the program calls for some
> the computer will say "I'd rather play some Bach music now" and does so?
I'm not convinced that being conscious necessarily makes you contrarian as
well.
But one can imagine a "program" that monitors various inputs, external and
internal, that are reinforcing to varying degrees and prioritized via some
"value" hierarchy, most likely pre-supplied by the "masters". Of course,
where I'm going with this is "training the baby", and, of course, such
projects are currently underway at various locations, with varying
results(!). But it's early days. In any case, as I've alluded to in prior
posts, something that could "spot the dot" (manage to convince a sufficient
set of humans of its self-awareness: see below) may emerge unannounced from
the collective tinkering underway. Only the Shadow knows.
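Just to make the idea concrete, here's a toy sketch of such a monitor. All the input names, weights, and the hierarchy itself are illustrative assumptions of mine, not any real project's design:

```python
# Toy monitor that ranks reinforcing inputs via a pre-supplied
# "value" hierarchy. Names and weights are made up for illustration.

# The "masters'" pre-supplied hierarchy: higher weight = higher priority.
VALUE_HIERARCHY = {
    "low_battery": 3.0,      # internal input
    "human_request": 2.0,    # external input
    "idle_curiosity": 0.5,   # internal input
}

def prioritize(inputs):
    """Rank observed inputs (name -> intensity in [0, 1]) by weighted value."""
    scored = {name: VALUE_HIERARCHY.get(name, 0.0) * intensity
              for name, intensity in inputs.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Battery is fine, a human is asking, curiosity is running high:
observed = {"low_battery": 0.1, "human_request": 0.8, "idle_curiosity": 1.0}
print(prioritize(observed))  # human_request outranks the rest
```

"Training the baby" would then amount to letting experience rewrite the weights, rather than leaving them fixed by the masters.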
> or would you assume that (hard) AI would have to program EVERYTHING that a
> human (call them normal or derailed) might react with? We are back to the
> infinite-time computer with unlimited memory.
> My limited little 'mind' does not go that far.
This would appear to assume that self-awareness equates to being human (as
in Homo sapiens?); I don't see that as being the case. I certainly don't
believe the infinite time/memory device is required; maybe a Linux Beowulf
cluster running on some G5s?
Ultimately, I believe I'm self-aware (although decidedly not
self-actualized); I assume you are but can't prove it beyond doubt. For that
matter, given our mode of communication, you might indeed be a machine and,
in that case, just passing the Turing test. If you were here in person I
might sneak a red dot onto your forehead (during a blink?), hold up a
mirror, and watch your reaction (this apparently passes muster regarding
self-awareness in our primate relatives). If such a procedure provides
acceptable evidence of self-awareness for a chimp, then it should do so for
an AI as well, I should think.
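The red-dot criterion boils down to a simple check, sketched below with made-up observation fields (the real test is behavioral, of course; this just states the pass condition):

```python
# Toy statement of the mirror ("spot the dot") test described above.
# The two boolean observations are illustrative assumptions.

def passes_mirror_test(touched_own_forehead: bool,
                       touched_mirror_image: bool) -> bool:
    """A subject that investigates the dot on its *own* forehead,
    rather than pawing at the reflection, shows self-recognition."""
    return touched_own_forehead and not touched_mirror_image

# A chimp (or an AI's avatar) reaching for its own forehead:
print(passes_mirror_test(touched_own_forehead=True,
                         touched_mirror_image=False))  # True
```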