On 10/12/2018 9:06 PM, Philip Thrift wrote:
On Friday, October 12, 2018 at 4:09:03 PM UTC-5, Brent wrote:
On 10/10/2018 4:12 PM, Pierz wrote:
It's not intelligent behaviour. There are tons of things (human
artifacts that have been created to automate certain complex
input-output systems) that exhibit complex, intelligent-ish
behaviour that I seriously doubt have any more sentience than a
rock, though I'm open to the possibility of some sentience in
rocks. My "method of determining if something is conscious" is
the same as most people who don't believe their smart phones are
having experiences. It's being a biological organism with a
nervous system, though again, I'm agnostic on organisms like
trees. When /you're/ not being a philosopher I bet that's your
real criterion too! You're not worrying about killing your
smartphone when you trash it for the next model.
Of course this is based on a guess, as yours is. My lack of a
good theory of the relationship between matter and mind does not
force me into solipsism because the absence of a test proves
nothing about reality. Things are as they are. All people are
conscious, I assume. Probably all animals. Possibly plants and
rocks and stars and atoms, in some very different way from us.
Whatever way it is, it /is/ that way regardless of whether I can
devise a test for it, even in principle.
I generally agree. I like to resort to my intuition pump, the AI
Mars Rover, because I think the present Mars Rovers have a tiny
bit of intelligence and a corresponding tiny bit of
consciousness. Their intelligence is in their navigation,
deployment of instruments, self-monitoring, and reporting to JPL.
They make some decisions about these things, but they don't learn
from experience, so they're probably at the level of some insects or
spiders, except that they have more language with which they
communicate with JPL. But an AI Mars Rover that was designed to
learn from experience would, I think, be conscious to some degree.
This is because it would need to remember experiences and recall the
relevant ones when faced with unusual problems. Solving a problem by
using experience means having a self-model in a simulation, in order
to foresee the outcomes of different choices. I think that's the
essence of basic consciousness: learning from experience and
self-modeling as part of decision-making.
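The loop Brent describes could be sketched in a toy program. Everything here (the Rover class, the similarity test, the scoring) is invented for illustration, not a real rover architecture: the agent stores incidents, recalls the relevant ones, and uses a crude self-model to foresee how each candidate action would turn out.

```python
# Toy sketch of "learning from experience + self-modeling in a
# simulation". All names and mechanisms are hypothetical.

from dataclasses import dataclass


@dataclass
class Incident:
    situation: str        # rough description of the circumstances
    action: str           # what the rover did
    outcome_score: float  # how well it turned out (higher is better)


class Rover:
    def __init__(self):
        self.memory = []  # list of Incident

    def remember(self, situation, action, outcome_score):
        self.memory.append(Incident(situation, action, outcome_score))

    def recall(self, situation):
        # Recall only incidents relevant to the current situation.
        # A real system would need a learned similarity measure;
        # exact string match stands in for that here.
        return [i for i in self.memory if i.situation == situation]

    def foresee(self, situation, action):
        # Self-model: predict the outcome of an action by averaging
        # the outcomes of similar remembered incidents.
        scores = [i.outcome_score for i in self.recall(situation)
                  if i.action == action]
        return sum(scores) / len(scores) if scores else 0.0

    def decide(self, situation, candidate_actions):
        # Simulate each choice against the self-model, pick the best.
        return max(candidate_actions,
                   key=lambda a: self.foresee(situation, a))


rover = Rover()
rover.remember("soft sand", "drive straight", -1.0)  # got stuck
rover.remember("soft sand", "detour around", +1.0)   # succeeded
print(rover.decide("soft sand", ["drive straight", "detour around"]))
# prints "detour around"
```

The point of the sketch is only that "recall relevant experience, simulate, choose" is an ordinary, implementable loop; whether running it amounts to any degree of consciousness is exactly what the thread is debating.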
Brent
Still, I don't think purely /*informational*/ processing, which
includes intentional agent programming (learning from experience,
self-modeling), captures all true /*experiential*/ processing
(phenomenal consciousness).
https://plato.stanford.edu/entries/consciousness-intentionality/
/To say you are in a state that is (phenomenally) conscious is to
say—on a certain understanding of these terms—that you have an
experience, or a state there is something it’s like for you to be in.
Feeling pain or dizziness, appearances of color or shape, and episodic
thought are some widely accepted examples. Intentionality, on the
other hand, has to do with the directedness, aboutness, or reference
of mental states—the fact that, for example, you think of or about
something. Intentionality includes, and is sometimes seen as
equivalent to, what is called “mental representation”./
If an AIMR runs a simulation in which it models itself in order to
decide on a course of action, isn't that "directedness",
"intentionality", and "mental representation"?
When it stores incidents in memory to learn from, it must also
associate some valuation with the outcome of each incident. Isn't
that a "feeling" about it? If it lost a wheel, wouldn't it feel
something analogous to "pain"? Damasio says that human emotion is
just the perception of internal states, e.g. hormones.
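That valuation step could also be sketched. In the spirit of Damasio's point as Brent applies it, the "feeling" is just a signed score computed from perceived internal states; the sensor names and weights below are entirely invented for illustration.

```python
# Hypothetical sketch: valence as perception of internal state.
# Losing a wheel dominates the score, the rough analogue of "pain".

def valence(internal_state):
    # Reduce perceived internal states to a single signed valuation.
    score = 0.0
    score -= 10.0 * internal_state.get("wheels_lost", 0)      # "pain"
    score -= 2.0 * internal_state.get("battery_deficit", 0.0)  # "fatigue"
    score += 1.0 * internal_state.get("goals_reached", 0)      # "reward"
    return score


# A healthy rover that reached a waypoint gets a positive valuation...
print(valence({"wheels_lost": 0, "battery_deficit": 0.1,
               "goals_reached": 1}))
# ...while a rover that lost a wheel gets a strongly negative one.
print(valence({"wheels_lost": 1, "battery_deficit": 0.5,
               "goals_reached": 0}))
```

Stored alongside each incident, such a score is exactly the "valuation of the outcome" in the question above; whether having and using it constitutes feeling anything is, again, the open question.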
Brent
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.