John,
There are aspects of the intelligent understanding of the world
(the universe of things and ideas) that can be modelled and simulated.
I think this is computable in an AI program, except that the problem
of complexity would slow the modelling down so much that it would not
be effective enough (at this time) to seem very intelligent. Not only
do we need to be able to abstract and communicate concepts, we also
need to be able to integrate, synthesize, and test those abstractions.
Since these tests would not be exact, complete, or even very thorough
(for the most part), our computer program would have to be able to
retain variations on these concepts along with some knowledge of how
the different variations might be used. If it weren't for the problem
of complexity, the programmer could begin testing ways to do this
effectively.
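
A minimal sketch of what "retaining variations on concepts" might look
like, in case it helps make the idea concrete. Everything here (the
class name, the contexts, the overlap heuristic) is hypothetical and
just for illustration, not a proposal for a real system:

```python
from collections import defaultdict

class ConceptStore:
    """Hypothetical store of concept variations and their usage contexts."""

    def __init__(self):
        # concept -> list of (variant, set of contexts it was useful in)
        self.variants = defaultdict(list)

    def add_variant(self, concept, variant, contexts):
        # Each variant is retained together with a rough, inexact record
        # of the situations in which it applied.
        self.variants[concept].append((variant, set(contexts)))

    def best_variant(self, concept, context):
        # Pick the stored variant whose recorded contexts overlap the
        # current situation most; None if nothing has been retained yet.
        candidates = self.variants.get(concept, [])
        if not candidates:
            return None
        return max(candidates, key=lambda vc: len(vc[1] & set(context)))[0]

store = ConceptStore()
store.add_variant("support", "physical prop", ["furniture", "bridge"])
store.add_variant("support", "emotional help", ["friendship", "grief"])
print(store.best_variant("support", ["grief"]))  # prints "emotional help"
```

The point of the sketch is only that the bookkeeping itself is easy;
the complexity problem is in doing this at scale, with tests that are
inexact and incomplete.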

But I still disagree with what you are saying. An artificial agent
will not be able to experience qualia because it will lack that
mysterious aspect of intelligence that allows us to sense certain
things the way we do. It would be able to distinguish blue from red
(as long as there was some kind of light to see the colors by, of
course), but it would not experience haptic sensory input in a way
that was fundamentally different from the way it experiences red and
blue. For a computer it is all just data. The data may be presented in
different formats, but there are no qualia (as I understand the
concept). It may be programmed to simulate communication as if it were
dealing with qualia, but that would be pure simulation.

From Wikipedia: "
In philosophy and certain models of psychology, qualia (/ˈkwɑːliə/ or
/ˈkweɪliə/; singular form: quale) are defined to be individual
instances of subjective, conscious experience. The term qualia derives
from the Latin neuter plural form (qualia) of the Latin adjective
quālis (Latin pronunciation: [ˈkʷaːlɪs]) meaning "of what sort" or "of
what kind" in a specific instance like "what it is like to taste a
specific apple, this particular apple now".
Examples of qualia include the perceived sensation of pain of a
headache, the taste of wine, as well as the redness of an evening sky.
As qualitative characters of sensation, qualia stand in contrast to
"propositional attitudes", where the focus is on beliefs about
experience rather than what it is directly like to be experiencing.
Philosopher and cognitive scientist Daniel Dennett once suggested that
qualia was "an unfamiliar term for something that could not be more
familiar to each of us: the ways things seem to us"."

So qualia stand in contrast to propositional attitudes, which are
beliefs about experience, but as I understand it they also stand in
contrast to 'data' that may be transmitted by sensors (or to the
product of the computational analysis of that data).

Can a computer program which feels no pain, pleasure, desire, joy,
curiosity, or relief learn through conditioning? It can learn through
an artificial kind of conditioning, but since higher intelligence
requires the ability to think independently, the simulation of
conditioning will not hold the same qualia of experience that
conditioning holds for us. I can predict that this is not as
interesting to me as it will be for you. But the question that
interests me more is whether this lack of the animal qualia of
experience might make true intelligence impossible for AI. An AI
program is going to lack a massive range of experience: it might talk
about that experience but never have the understanding that comes from
it. I do not think that means AGI is impossible, except from a
technical standpoint. It is an entire range of general intelligence
that a program might know something about but never experience, and it
is an important part of animal experience.
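
For what it's worth, the "artificial conditioning" I mean is routine
in machine learning. A toy sketch (the two actions and their reward
values are made up for illustration) shows a program being conditioned
by reward while feeling nothing at all:

```python
actions = ["a", "b"]
reward = {"a": 0.0, "b": 1.0}        # the world "rewards" action "b"
value = {a: 0.0 for a in actions}    # learned value of each action

for step in range(100):
    if step % 10 == 0:
        act = actions[(step // 10) % 2]  # occasionally try each action
    else:
        act = max(value, key=value.get)  # otherwise pick the best-valued one
    # Nudge the estimate toward the reward received (learning rate 0.1).
    value[act] += 0.1 * (reward[act] - value[act])

# The program is now "conditioned" to prefer "b" -- pure number-pushing,
# with no felt pleasure or relief anywhere in the loop.
print(max(value, key=value.get))  # prints "b"
```

So the behavioral shaping is computable, as Matt says; my claim is
only that nothing in that loop has the qualia that conditioning has
for an animal.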

Jim Bromer

On Mon, Sep 24, 2018 at 7:20 AM John Rose <[email protected]> wrote:
> > -----Original Message-----
> > From: Matt Mahoney via AGI <[email protected]>
> >
> > I was applying John's definition of qualia, not agreeing with it. My 
> > definition is
> > qualia is what perception feels like. Perception and feelings are both
> > computable. But the feelings condition you to believing there is something
> > magical and mysterious about it.
> >
> 
> And what I'm saying is that the communication of qualia is important for 
> general intelligence in a system of agents. And how do agents interpret the 
> signals, process and recommunicate them.
> 
> But without fully understanding qualia since they're intimately intrinsic to 
> agent experience we can still explore their properties by answering questions 
> such as: What is an expression of the information distance between qualia of 
> differing agents with same stimuli? How do qualia map to modeled environment? 
> How do they change over time in a system of learning agents? What is the 
> compressional loss into communication? And how do multi-agent models change 
> over time from communicated and decompressed qualia.
> 
> And what is the topology of qualia variance within an agent related to the 
> complexity classes of environmental strategy?
> 
> And move on to questions such as can there be enhancements to agent language 
> to accelerate learning in a simulated system? And enhancements to agent 
> structure?
> 
> John
> 

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T9c94dabb0436859d-Mdd614bab9f66bba7cccf8645
Delivery options: https://agi.topicbox.com/groups/agi/subscription
