On 12 Feb 2015, at 11:11, LizR wrote:
On 12 February 2015 at 22:50, Bruno Marchal <[email protected]> wrote:
Emotion provides an efficacious way to retrieve self-satisfaction,
by bypassing reason, which would be too slow.
We are "programmed" (by evolution, perhaps) to dislike anything
threatening our satisfaction. That is why a burn is painful, and a
good meal is pleasant. So we are driven by good and bad. We tend to
seek the good, and to stay away from the bad. Those are the basic
emotions at the heart of all our behaviors. Now, we have evolved into
very complex relationships with nature and with ourselves, and the
emotions can become complex and conflictual, notably with conflicts
between short-term goals (I want the pleasure of smoking a cigarette)
and long-term goals (I don't want to die from a painful disease
related to the cigarette).
If the Mars Rover has enough self-reference, a conflict between
different subgoals can happen, like: I want to go there quickly, but I
hesitate to take the shorter path as it is near a dangerous
crevasse. In such a case, it might behave (at least) as if it has
emotions: hesitation, failed attempts in quick succession, etc.
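That kind of subgoal conflict can be sketched in code. The following is only an illustration of the idea, not anything from an actual rover: all names (`path_utility`, `choose_path`, the weights, the two paths) are invented, and the "hesitation" is just the chosen path flipping when the competing weights are nearly balanced.

```python
# Hypothetical sketch: two competing subgoals -- speed vs. safety --
# scored by a weighted utility. Near the balance point, small weight
# changes flip the decision, the outward appearance of hesitation.

def path_utility(path, speed_weight, safety_weight):
    """Score a path: penalize travel time and crevasse risk."""
    return -speed_weight * path["time"] - safety_weight * path["risk"]

def choose_path(paths, speed_weight=1.0, safety_weight=1.0):
    """Pick the path with the highest utility under the given weights."""
    return max(paths, key=lambda p: path_utility(p, speed_weight, safety_weight))

paths = [
    {"name": "short-but-near-crevasse", "time": 10, "risk": 8},
    {"name": "long-but-safe", "time": 25, "risk": 1},
]

# Slightly different emphasis on safety flips the choice:
print(choose_path(paths, speed_weight=1.0, safety_weight=2.0)["name"])
print(choose_path(paths, speed_weight=1.0, safety_weight=2.3)["name"])
```

With a safety weight of 2.0 the short risky path still wins; at 2.3 the safe path wins. A system whose weights drift with context would alternate between the two, which from outside looks like hesitation.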
Emotions are daughters of the qualia of pain and pleasure, related to
self-satisfaction and survival. You will pull your hand out of the
fire more quickly than after reasoning that it could harm you, but
with a lesson well memorized, like: fire hurts, do not do that
again, ...
It sounds to me as though in order to be motivated to act, you need
some sort of stimulus (eg pain, pleasure) and you would think that
therefore you need to be aware of that stimulus.
I agree. But you need more than just awareness of the stimulus: you
need to interpret it as pleasant and/or unpleasant, especially for the
long term. For simple direct avoidance, reflexes are enough. For the
long term, you might have a conflict with the pleasure in the short
term (like with smoking a cigarette, ...).
But I guess some simple systems do this by reflex (insects, rovers,
pulling hand from fire before the pain registers consciously).
The pain coming after is an investment in the future, which is
plausibly quite useful for the perpetuation of complex social species.
This does not explain entirely why pain is felt as painful, though.
So maybe you don't need to be conscious to be motivated, in a simple
sense.
OK. Our basic motivations are instincts. We are self-satisfied, at the
basic simple level, when a number of beliefs/goals are satisfied, like
"if hungry: hunt and feed", "if threatened: fight or run", "if
thirsty: drink", "if bored: do something", etc.
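Those condition-action pairs have the shape of a simple rule table. As a toy illustration only (the rule list and the `react` function are invented for this sketch, not a claim about how instincts are actually implemented):

```python
# Hypothetical sketch: basic motivations as a condition-action table.
# A "drive" that is active triggers its associated behavior.

RULES = [
    ("hungry", "hunt and feed"),
    ("threatened", "fight or run"),
    ("thirsty", "drink"),
    ("bored", "do something"),
]

def react(state):
    """Return the actions triggered by the currently active drives."""
    return [action for condition, action in RULES if state.get(condition)]

print(react({"hungry": True, "bored": True}))
```

Such a table reacts but does not anticipate; the next point is that even satisfying these rules presupposes an implicit expectation that acting will work.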
This implicitly implies an anticipation that you can do something to
satisfy the goal, which is an implicit belief that there is a reality,
and that makes it possibly selected from the "universal consciousness"
of the (Church-Turing) universal machine.
Self-consciousness is when this becomes explicit, through more
powerful cognitive abilities.
I think you get it when you add the induction axioms, which give the
machine the ability to justify generalisations, or proofs of
universal statements (like: for all n and m, n+m = m+n).
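That example of a universal statement is exactly what induction buys the machine. For instance, in Lean 4 the commutativity of addition on the natural numbers is proved by induction on one of the arguments (this is just the standard textbook proof, written out as a sketch; the library already provides it as Nat.add_comm):

```lean
-- Commutativity of addition, proved by induction on n.
-- Without an induction principle, no finite set of instances
-- (0+1=1+0, 2+3=3+2, ...) would justify the universal claim.
theorem my_add_comm (n m : Nat) : n + m = m + n := by
  induction n with
  | zero => simp            -- base case: 0 + m = m + 0
  | succ k ih =>            -- inductive step, using hypothesis ih : k + m = m + k
    rw [Nat.succ_add, ih, Nat.add_succ]
```

The point is structural: it is the induction scheme, not any finite list of verified instances, that licenses the jump to "for all n and m".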
But the induction axioms are limitation axioms. In a sense, they are
already delusional, and that is why I don't put them in the ontology.
Then, in that ontology, we can prove the existence of machines which
do those generalizations, and their many-histories can be
particularized and guide the universal consciousness of the universal
person. It is a concretization.
I see that universal person more as an abstract universal baby,
virtuous by innocence, than as an "accomplished God".
An accomplished god would be a maximally correct extension of such a
baby, but it is an open, difficult question to me whether that is still
a person. Then "maximal" can be extended to analytical truth, but
then in many different ways.
I guess I will say more on the induction axioms in a reply to Samia.
Bruno
--
You received this message because you are subscribed to the Google
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it,
send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
http://iridia.ulb.ac.be/~marchal/