Mark,
>> In computer systems, searches are much cleaner so the backup search
>> functionality typically doesn't make sense.
>But I entirely disagree... searches are not simple enough that you
>can count on getting them right because of all of the following:
>1. non-optimally specified goals
IMO, AGI should focus on
a) figuring out how to reach given goals, instead of
b) trying to guess whether users want something other than
what they actually asked for.
Option b)
- could be specifically requested, but then it becomes a);
- could significantly impact performance;
- would (in order to work well) require the AGI to understand the
user's preferences really, really well, possibly even better than the
user himself. Going with some very general assumptions might not work
well because people prefer different things. E.g. some like the idea
of being converted to an extremely happy brain in a [safe] "jar";
others think it's madness. Some would exchange "standard love" for a
button on their head which, if pressed, would give them all kinds of
love-related feelings (possibly many times stronger than the best
ones they ever had); others wouldn't want such optimization.
>(if not un-intentionally or intentionally specified malevolent ones)
Except for some top-level users, [sub-]goal restrictions of course
apply, but it's problematic. What is unsafe to show sometimes depends
on the level of detail (saying "make a bomb" is not the same as
saying "use this and that in such and such a way to make a bomb").
Figuring out the safe level of detail is not always easy, and another
problem is that smart users could break malevolent goals into
separate tasks so that [at least first-generation] AGIs wouldn't be
able to detect them even when following "your" emotion-related rules.
The users could be using multiple accounts, so even if all those
tasks are given to a single instance of an AGI, it might not be able
to notice the master plan. So is it dangerous? Sure it is. But do we
want to stop making cars because car accidents keep killing many
people? Of course not. AGI is a potentially very powerful tool, but
what we do with it is up to us.
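The task-splitting worry above can be made concrete with a toy
sketch: individually innocuous requests can jointly complete a
restricted plan, so any check has to aggregate across tasks and
accounts. (The recipe set, step names, and function below are all
made up for illustration; nothing here comes from an actual system.)

```python
# Hypothetical illustration only: a restricted "recipe" is flagged when
# its steps are jointly completed, regardless of which account asked.
RESTRICTED_RECIPES = {
    "bomb": {"acquire_oxidizer", "acquire_fuel", "build_timer"},
}

def flags_raised(task_log):
    """task_log: iterable of (user, requested_step) pairs. Pool all
    requested steps across users and report any restricted recipe
    whose steps are now fully covered."""
    steps = {step for _user, step in task_log}
    return [name for name, recipe in RESTRICTED_RECIPES.items()
            if recipe <= steps]  # recipe is a subset of requested steps

log = [("alice", "acquire_oxidizer"),
       ("bob", "acquire_fuel"),   # different account, same master plan
       ("bob", "build_timer")]
print(flags_raised(log))  # -> ['bomb']
```

Per-request checks would see nothing wrong with any single line of
the log; only the pooled view completes the recipe, which is exactly
why multiple accounts defeat per-task filtering.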
>2. non-optimally stored and integrated knowledge
Then you want to fix the cause by optimizing & integrating instead of
"solving" symptoms by adding backup searches.
>3. bad or insufficient knowledge
Can't prevent it... GIGO (garbage in, garbage out)...
>4. search algorithms that break in unanticipated ways in unanticipated places
The fact is that it's nearly impossible to develop a large bug-free
system. And as Brian Kernighan put it: "Debugging is twice as hard as
writing the code in the first place. Therefore, if you write the code
as cleverly as possible, you are, by definition, not smart enough to
debug it."
But again, you really want to fix the cause, not the symptoms.
>Are you really sure you wish to rest the fate of the world on it?
No :). AGI(s) suggest solutions & people decide what to do.
>Limited and uncertain data will *always* cause integrity holes and
>conflicts in any system. Further, limitations on computation power
>will cause even more since it simply won't be possible to even
>finish a small percentage of the clean-up that is possible
>algorithmically.
The system may have many users who will be evaluating the solutions
they requested. That will help with the clean-up, plus a lot can IMO
be done to support data-conflict auto-detection.
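As a minimal sketch of what such data-conflict auto-detection could
look like (the quadruple fact format and the example facts are
invented purely for illustration, not taken from any actual system):

```python
from collections import defaultdict

def find_conflicts(facts):
    """facts: iterable of (subject, attribute, value, source) tuples.
    Group by (subject, attribute) and flag any group whose values
    disagree, so the system can raise an early clarification request."""
    grouped = defaultdict(list)
    for subject, attribute, value, source in facts:
        grouped[(subject, attribute)].append((value, source))
    return {key: entries for key, entries in grouped.items()
            if len({value for value, _ in entries}) > 1}

facts = [
    ("Pluto", "is_planet", True,  "textbook_1998"),
    ("Pluto", "is_planet", False, "iau_2006"),
    ("Mars",  "is_planet", True,  "textbook_1998"),
]
print(find_conflicts(facts))  # only the Pluto entries conflict
```

Unlike a human, the system can run this sweep over its whole KB
systematically and surface every disagreement for review.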
>> AGI can get much better than humans (early detection/clarification requests)
>Not really. An AGI is going to be computation-bound. I think that
>you're going to see much the same phenomena in AGIs as humans
>... it's going to be a limited entity in a messy world.
A limited entity in a messy world - I agree with that, but the AGI's
advantage is that it can dig through (and keep fixing) its data very
systematically. We cannot really do that. Our experience is charged
with feelings that work as indexes, optimizing access to the info
learned in similar moods = good for performance, but sometimes sort
of forcing us to miss important links between concepts. Plus our
active memory is too limited, our long-term memory doesn't work very
well, and we deal with various (often emotion-related) attention
issues.
>Having emotions does *NOT* make it any more likely that the AGI will
>not stick with your commands... quite the contrary
As I think about the basic emotion list by Paul Ekman (anger, fear,
sadness, happiness, and disgust), I think it could. And I,
personally, would prefer to deal with AGIs that do not express those
emotions. I would rather it sometimes said something like "you don't
have sufficient rights to get the requested info". Security is
definitely important, but I still have lots of work to do on getting
the stuff we want to later restrict, so the restriction algorithms
are not the top priority for me at this point.
>> You review solutions, accept it if you like it. If you don't then
>> you update rules (and/or modify KB in other ways) preventing
>> unwanted and let AGI to re-think it.
>what happens when you don't have time
Well, if a user wants a good solution but gets a "bad" one, it's in
his interest to say what's wrong with it.
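The review loop quoted above (accept the solution, or add a rule and
let the AGI re-think) can be sketched as a plain control loop. The
function names and the toy stand-in "AGI" below are made up for
illustration only:

```python
def review_loop(agi_solve, acceptable, rules, max_rounds=5):
    """Human-in-the-loop control: the AGI proposes, the user reviews;
    a rejection adds a rule and the AGI re-thinks under the new rules."""
    for _ in range(max_rounds):
        solution = agi_solve(rules)
        verdict = acceptable(solution)
        if verdict is True:
            return solution
        rules = rules | {verdict}  # the verdict names the unwanted behavior
    return None                    # no accepted solution within the budget

# Toy stand-ins: the "AGI" picks the first action not forbidden by a rule.
def agi_solve(rules):
    for action in ["bribe", "negotiate", "wait"]:
        if action not in rules:
            return action

def acceptable(solution):
    # Reject a solution by returning it, so it becomes a new rule.
    return True if solution == "negotiate" else solution

print(review_loop(agi_solve, acceptable, rules=set()))  # -> negotiate
```

The first proposal ("bribe") is rejected, becomes a rule, and the
re-think then yields an acceptable solution.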
As I mentioned before, my system learns from stories. It's very easy
to exclude a particular problem-causing story from the solution
search on the user (or system) level. The stories are also linked to
domains, and users can (but don't have to) pick in which domains the
system should look for solutions. E.g. if you include the "fairy
tale" domain, the generated solution is likely to contain some magic.
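A rough sketch of the story exclusion and domain filtering just
described (the story format and field names are assumptions for
illustration, not the actual system's representation):

```python
def search_solutions(stories, domains=None, excluded=None):
    """Return candidate solution stories, optionally restricted to the
    user-selected domains and minus stories excluded (on the user or
    system level) as problem-causing."""
    excluded = excluded or set()
    results = []
    for story in stories:
        if story["name"] in excluded:
            continue                          # user/system-level exclusion
        if domains and not (story["domains"] & domains):
            continue                          # outside the selected domains
        results.append(story["name"])
    return results

stories = [
    {"name": "cinderella",   "domains": {"fairy tale"}},
    {"name": "startup_exit", "domains": {"business"}},
    {"name": "midas_touch",  "domains": {"fairy tale", "business"}},
]
print(search_solutions(stories, domains={"business"},
                       excluded={"midas_touch"}))  # -> ['startup_exit']
```

With no domain filter, everything not excluded is searched; include
the "fairy tale" domain and the magic-containing stories come back.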
>or the AI gets too smart for you
A good thing about a well-designed AGI is that it can keep explaining
until you get the important points. If you cannot grasp it, let the
AGI figure out how to improve you ;-)
>or someone else gets ahold of it and modifies it in an
>unsafe or even malevolent way?
I'm sure there will be attempts to hack powerful AGIs... When someone
really gets into the system, it doesn't matter whether you
implemented "emotions" or whatever... the guy can do what he wants.
But you can make the system very hard to hack.
>Emotions/feelings *are* effectively "a bunch of rules".
Then I would not call them emotions when talking about AGI.
>But they are very simplistic, low-level rules that are given
>immediate sway over much higher levels of the system and they are
>generally not built upon in a logical fashion before doing so.
Everything should IMO be done in a logical fashion so that the AGI
can always explain its solutions well.
>As such, they are "safer" in one sense because they cannot be
>co-opted by bad logic -- and less safe because they are so
>simplistic that they could be fooled by complexity.
Those restrictions are problematic as I mentioned above.
>Most human beings can talk themselves (logically) into believing
>that killing a human is OK or even preferable in far more
>circumstances than they can force their emotions to go along with it.
I see people having more luck with logic than with emotion-based
decisions. We tend to see less when getting emotional.
>I think that this is a *HUGE* indicator of how we should think when
>we are considering building something as dangerous as an entity that
>will eventually be more powerful than us.
A more powerful problem solver - sure.
The ultimate decision maker - I would not vote for that.
Sorry it took me a while to get back to you, but (even though I don't
post to this AGI list much) I felt guilty about too much AGI talk and
not enough AGI work, so I had to do something about it. :)
Regards,
Jiri Jelinek
On 5/3/07, Mark Waser <[EMAIL PROTECTED]> wrote:
Hi Jiri,
I think that we've basically gotten down to the core of where we differ
. . . .
>> It's basically just a non-trivial search function.
Yes, it's a non-trivial search function.
>> In computer systems, searches are much cleaner so the backup search
functionality typically doesn't make sense.
But I entirely disagree with this statement. I want to really, really
stress that I believe that the searches are not simple enough that you can
count on getting them right because of all of the following:
non-optimally specified goals (if not un-intentionally or intentionally
specified malevolent ones)
non-optimally stored and integrated knowledge
bad or insufficient knowledge
search algorithms that break in unanticipated ways in unanticipated places
>> Besides that, maintaining "many back-up systems" is a pain.
Yup, insurance is a pain -- but don't leave home without it.
>> It's easier to tweak single solution-search fn into perfection.
Easier? Undoubtedly. Guaranteed possible? I doubt it. Guaranteed success
on the first try? Are you really sure you wish to rest the fate of the
world on it?
>> You need to distinguish between:
>> a) internal conflicts (that's what I was referring to)
>> b) internal vs external conflicts (limited/invalid knowledge issues)
Oh. Trust me. I do make the distinction. What you may not realize or
agree with; however, is that internal conflicts are not only caused by
emotions. Limited and uncertain data will *always* cause integrity holes
and conflicts in any system. Further, limitations on computation power will
cause even more since it simply won't be possible to even finish a small
percentage of the clean-up that is possible algorithmically.
>> For a) (at least), AGI can get much better than humans (early
detection/clarification requests, ..).
Not really. An AGI is going to be computation-bound. I think that you're
going to see much the same phenomena in AGIs as humans (i.e. it goes to use
some information and suddenly finds that it's got two different values based
upon how it's computed or what data sources it uses -- or worse, it doesn't
recognize that it has a conflict). The AGI is not going to be infinitely
smart in a pretty perfectly sensed world. Like I said, it's going to be a
limited entity in a messy world.
>> You just give it rules and it will stick with it (= easier than
controlling humans).
If your rules are correctly specified to the extent of handling all possible
solutions and generalize without any unexpected behavior AND the AGI always
correctly recognizes the situation . . . .
The AGI won't deliberately have goals that conflict yours (unlike humans)
but there are all sorts of ways that life can unexpectedly go awry.
Further, and very importantly to this debate -- Having emotions does *NOT*
make it any more likely that the AGI will not stick with your commands
(quite the contrary -- although anthropomorphism may make it *seem*
otherwise).
>> You review solutions, accept it if you like it. If you don't then you
update rules (and/or modify KB in other ways) preventing unwanted and let
AGI to re-think it.
OK. And what happens when you don't have time or the AI gets too smart for
you or someone else gets ahold of it and modifies it in an unsafe or even
malevolent way? When you're talking about one of the biggest existential
threats to humankind -- safeguards are a pretty good idea (even if they are
expensive).
>> we can control it + we review solutions - if not entirely then just
important aspects of it (like politicians working with various domain
experts).
I hate to do it but I should point you at the Singularity Institute and
their views of how easy and catastrophic the creation and loss of control
over an Unfriendly AI would be
(http://www.singinst.org/upload/CFAI.html).
>> Can you give me an example showing how "feelings implemented without
emotional investments" prevent a particular [sub-]goal that cannot be as
effectively prevented by a bunch of rules?
Emotions/feelings *are* effectively "a bunch of rules". But they are very
simplistic, low-level rules that are given immediate sway over much higher
levels of the system and they are generally not built upon in a logical
fashion before doing so. As such, they are "safer" in one sense because
they cannot be co-opted by bad logic -- and less safe because they are so
simplistic that they could be fooled by complexity.
Several good examples were in the article on the sources of human morality
-- Most human beings can talk themselves (logically) into believing that
killing a human is OK or even preferable in far more circumstances than they
can force their emotions to go along with it. I think that this is a *HUGE*
indicator of how we should think when we are considering building something
as dangerous as an entity that will eventually be more powerful than us.
Mark
----- Original Message -----
From: Jiri Jelinek
To: [email protected]
Sent: Thursday, May 03, 2007 1:11 PM
Subject: Re: [agi] Pure reason is a disease.
Mark,
>relying on the fact that you expect to be 100% successful initially and
therefore don't put as many back-up systems into place as possible is really
foolish and dangerous.
It's basically just a non-trivial search function. In human brain, searches
are dirty so back-up searches make sense. In computer systems, searches are
much cleaner so the backup search functionality typically doesn't make
sense. Besides that, maintaining "many back-up systems" is a pain. It's
easier to tweak single solution-search fn into perfection. For the "backup",
I prefer external solution, like some sort of "AGI chat" protocol so
different AGI solutions (and/or instances of the same AGI) with unique KB
could argue about the best solution.
>> See, you had a conflict in your mind . . . . but I don't think it needs
to be that way for AGI.
>I strongly disagree. An AGI is always going to be dealing with incomplete
and conflicting information.. expect a messy, ugly system
You need to distinguish between:
a) internal conflicts (that's what I was referring to)
b) internal vs external conflicts (limited/invalid knowledge issues)
For a) (at least), AGI can get much better than humans (early
detection/clarification requests, ..).
>system that is not going to be 100% controllable but which needs to have a
100% GUARANTEE that it will not go outside certain limits. This is eminently
do-able I do believe -- but not by simply relying on logic to create a world
model that is good enough to prevent it.
You just give it rules and it will stick with it (= easier than controlling
humans). You review solutions, accept it if you like it. If you don't then
you update rules (and/or modify KB in other ways) preventing unwanted and
let AGI to re-think it.
>Having backup systems (particularly ones that perform critical tasks) seems
like eminently *good* design to me. I think that is actually the crux of
our debate. I believe that emotions are a necessary backup to prevent
catastrophe. You believe (if I understand correctly -- and please correct
me if I'm wrong) that backup is not necessary
see above
>and that having emotions is more likely to precipitate catastrophe.
yes
>Unfriendly in this context merely means possessing a goal inimical to human
goals.
we can control it + we review solutions - if not entirely then just
important aspects of it (like politicians working with various domain
experts).
>An AI without feelings can certainly have goals inimical to human goals and
therefore be unfriendly (just not be emotionally invested in it :-)
Can you give me an example showing how "feelings implemented without
emotional investments" prevent a particular [sub-]goal that cannot be as
effectively prevented by a bunch of rules?
>So what is the mechanism that prioritizes sub-goals?
I rather prioritize collections of sub-goals (=solutions) and that's based
on the complexity of meeting the total number of selected sub-goals (one
solution vs another) while following given rules.
>It clearly must discriminate between the candidates. Doesn't that lead to a
result that could be called a preference?
My system doesn't prefer. It just solves stories, generating actions for
subjects that appear in those stories based on their
preferences/goals/profiles (with restrictions I mentioned previously).
Sincerely,
Jiri Jelinek
On 5/3/07, Mark Waser <[EMAIL PROTECTED]> wrote:
>
>
> >> believing that you can stop all other sources of high level goals is .
. . . simply incorrect.
> > IMO depends on design and on the nature & number of users involved.
> :-) Obviously. But my point is that relying on the fact that you expect
to be 100% successful initially and therefore don't put as many back-up
systems into place as possible is really foolish and dangerous. I don't
believe that simply removing emotions makes it any more likely to stop all
other sources of high level goals. Further, I believe that adding emotions
*can* be effective in helping prevent unwanted high level goals.
>
> > See, you had a conflict in your mind . . . . but I don't think it needs
to be that way for AGI.
>
> I strongly disagree. An AGI is always going to be dealing with incomplete
and conflicting information -- and, even if not, the computation required to
learn (and remove all conflicting partial assumptions generated from
learning) will take vastly more time than you're ever likely to get. You
need to expect a messy, ugly system that is not going to be 100%
controllable but which needs to have a 100% GUARANTEE that it will not go
outside certain limits. This is eminently do-able I do believe -- but not
by simply relying on logic to create a world model that is good enough to
prevent it.
>
> > Paul Ekman's list of emotions: anger, fear, sadness, happiness, disgust
>
> So what is the emotion that would prevent you from murdering someone if
you absolutely knew that you could get away with it?
>
> >>human beings have two clear and distinct sources of "morality" -- both
logical and emotional
> > poor design from my perspective..
> Why? Having backup systems (particularly ones that perform critical
tasks) seems like eminently *good* design to me. I think that is actually
the crux of our debate. I believe that emotions are a necessary backup to
prevent catastrophe. You believe (if I understand correctly -- and please
correct me if I'm wrong) that backup is not necessary and that having
emotions is more likely to precipitate catastrophe.
>
> >>I would strongly argue that an intelligence with well-designed feelings
is far, far more likely to stay Friendly than an intelligence without
feelings
> > AI without feelings (unlike its user) cannot really get unfriendly.
> Friendly is a bad choice of terms since it normally denotes an
emotion-linked state. Unfriendly in this context merely means possessing a
goal inimical to human goals. An AI without feelings can certainly have
goals inimical to human goals and therefore be unfriendly (just not be
emotionally invested in it :-)
>
> >>how giving a goal of "avoid x" is truly *different* from discomfort
> > It's the "do" vs "NEED to do".
> > Discomfort requires an extra sensor supporting the ability to prefer on
its own.
> So what is the mechanism that prioritizes sub-goals? It clearly must
discriminate between the candidates. Doesn't that lead to a result that
could be called a preference?
>
> Mark
>
> ----- Original Message -----
>
> From: Jiri Jelinek
> To: [email protected]
>
> Sent: Thursday, May 03, 2007 1:57 AM
> Subject: Re: [agi] Pure reason is a disease.
>
> Mark,
>
> >logic, when it relies upon single chain reasoning is relatively fragile.
And when it rests upon bad assumptions, it can be just a roadmap to
disaster.
>
> It all improves with learning. In my design (not implemented yet), AGI
learns from stories and (assuming it learned enough) can complete incomplete
stories.
>
> e.g:
> Story name: $tory
> [1] Mark has $0.
> [2] ..[to be generated by AGI]..
> [3] Mark has $1M.
>
> As the number of learned/solved stories grows, better/different solutions
can be generated.
>
> >I believe that it is very possible (nay, very probable) for an
"Artificial Program Solver" to end up with a goal that was not intended by
you.
>
> For emotion/feeling enabled AGI - possibly.
> For feeling-free AGI - only if it's buggy.
>
> Distinguish:
> a) given goals (e.g the [3]) and
> b) generated sub-goals.
>
> In my system, there is an admin feature that can restrict both for
lower-level users. Besides that, to control b), I go with subject-level and
story-level user-controlled profiles (inheritance supported). For example,
if Mark is linked to a "Life lover" profile that includes the "Never Kill"
rule, the sub-goal queries just exclude the Kill action. Rule breaking would
just cause invalid solutions nobody is interested in. I'm simplifying a bit,
but, bottom line - both a) & b) can be controlled/restricted.
>
> >believing that you can stop all other sources of high level goals is . .
. . simply incorrect.
>
> IMO depends on design and on the nature & number of users involved.
>
> >Now, look at how I reacted to your initial e-mail. My logic said "Cool!
Let's go implement this." My intuition/emotions said "Wait a minute.
There's something wonky here. Even if I can't put my finger on it, maybe
we'd better hold up until we can investigate this further". Now -- which
way would you like your Jupiter brain to react?
>
> See, you had a conflict in your mind. Our brains are sort of messed up. In
a single brain, we have more/less independent lines of thinking on multiple
levels combined with various data-visibility and thought-line-compare
issues. I know, lots of data to process - especially for real-time solutions
- so maybe the mother nature had to sacrifice conflict-free design for
faster thinking (after all, it more-less works), but I don't think it needs
to be that way for AGI. If one line of thought is well done, you don't have
conflicts and don't need the other (if well done, those would return the
same results).
>
> >Richard Loosemoore has suggested on this list that Friendliness could
also be implemented as a large number of loose constraints.
>
> I agree with that
>
> >I view emotions as sort of operating this way and, in part, serving this
purpose.
>
> Paul Ekman's list of emotions:
>
> * anger
> * fear
> * sadness
> * happiness
> * disgust
>
> When it comes to those emotions, AGI IMO just should be able to
learn/understand related behavior of various creatures. Nothing more or
less.
>
> >Further, recent brain research makes it quite clear that human beings
have two clear and distinct sources of "morality" -- both logical and
emotional
>
> poor design from my perspective..
>
> >I would strongly argue that an intelligence with well-designed feelings
is far, far more likely to stay Friendly than an intelligence without
feelings
>
> AI without feelings (unlike its user) cannot really get unfriendly.
> It's just a tool (like a knife).
>
> >how giving a goal of "avoid x" is truly *different* from discomfort
>
> It's the "do" vs "NEED to do".
> Discomfort requires an extra sensor supporting the ability to prefer on
its own.
>
> Jiri
>
>
>
>
> On 5/2/07, Mark Waser < [EMAIL PROTECTED]> wrote:
> >
> >
> > Hi Jiri,
> >
> > OK, I pondered it for a while and the answer is -- "failure modes".
> >
> > Your logic is correct. If I were willing take all of your
assumptions as always true, then I would agree with you. However, logic,
when it relies upon single chain reasoning is relatively fragile. And when
it rests upon bad assumptions, it can be just a roadmap to disaster.
> >
> > I believe that it is very possible (nay, very probable) for an
"Artificial Program Solver" to end up with a goal that was not intended by
you. This can happen in any number of ways from incorrect reasoning in an
imperfect world to robots rights activists deliberately programming
pro-robot goals into them. Your statement "Allowing other sources of high
level goals = potentially asking for conflicts." is undoubtedly true but
believing that you can stop all other sources of high level goals is . . . .
simply incorrect.
> >
> > Now, look at how I reacted to your initial e-mail. My logic said
"Cool! Let's go implement this." My intuition/emotions said "Wait a
minute. There's something wonky here. Even if I can't put my finger on it,
maybe we'd better hold up until we can investigate this further". Now --
which way would you like your Jupiter brain to react?
> >
> > Richard Loosemoore has suggested on this list that Friendliness
could also be implemented as a large number of loose constraints. I view
emotions as sort of operating this way and, in part, serving this purpose.
Further, recent brain research makes it quite clear that human beings have
two clear and distinct sources of "morality" -- both logical and emotional
(http://www.slate.com/id/2162998/pagenum/all/#page_start
). This is, in part, what I was thinking of when I listed "b) provide
pre-programmed constraints (for when logical reasoning doesn't have enough
information)" as one of the reasons why emotion was required.
> >
> > I would strongly argue that an intelligence with well-designed
feelings is far, far more likely to stay Friendly than an intelligence
without feelings -- and I would argue that there is substantial evidence for
this as well in our perception of and stories about "emotionless" people.
> >
> > Mark
> >
> > P.S. Great discussion. Thank you.
> >
> > ----- Original Message -----
> > From: Jiri Jelinek
> > To: [email protected]
> >
> > Sent: Tuesday, May 01, 2007 6:21 PM
> > Subject: Re: [agi] Pure reason is a disease.
> >
> > Mark,
> >
> > >I understand your point but have an emotional/ethical problem with it.
I'll have to ponder that for a while.
> >
> > Try to view our AI as an extension of our intelligence rather than
purely-its-own-kind.
> >
> >
> > >> For humans - yes, for our artificial problem solvers - emotion is a
disease.
> >
> > >What if the emotion is solely there to enforce our goals?
> > >Or maybe better ==> Not violate our constraints = comfortable, violate
our constraints = feel discomfort/sick/pain.
> > Intelligence is meaningless without discomfort. Unless your PC gets some
sort of "feel card", it cannot really prefer, cannot set goal(s), and cannot
have "hard feelings" about working extremely hard for you. You can a) spend
time figuring out how to build the card, build it, plug it in, and (with
potential risks) tune it to make it friendly enough so it will actually come
up with goals that are compatible enough with your goals *OR* b) you can
"simply" tell your "feeling-free" AI what problems you want it to work on.
Your choice.. I hope we are eventually not gonna end up asking the "b)"
solutions how to clean up a great mess caused by the "a)" solutions.
> >
> > Best,
> > Jiri Jelinek
> >
> >
> > On 5/1/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> > >
> > >
> > > >> emotions.. to a) provide goals.. b) provide pre-programmed
constraints, and c) enforce urgency.
> > > > Our AI = our tool = should work for us = will get high level goals
(+ urgency info and constraints) from us. Allowing other sources of high
level goals = potentially asking for conflicts. > For sub-goals, AI can go
with reasoning.
> > >
> > > Hmmm. I understand your point but have an emotional/ethical problem
with it. I'll have to ponder that for a while.
> > >
> > > > For humans - yes, for our artificial problem solvers - emotion is a
disease.
> > >
> > > What if the emotion is solely there to enforce our goals? Fulfill our
goals = be happy, fail at our goals = be *very* sad. Or maybe better ==>
Not violate our constraints = comfortable, violate our constraints = feel
discomfort/sick/pain.
> > >
> > >
> > >
> > > ----- Original Message -----
> > > From: Jiri Jelinek
> > > To: [email protected]
> > >
> > > Sent: Tuesday, May 01, 2007 2:29 PM
> > > Subject: Re: [agi] Pure reason is a disease.
> > >
> > > >emotions.. to a) provide goals.. b) provide pre-programmed
constraints, and c) enforce urgency.
> > >
> > > Our AI = our tool = should work for us = will get high level goals (+
urgency info and constraints) from us. Allowing other sources of high level
goals = potentially asking for conflicts. For sub-goals, AI can go with
reasoning.
> > >
> > > >Pure reason is a disease
> > >
> > > For humans - yes, for our artificial problem solvers - emotion is a
disease.
> > >
> > > Jiri Jelinek
> > >
> > >
> > >
> > > On 5/1/07, Mark Waser < [EMAIL PROTECTED]> wrote:
> > > >
> > > >
> > > >
> > > > >> My point, in that essay, is that the nature of human emotions is
rooted in the human brain architecture,
> > > >
> > > > I'll agree that human emotions are rooted in human brain
architecture but there is also the question -- is there something analogous
to emotion which is generally necessary for *effective* intelligence? My
answer is a qualified but definite yes since emotion clearly serves a number
of purposes that apparently aren't otherwise served (in our brains) by our
pure logical reasoning mechanisms (although, potentially, there may be
something else that serves those purposes equally well). In particular,
emotions seem necessary (in humans) to a) provide goals, b) provide
pre-programmed constraints (for when logical reasoning doesn't have enough
information), and c) enforce urgency.
> > > >
> > > > Without looking at these things that emotions provide, I'm not
sure that you can create an *effective* general intelligence (since these
roles need to be filled by *something*).
> > > >
> > > > >> Because of the difference mentioned in the prior paragraph, the
rigid distinction between emotion and reason that exists in the human brain
will not exist in a well-design AI.
> > > >
> > > > Which is exactly why I was arguing that emotions and reason (or
feeling and thinking) were a spectrum rather than a dichotomy.
> > > >
> > > >
> > > >
> > > > ----- Original Message -----
> > > > From: Benjamin Goertzel
> > > > To: [email protected]
> > > > Sent: Tuesday, May 01, 2007 1:05 PM
> > > > Subject: Re: [agi] Pure reason is a disease.
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On 5/1/07, Mark Waser <[EMAIL PROTECTED] > wrote:
> > > > >
> > > > >
> > > > > >> Well, this tells you something interesting about the human
cognitive architecture, but not too much about intelligence in general...
> > > > >
> > > > > How do you know that it doesn't tell you much about intelligence
in general? That was an incredibly dismissive statement. Can you justify
it?
> > > >
> > > >
> > > > Well I tried to in the essay that I pointed to in my response.
> > > >
> > > > My point, in that essay, is that the nature of human emotions is
rooted in the human brain architecture, according to which our systemic
physiological responses to cognitive phenomena ("emotions") are rooted in
primitive parts of the brain that we don't have much conscious introspection
into. So, we actually can't reason about the intermediate conclusions that
go into our emotional reactions very easily, because the "conscious,
reasoning" parts of our brains don't have the ability to look into the
intermediate results stored and manipulated within the more primitive
"emotionally reacting" parts of the brain. So our deliberative
consciousness has choice of either
> > > >
> > > > -- accepting not-very-thoroughly-analyzable outputs from the
emotional parts of the brain
> > > >
> > > > or
> > > >
> > > > -- rejecting them
> > > >
> > > > and doesn't have the choice to focus deliberative attention on the
intermediate steps used by the emotional brain to arrive at its conclusions.
> > > >
> > > > Of course, through years of practice one can learn to bring more and
more of the emotional brain's operations into the scope of conscious
deliberation, but one can never do this completely due to the structure of
the human brain.
> > > >
> > > > On the other hand, an AI need not have the same restrictions. An AI
should be able to introspect into the intermediary conclusions and
manipulations used to arrive at its "feeling responses". Yes there are
restrictions on the amount of introspection possible, imposed by
computational resource limitations; but this is different than the blatant
and severe architectural restrictions imposed by the design of the human
brain.
> > > >
> > > > Because of the difference mentioned in the prior paragraph, the
rigid distinction between emotion and reason that exists in the human brain
will not exist in a well-design AI.
> > > >
> > > > Sorry for not giving references regarding my analysis of the human
cognitive/neural system -- I have read them but don't have the reference
list at hand. Some (but not a thorough list) are given in the article I
referenced before.
> > > >
> > > > -- Ben G
> > >
> >
>
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936