On Saturday, February 22, 2020, at 11:31 AM, Stanley Nilsen wrote:
> Simply to say that a "goal" is the way
> you determine what is best (e.g. does it "lead to" the goal) is to
> miss the point that goals need to constantly change when
> circumstances change.
Instrumental goals or subgoals constantly change when circumstances change, in
order to serve the fulfillment of intrinsic, fundamental, top-level goals,
which do not change. If you can imagine changing one of your intrinsic goals
in order to achieve a "better" result, then it's not really an intrinsic goal.
There's some standard by which you determine what is "better," and meeting or
realizing that standard to the best of your ability is your real intrinsic goal.
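
To make that distinction concrete, here's a toy sketch (my own illustrative framing, not anything you proposed, and all the names are made up): the intrinsic utility function never changes, while the instrumental subgoal is re-derived every time circumstances change, always in service of that same fixed standard.

    # Minimal sketch: a fixed intrinsic goal with instrumental subgoals
    # that are recomputed whenever circumstances change. All names here
    # are illustrative assumptions, not anyone's actual architecture.

    def intrinsic_utility(state):
        # The fixed, top-level standard of "better": here, simply how
        # much food the agent has stockpiled. This function never changes.
        return state["food"]

    def choose_subgoal(state):
        # Instrumental subgoals are re-derived from current circumstances,
        # always in service of the same intrinsic utility.
        if state["weather"] == "storm":
            return "shelter_in_place"   # protects existing food stores
        if state["food"] < 10:
            return "forage"             # directly raises intrinsic utility
        return "build_storage"          # raises future intrinsic utility

    if __name__ == "__main__":
        for circumstances in [
            {"food": 3, "weather": "clear"},
            {"food": 3, "weather": "storm"},
            {"food": 50, "weather": "clear"},
        ]:
            subgoal = choose_subgoal(circumstances)
            print(f"{circumstances} -> subgoal: {subgoal}, "
                  f"intrinsic utility: {intrinsic_utility(circumstances)}")

The subgoal changes with the weather and the food supply, but changing intrinsic_utility itself would only make sense relative to some further standard, which would then be the real intrinsic goal.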
On Saturday, February 22, 2020, at 11:31 AM, Stanley Nilsen wrote:
> I wouldn't say that I argue for "morality" but rather that there
> needs to be a method to determine what is "better." It doesn't have
> to be related to the Ten Commandments, just has to be a method of
> evaluating what is best.
The whole concept of a non-arbitrary "better" is a moral thing. If you are
firmly convinced that one outcome is "better" than another under the
circumstances, and that anyone in those circumstances "should" strive for that
outcome, then you have a moral standard. It doesn't have to be explicitly
religious to count as morality.
On Saturday, February 22, 2020, at 11:31 AM, Stanley Nilsen wrote:
> General intelligence isn't about one goal, it's more about an agent
> being useful in whatever circumstances it finds itself in.
Sooo, then, general intelligence is about the goal of being useful, however you
define "useful."
On Saturday, February 22, 2020, at 11:31 AM, Stanley Nilsen wrote:
> To argue that an agent has "general" intelligence and gets hung
> up on a specific goal, is to say that the agent doesn't function
> very well "in general."
If you're correct about that, then humans don't have general intelligence,
because we have a collection of specific goals. If our goal set seems
"general," it's only because you're not imagining the full possible landscape
of goals. Have you heard of the Ameglian Major Cows, from Douglas Adams'
stories? They have an intrinsic goal of being eaten by somebody, and will
offer themselves to other creatures as food at any opportunity. No human would
adopt such a goal, except perhaps as an instrumental goal in service of some
other goal: if your family were in danger and getting eaten would somehow save
them, you might choose to be eaten in order to protect them.