Mark Waser wrote:
...
The simulator needs to run large populations over large numbers
of generations multiple times with slightly different assumptions.
As such, it doesn't speak directly to "What is a good strategy for an
advanced AI with lots of resources?", but it provides indications.
It's better in the sense of more clearly analogous, but it's worse
because 1) it's harder to analyze and 2) the results are *MUCH* more
equivocal. I'd argue that religion has caused more general suffering than
it has ameliorated. Probably by several orders of magnitude.
I agree. Why do you
Mike Dougherty wrote:
On Wed, Mar 12, 2008 at 8:54 PM, Charles D Hixson
[EMAIL PROTECTED] wrote:
I think that you need to look into the simulations that have been run
involving Evolutionarily Stable Strategies. Friendly covers many
strategies, including (I
And it's a very *good* strategy. But it's not optimal except in certain
constrained situations. Note that all the strategies that I listed were
VERY simple strategies. Tit-for-tat was better than any of them, but it
requires more memory and the ability to recognize and remember individuals. As
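
[Aside, not from the thread: the memory point is easy to see in a toy iterated Prisoner's Dilemma. The sketch below is illustrative only; the payoff numbers and strategy names are my own assumptions. Of the three strategies, only tit-for-tat has to remember what each individual did to it last time.]

# Minimal sketch (assumed payoffs): why tit-for-tat needs per-individual memory
# while "always cooperate" / "always defect" do not.
import itertools

PAYOFF = {  # (my move, their move) -> my payoff; standard Prisoner's Dilemma values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_cooperate(opponent_id, memory):
    return "C"                      # needs no memory at all

def always_defect(opponent_id, memory):
    return "D"                      # needs no memory at all

def tit_for_tat(opponent_id, memory):
    # Must remember each individual's last move toward us; cooperate by default.
    return memory.get(opponent_id, "C")

def play(rounds, strat_a, strat_b):
    mem_a, mem_b = {}, {}
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a("B", mem_a)
        move_b = strat_b("A", mem_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        mem_a["B"], mem_b["A"] = move_b, move_a   # record what the other just did
    return score_a, score_b

strategies = {"cooperator": always_cooperate, "defector": always_defect,
              "tit-for-tat": tit_for_tat}
for (na, a), (nb, b) in itertools.combinations(strategies.items(), 2):
    print(na, "vs", nb, "->", play(200, a, b))
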
Mark Waser wrote:
The trouble with "not stepping on others' goals unless absolutely
necessary" is that it relies on mind-reading. The goals of others are
often opaque and not easily verbalizable even if they think to do so.
The trouble with *the optimal implementation of* not stepping on
I *think* you are assuming that both sides are friendly. If one side is
a person, or group of people, then this is definitely not guaranteed.
I'll grant all your points if both sides are friendly, and each knows
the other to be friendly. Otherwise I think things get messier. So
Mark Waser wrote:
...
= = = = = = = = = =
Play the game by *assuming* that you are a Friendly and asking
yourself what you would do to protect yourself without breaking your
declaration of Friendliness. It's fun and addictive and hopefully
will lead you to declaring Friendliness yourself.
On Wed, Mar 12, 2008 at 8:54 PM, Charles D Hixson
[EMAIL PROTECTED] wrote:
I think that you need to look into the simulations that have been run
involving Evolutionarily Stable Strategies. Friendly covers many
strategies, including (I think) Dove and Retaliator. Retaliator is
almost an
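
[Aside, not from the thread: the "Retaliator is almost an ESS" observation can be checked with the textbook Hawk/Dove/Retaliator payoff matrix and a crude replicator-style update. The sketch below is illustrative only; V, C, the update rate, and the starting frequencies are my own assumptions.]

# Hawk/Dove/Retaliator with assumed resource value V and injury cost C (C > V),
# plus a simple discrete replicator-style update of strategy frequencies.
V, C = 2.0, 4.0
STRATS = ["Hawk", "Dove", "Retaliator"]
# PAYOFF[i][j] = payoff to strategy i when it meets strategy j
PAYOFF = [
    [(V - C) / 2, V,     (V - C) / 2],   # Hawk
    [0.0,         V / 2, V / 2      ],   # Dove
    [(V - C) / 2, V / 2, V / 2      ],   # Retaliator: fights Hawks, shares with Doves
]

def step(freqs, rate=0.1):
    # Strategies doing better than the population average gain frequency share.
    fitness = [sum(PAYOFF[i][j] * freqs[j] for j in range(3)) for i in range(3)]
    avg = sum(f * w for f, w in zip(fitness, freqs))
    new = [freqs[i] * (1 + rate * (fitness[i] - avg)) for i in range(3)]
    total = sum(new)
    return [x / total for x in new]

freqs = [0.1, 0.1, 0.8]   # start mostly Retaliator, with a few Hawks and Doves
for _ in range(500):
    freqs = step(freqs)
print(dict(zip(STRATS, (round(f, 3) for f in freqs))))
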
On 10/03/2008, Mark Waser [EMAIL PROTECTED] wrote:
Do you think that any of this contradicts what I've written thus far? I
don't immediately see any contradictions.
The discussions seem to entirely ignore the role of socialization
in human and animal friendliness. We are a large collection of
autonomous agents that are well-matched in skills and abilities.
If we were unfriendly to one another, we might survive as a species,
but we would not live in cities
Pesky premature e-mail problem . . .
The discussions seem to entirely ignore the role of socialization
in human and animal friendliness. We are a large collection of
autonomous agents that are well-matched in skills and abilities.
If we were unfriendly to one another, we might survive as a
Mark Waser wrote:
If the motives depend on satisficing, and the quest for
unlimited fulfillment is avoided, then this limits the danger. The
universe won't be converted into toothpicks if part of setting the
goal for "toothpicks!" is limiting the quantity of toothpicks.
(Limiting it
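
[Aside, not from the thread: one way to see the satisficing point is to compare a goal with a built-in "enough" threshold to an open-ended maximizer. The sketch below is illustrative only; the function names and numbers are my own assumptions.]

# A satisficing goal stops once its target quantity is met; a maximizer only
# stops when it runs out of resources. Values are arbitrary illustrations.
def make_toothpicks_satisficing(target_quantity, resources):
    made = 0
    while made < target_quantity and resources > 0:
        resources -= 1
        made += 1            # stop as soon as "enough" toothpicks exist
    return made, resources

def make_toothpicks_maximizing(resources):
    made = 0
    while resources > 0:     # no notion of "enough"
        resources -= 1
        made += 1
    return made, resources

print(make_toothpicks_satisficing(1_000, 10_000_000))   # -> (1000, 9999000)
print(make_toothpicks_maximizing(10_000_000))           # -> (10000000, 0)
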
I find myself totally bemused by the recent discussion of AGI friendliness.
I am in sympathy with some aspects of Mark's position, but I also see a
serious problem running through the whole debate: everyone is making
statements based on unstated assumptions about the motivations of AGI systems.
The three most common of these assumptions are:
1) That it will have the same motivations as humans, but with a
tendency toward the worst that we show.
2) That it will have some kind of "Gotta Optimize My Utility
Function" motivation.
3) That it will have an intrinsic urge to
On Mon, Mar 10, 2008 at 5:47 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
In the past I have argued strenuously that (a) you cannot divorce a
discussion of friendliness from a discussion of what design of AGI you
are talking about, and (b) some assumptions about AGI motivation are
I am in sympathy with some aspects of Mark's position, but I also see a
serious problem running through the whole debate: everyone is making
statements based on unstated assumptions about the motivations of AGI
systems.
Bummer. I thought that I had been clearer about my assumptions. Let me
For instance, a Novamente-based AGI will have an explicit utility
function, but only a percentage of the system's activity will be directly
oriented toward fulfilling this utility function.
Some of the system's activity will be spontaneous ... i.e. only
implicitly goal-oriented ... and as such
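
[Aside, not from the thread, and NOT a description of Novamente's actual design: the split the poster describes can be caricatured as a loop in which only some fraction of cycles are explicitly utility-driven and the rest are spontaneous. The 0.6 fraction and the activity names below are pure assumptions.]

# Toy sketch of mixing explicitly goal-driven and spontaneous activity.
import random

def utility_driven_step(rng):
    # Activity explicitly oriented toward the declared utility function.
    return "explicit: worked on a goal from the utility function"

def spontaneous_step(rng):
    # Activity that is only implicitly goal-oriented.
    return "spontaneous: " + rng.choice(["explored", "played", "reorganized memory"])

def run(cycles, explicit_fraction=0.6, seed=0):
    rng = random.Random(seed)
    return [
        utility_driven_step(rng) if rng.random() < explicit_fraction
        else spontaneous_step(rng)
        for _ in range(cycles)
    ]

for entry in run(8):
    print(entry)
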
Mark Waser wrote:
I am in sympathy with some aspects of Mark's position, but I also see
a serious problem running through the whole debate: everyone is
making statements based on unstated assumptions about the motivations
of AGI systems.
Bummer. I thought that I had been clearer about my
First off -- yours was a really helpful post. Thank you!
I think that I need to add a word to my initial assumption . . . .
Assumption - The AGI will be an optimizing goal-seeking entity.
There are two main things.
One is that the statement "The AGI will be a goal-seeking entity" has
many
Mark Waser wrote:
...
The motivation that is in the system is "I want to achieve *my* goals."
The goals that are in the system I deem to be entirely irrelevant
UNLESS they are deliberately and directly contrary to Friendliness. I
am contending that, unless the initial goals are deliberately
I think here we need to consider A. Maslow's hierarchy of needs. That an
AGI won't have the same needs as a human is, I suppose, obvious, but I
think it's still true that it will have a hierarchy (which isn't
strictly a hierarchy). I.e., it will have a large set of motives, and
which it is