On Monday 26 May 2008 09:55:14 am, Mark Waser wrote:
Josh,
Thank you very much for the pointers (and replying so rapidly).
You're welcome -- but also lucky; I read/reply to this list a bit sporadically in general.
- Original Message -
From: J Storrs Hall, PhD [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, May 27, 2008 8:04 AM
Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]
You're very right that people misinterpret and over-extrapolate econ and game theory, but when properly understood and applied, they are a valuable tool for analyzing the forces shaping the further evolution of AGIs and
And again, *thank you* for a great pointer!
On Monday 26 May 2008 06:55:48 am, Mark Waser wrote:
The problem with accepted economics and game theory is that in a proper
scientific sense, they actually prove very little and certainly far, FAR
less than people extrapolate them to mean (or worse yet, prove).
Abusus non tollit usum. (Misuse does not preclude proper use.)
- Original Message
From: Richard Loosemore [EMAIL PROTECTED]
Richard Loosemore said:
If you look at his paper carefully, you will see that at every step of
the way he introduces assumptions as if they were obvious facts ... and
in all the cases I have bothered to think through, these
The paper can be found at
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
Read the appendix, p37ff. He's not making arguments -- he's explaining, with a few pointers into the literature, some parts of completely standard and accepted economics and game theory.
- Original Message -
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, May 24, 2008 10:18 PM
Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity
Outcomes...]
enamored with/ensnared in his MES vision that he may well be violating his
own concerns about building complex systems.
- Original Message -
From: Jim Bromer
To: agi@v2.listbox.com
Sent: Sunday, May 25, 2008 2:22 PM
Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]
On Sunday 25 May 2008 07:51:59 pm, Richard Loosemore wrote:
This is NOT the paper that is under discussion.
WRONG.
This is the paper I'm discussing, and is therefore the paper under discussion.
In the context of Steve's paper, however, rational simply means an agent who
does not have a preference circularity.
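The "preference circularity" notion Josh invokes can be made concrete with a small sketch (entirely my own illustration, not code from the thread or from Omohundro's paper): a strictly preferring agent is irrational in this narrow sense exactly when its preference graph contains a cycle, e.g. A over B, B over C, C over A, which leaves it open to being "money-pumped" around the loop.

```python
# A minimal sketch of detecting a preference circularity: model strict
# preference as a directed graph and look for a cycle with depth-first
# search.  The dict-of-sets representation is an assumption for
# illustration, not anyone's actual agent design.

def has_preference_cycle(prefers):
    """prefers: dict mapping an option to the set of options it beats.
    Returns True if the strict-preference graph contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / on stack / done
    color = {x: WHITE for x in prefers}

    def dfs(x):
        color[x] = GRAY
        for y in prefers.get(x, ()):
            if color.get(y, WHITE) == GRAY:
                return True            # back edge: a circularity
            if color.get(y, WHITE) == WHITE:
                color.setdefault(y, WHITE)
                if dfs(y):
                    return True
        color[x] = BLACK
        return False

    return any(color[x] == WHITE and dfs(x) for x in list(prefers))

# A > B > C but C > A: circular, hence "irrational" in this narrow sense.
circular = {"A": {"B"}, "B": {"C"}, "C": {"A"}}
# A > B, A > C, B > C: consistent (acyclic) preferences.
acyclic = {"A": {"B", "C"}, "B": {"C"}, "C": set()}
```

An agent whose preferences pass this check can be ranked by a utility function; one that fails it cannot, which is all "rational" means in the context of Steve's paper.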
On Sunday 25 May 2008 10:19:35 am, Mark Waser wrote:
Rationality and irrationality are interesting subjects . . . .
Many people who endlessly tout rationality use it as an
Josh, are you sure you're old enough to be using a computer without
regarding how you
believed an MES system was different from a system with a *large* number of
goal stacks.
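For readers following the goal-stack-vs-MES contrast in this thread, a minimal sketch of the "goal stack" side may help (entirely my own illustration; the function names and toy decomposition are assumptions, not Richard's or Mark's actual designs): a single top-level goal is pushed first, and achieving a goal may push subgoals, which are worked off LIFO until the stack is empty.

```python
# A minimal goal-stack agent loop: pop the current goal, decompose it
# into subgoals if possible, otherwise act on it directly.

def run_goal_stack(top_goal, decompose, achieve):
    """decompose(goal) -> list of subgoals (empty if primitive);
    achieve(goal) acts on a primitive goal.  Returns the order in
    which primitive goals were achieved."""
    stack = [top_goal]
    trace = []
    while stack:
        goal = stack.pop()
        subgoals = decompose(goal)
        if subgoals:
            # Push in reverse so the first subgoal is handled next.
            stack.extend(reversed(subgoals))
        else:
            achieve(goal)
            trace.append(goal)
    return trace

# Toy decomposition: "make tea" -> boil water, then steep leaves.
plans = {"make tea": ["boil water", "steep leaves"]}
done = []
run_goal_stack("make tea", lambda g: plans.get(g, []), done.append)
```

The point at issue in the thread is what happens when there are many such stacks, or none at all (as in an MES design), not the mechanics of any single stack.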
- Original Message -
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, May 23, 2008 9:22 PM
Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re
On Saturday 24 May 2008 06:55:24 pm, Mark Waser wrote:
...Omohundro's claim...
YES! But his argument is that to fulfill *any* motivation, there are
generic submotivations (protect myself, accumulate power, don't let my
motivation get perverted) that will further the search to fulfill your
Mark Waser wrote:
So if Omohundro's claim rests on the fact that being self-improving
is part of the AGI's makeup, and that this will cause the AGI to do
certain things, develop certain subgoals etc. I say that he has
quietly inserted a *motivation* (or rather assumed it: does he ever
say
I was sitting in the room when they were talking about it and I didn't
feel like speaking up at the time (why break my streak?) but I felt he
was just wrong. It seemed like you could boil the claim down to this:
If you are sufficiently advanced, and you have a goal and some
ability to
Kaj Sotala wrote:
Richard,
again, I must sincerely apologize for responding to this so
horrendously late. It's a dreadfully bad habit of mine: I get an e-mail
(or blog comment, or forum message, or whatever) that requires some
thought before I respond, so I don't answer it right away... and
Mark Waser wrote:
he makes a direct reference to goal-driven systems, but even more
importantly he declares that these bad behaviors will *not* be the result
of us programming the behaviors in at the start but in an MES
system nothing at all will happen unless the designer makes an explicit
Vladimir,
On 5/7/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
See http://www.overcomingbias.com/2008/01/newcombs-proble.html
This is a PERFECT talking point for the central point that I have been
trying to make. Belief in the Omega discussed early in that article is
essentially a religious
Steve,
I suspect I'll regret asking, but...
Does this rational belief make a difference to intelligence? (For the
moment confining the idea of intelligence to making good choices.)
If the AGI rationalized the existence of a higher power, what ultimate
bad choice do you see as a result?
On 5/7/08, Steve Richfield [EMAIL PROTECTED] wrote:
Story: I recently attended an SGI Buddhist meeting with a friend who was a
member there. After listening to their discussions, I asked if there was
anyone there (from ~30 people) who had ever found themselves in a position of
having to
Matt,
On 5/6/08, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Steve Richfield [EMAIL PROTECTED] wrote:
I have played tournament chess. However, when faced with a REALLY GREAT chess player (e.g. national champion), as I have had the pleasure of on a couple of occasions, they at first
Kaj,
On 5/6/08, Kaj Sotala [EMAIL PROTECTED] wrote:
Certainly a rational AGI may find it useful to appear irrational, but
that doesn't change the conclusion that it'll want to think rationally
at the bottom, does it?
The concept of rationality contains a large social component. For example,
On 5/7/08, Kaj Sotala [EMAIL PROTECTED] wrote:
Certainly a rational AGI may find it useful to appear irrational, but
that doesn't change the conclusion that it'll want to think rationally
at the bottom, does it?
Oh - and see also http://www.saunalahti.fi/~tspro1/reasons.html ,
especially
On Wed, May 7, 2008 at 11:14 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
On 5/6/08, Matt Mahoney [EMAIL PROTECTED] wrote:
As your example illustrates, a higher intelligence will appear to be
irrational, but you cannot conclude from this that irrationality
implies intelligence.
Neither
Steve Richfield wrote:
...
have played tournament chess. However, when faced with a REALLY GREAT
chess player (e.g. national champion), as I have had the pleasure of
on a couple of occasions, they at first appear to play as novices,
making unusual and apparently stupid moves that I can't
Kaj, Richard, et al,
On 5/5/08, Kaj Sotala [EMAIL PROTECTED] wrote:
Drive 2: AIs will want to be rational
This is basically just a special case of drive #1: rational agents
accomplish their goals better than irrational ones, and attempts at
self-improvement can be outright harmful if
Charles D Hixson wrote:
Richard Loosemore wrote:
Kaj Sotala wrote:
On 3/3/08, Richard Loosemore [EMAIL PROTECTED] wrote:
...
goals.
But now I ask: what exactly does this mean?
In the context of a Goal Stack system, this would be represented by a
top level goal that was stated in the
On 3/3/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Kaj Sotala wrote:
Alright. But previously, you said that Omohundro's paper, which to me
seemed to be a general analysis of the behavior of *any* minds with
(more or less) explicit goals, looked like it was based on a
'goal-stack'
list before believing
that my paper is anywhere close to final :-)
- Original Message -
From: Kaj Sotala [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, March 11, 2008 10:07 AM
Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity
Outcomes...]
On 3/3
Drive 1: AIs will want to self-improve
This one seems fairly straightforward: indeed, for humans
self-improvement seems to be an essential part in achieving pretty
much *any* goal you are not immediately capable of achieving. If you
don't know how to do something needed to achieve your goal,
On 2/16/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Kaj Sotala wrote:
Well, the basic gist was this: you say that AGIs can't be constructed with built-in goals, because a newborn AGI doesn't yet have built up the concepts needed to represent the goal. Yet humans seem to tend to have
Gah, sorry for the awfully late response. Studies aren't leaving me
the energy to respond to e-mails more often than once in a blue
moon...
On Feb 4, 2008 8:49 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
They would not operate at the proposition level, so whatever
difficulties they have,
Kaj Sotala wrote:
Richard,
[Where's your blog? Oh, and this is a very useful discussion, as it's
given me material for a possible essay of my own as well. :-)]
It is in the process of being set up: I am currently wrestling with the
process of getting to know the newest version (just
On 1/30/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Kaj,
[This is just a preliminary answer: I am composing a full essay now,
which will appear in my blog. This is such a complex debate that it
needs to be unpacked in a lot more detail than is possible here. Richard].
Richard,
On Jan 29, 2008 6:52 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Okay, sorry to hit you with incomprehensible technical detail, but maybe
there is a chance that my garbled version of the real picture will
strike a chord.
The message to take home from all of this is that:
1) There are
On 1/29/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Summary of the difference:
1) I am not even convinced that an AI driven by a GS will ever actually
become generally intelligent, because of the self-contradictions built
into the idea of a goal stack. I am fairly sure that whenever anyone