Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-09-04 Thread Valentina Poletti
That sounds like a useful purpose. Yeah, I don't believe in fast and quick
methods either, but humans also tend to overestimate their own
capabilities, so it will probably take more time than predicted.

On 9/3/08, William Pearson [EMAIL PROTECTED] wrote:

 2008/8/28 Valentina Poletti [EMAIL PROTECTED]:
  Got ya, thanks for the clarification. That brings up another question.
 Why
  do we want to make an AGI?
 
 

 To understand ourselves as intelligent agents better? It might enable
 us to have decent education policy and rehabilitation of criminals.

 Even if we don't make human-like AGIs, the principles should help us
 understand ourselves, just as the optics of the lens helped us understand
 the eye and the aerodynamics of wings helps us understand bird flight.

 It could also give us more leverage: more brain power on the planet
 to help solve the planet's problems.

 This is all predicated on the idea that fast take-off is pretty much
 impossible. If it is possible, then all bets are off.

 Will






-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken





Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-09-03 Thread Valentina Poletti
So it's about money then.. now THAT makes me feel less worried!! :)

That explains a lot though.

On 8/28/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Valentina Poletti [EMAIL PROTECTED] wrote:
  Got ya, thanks for the clarification. That brings up another question.
 Why do we want to make an AGI?

 I'm glad somebody is finally asking the right question, instead of skipping
 over the specification to the design phase. It would avoid a lot of
 philosophical discussions that result from people having different ideas of
 what AGI should do.

 AGI could replace all human labor, worth about US $2 to $5 quadrillion over
 the next 30 years. We should expect the cost to be of this magnitude, given
 that having it sooner is better than waiting.

 I think AGI will be immensely complex, on the order of 10^18 bits,
 decentralized, competitive, with distributed ownership, like today's
 internet but smarter. It will converse with you fluently but know too much
 to pass the Turing test. We will be totally dependent on it.

 -- Matt Mahoney, [EMAIL PROTECTED]
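
(As a rough sanity check on the order of magnitude of the figure quoted above: a minimal sketch in Python, assuming world GDP of about $60 trillion, a 60% labor share, and 0-5% real growth over 30 years. These inputs are assumptions for illustration, not numbers from the post.)

# Back-of-envelope check of the "$2 to $5 quadrillion" figure (illustrative only).
# Assumed inputs: world GDP ~ $60 trillion (2008), labor share ~ 0.6,
# horizon 30 years, real growth 0-5% per year.
def labor_value(gdp=60e12, labor_share=0.6, years=30, growth=0.03):
    """Total value of human labor over `years`, compounding GDP growth."""
    return sum(gdp * labor_share * (1 + growth) ** t for t in range(years))

for g in (0.0, 0.03, 0.05):
    print(f"growth {g:.0%}: ${labor_value(growth=g) / 1e15:.1f} quadrillion")
# growth 0%: $1.1 quadrillion; 3%: $1.7; 5%: $2.4 -- the same order of
# magnitude as the figure quoted above.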








Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-09-03 Thread William Pearson
2008/8/28 Valentina Poletti [EMAIL PROTECTED]:
 Got ya, thanks for the clarification. That brings up another question. Why
 do we want to make an AGI?



To understand ourselves as intelligent agents better? It might enable
us to have decent education policy and rehabilitation of criminals.

Even if we don't make human-like AGIs, the principles should help us
understand ourselves, just as the optics of the lens helped us understand
the eye and the aerodynamics of wings helps us understand bird flight.

It could also give us more leverage: more brain power on the planet
to help solve the planet's problems.

This is all predicated on the idea that fast take-off is pretty much
impossible. If it is possible, then all bets are off.

 Will




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Mark Waser

Hi Terren,

Obviously you need to complicate your original statement "I believe that 
ethics is *entirely* driven by what is best evolutionarily..." in such a 
way that we don't derive ethics from parasites.


Saying that ethics is entirely driven by evolution is NOT the same as saying 
that evolution always results in ethics.  Ethics is 
computationally/cognitively expensive to successfully implement (because a 
stupid implementation gets exploited to death).  There are many evolutionary 
niches that won't support that expense and the successful entities in those 
niches won't be ethical.  Parasites are a prototypical/archetypal example of 
such a niche since they tend to be degeneratively streamlined to the point of 
being stripped down to virtually nothing except that which is necessary for 
their parasitism.  Effectively, they are single-goal entities -- the single 
most dangerous type of entity possible.



You did that by invoking social behavior - parasites are not social beings


I claim that ethics is nothing *but* social behavior.

So from there you need to identify how evolution operates in social groups 
in such a way that you can derive ethics.


OK.  How about this . . . . Ethics is that behavior that, when shown by you, 
makes me believe that I should facilitate your survival.  Obviously, it is 
then to your (evolutionary) benefit to behave ethically.
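
(A minimal sketch of why such behavior can be an evolutionary benefit in repeated interactions. The payoff numbers and strategies below are illustrative assumptions, not anything specified in the thread: helping costs 1 and delivers 3 to the recipient, and a "reciprocator" only helps a partner who helped it last time.)

# Toy iterated game: payoffs stand in for survival chances.
def play(strategy_a, strategy_b, rounds=100, benefit=3, cost=1):
    """Total payoffs for two strategies that map the partner's previous
    action ('help', 'exploit', or None on the first round) to their own."""
    pay_a = pay_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        act_a, act_b = strategy_a(last_b), strategy_b(last_a)
        if act_a == "help":
            pay_a -= cost
            pay_b += benefit
        if act_b == "help":
            pay_b -= cost
            pay_a += benefit
        last_a, last_b = act_a, act_b
    return pay_a, pay_b

reciprocator = lambda last: "help" if last in (None, "help") else "exploit"
exploiter = lambda last: "exploit"

print(play(reciprocator, reciprocator))  # (200, 200): mutual help compounds
print(play(exploiter, reciprocator))     # (3, -1): exploitation pays once, then stops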


As Matt alluded to before, would you agree that ethics is the result of 
group selection? In other words, that human collectives with certain 
taboos make the group as a whole more likely to persist?


Matt is decades out of date and needs to catch up on his reading.

Ethics is *NOT* the result of group selection.  The *ethical evaluation of a 
given action* is a meme and driven by the same social/group forces as any 
other meme.  Rational memes, when adopted by a group, can enhance group 
survival but . . . . there are also mechanisms by which seemingly irrational 
memes can also enhance survival indirectly in *exactly* the same fashion as 
the seemingly irrational tail displays of peacocks facilitate their group 
survival by identifying the fittest individuals.  Note that it all depends 
upon circumstances . . . .


Ethics is first and foremost what society wants you to do.  But, society 
can't be too pushy in its demands or individuals will defect and society 
will break down.  So, ethics turns into a matter of determining what is the 
behavior that is best for society (and thus the individual) without unduly 
burdening the individual (which would promote defection, cheating, etc.). 
This behavior clearly differs based upon circumstances but, equally clearly, 
should be able to be derived from a reasonably small set of rules that 
*will* be context dependent.  Marc Hauser has done a lot of research, and 
human morality seems to be designed exactly that way (in terms of how it 
varies across societies, as if it is based upon fairly simple rules with a 
small number of variables/variable settings).  I highly recommend his 
writings (and being familiar with them is pretty much a necessity if you 
want to have a decent advanced/current scientific discussion of ethics and 
morals).


   Mark

- Original Message - 
From: Terren Suydam [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 10:54 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))





Hi Mark,

Obviously you need to complicate your original statement "I believe that 
ethics is *entirely* driven by what is best evolutionarily..." in such a 
way that we don't derive ethics from parasites. You did that by invoking 
social behavior - parasites are not social beings.


So from there you need to identify how evolution operates in social groups 
in such a way that you can derive ethics. As Matt alluded to before, would 
you agree that ethics is the result of group selection? In other words, 
that human collectives with certain taboos make the group as a whole more 
likely to persist?


Terren


--- On Thu, 8/28/08, Mark Waser [EMAIL PROTECTED] wrote:


From: Mark Waser [EMAIL PROTECTED]
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI 
(was Re: [agi] The Necessity of Embodiment))

To: agi@v2.listbox.com
Date: Thursday, August 28, 2008, 9:21 PM
Parasites are very successful at surviving but they
don't have other
goals.  Try being parasitic *and* succeeding at goals other
than survival.
I think you'll find that your parasitic ways will
rapidly get in the way of
your other goals the second that you need help (or even
non-interference)
from others.

- Original Message - 
From: Terren Suydam [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 5:03 PM
Subject: Re: AGI goals (was Re: Information theoretic
approaches to AGI (was
Re: [agi] The Necessity of Embodiment))



 --- On Thu, 8/28/08, Mark Waser
[EMAIL PROTECTED] wrote:
 Actually, I *do* 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Terren Suydam

--- On Fri, 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
 Saying that ethics is entirely driven by evolution is NOT
 the same as saying 
 that evolution always results in ethics.  Ethics is 
 computationally/cognitively expensive to successfully
 implement (because a 
 stupid implementation gets exploited to death).  There are
 many evolutionary 
 niches that won't support that expense and the
 successful entities in those 
 niches won't be ethical.  Parasites are a
 prototypical/archetypal example of 
 such a niche since they tend to degeneratively streamlined
 to the point of 
 being stripped down to virtually nothing except that which
 is necessary for 
 their parasitism.  Effectively, they are single goal
 entities -- the single 
 most dangerous type of entity possible.

Works for me. Just wanted to point out that saying "ethics is entirely driven 
by evolution" is not enough to communicate with precision what you mean by that.
 
 OK.  How about this . . . . Ethics is that behavior that,
 when shown by you, 
 makes me believe that I should facilitate your survival. 
 Obviously, it is 
 then to your (evolutionary) benefit to behave ethically.

Ethics can't be explained simply by examining interactions between individuals. 
It's an emergent dynamic that requires explanation at the group level. It's a 
set of culture-wide rules and taboos - how did they get there?
 
 Matt is decades out of date and needs to catch up on his
 reading.

Really? I must be out of date too then, since I agree with his explanation of 
ethics. I haven't read Hauser yet though, so maybe you're right.
 
 Ethics is *NOT* the result of group selection.  The
 *ethical evaluation of a 
 given action* is a meme and driven by the same social/group
 forces as any 
 other meme.  Rational memes when adopted by a group can
 enhance group 
 survival but . . . . there are also mechanisms by which
 seemingly irrational 
 memes can also enhance survival indirectly in *exactly* the
 same fashion as 
 the seemingly irrational tail displays of
 peacocks facilitates their group 
 survival by identifying the fittest individuals.  Note that
 it all depends 
 upon circumstances . . . .
 
 Ethics is first and foremost what society wants you to do. 
 But, society 
 can't be too pushy in it's demands or individuals
 will defect and society 
 will break down.  So, ethics turns into a matter of
 determining what is the 
 behavior that is best for society (and thus the individual)
 without unduly 
 burdening the individual (which would promote defection,
 cheating, etc.). 
 This behavior clearly differs based upon circumstances but,
 equally clearly, 
 should be able to be derived from a reasonably small set of
 rules that 
 *will* be context dependent.  Marc Hauser has done a lot of
 research and 
 human morality seems to be designed exactly that way (in
 terms of how it 
 varies across societies as if it is based upon fairly
 simple rules with a 
 small number of variables/variable settings.  I highly
 recommend his 
 writings (and being familiar with them is pretty much a
 necessity if you 
 want to have a decent advanced/current scientific
 discussion of ethics and 
 morals).
 
 Mark

I fail to see how your above explanation is anything but an elaboration of the 
idea that ethics is due to group selection. The following statements all 
support it: 
 - "memes [rational or otherwise] when adopted by a group can enhance group 
survival"
 - "Ethics is first and foremost what society wants you to do."
 - "ethics turns into a matter of determining what is the behavior that is best 
for society"

Terren


  




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Mark Waser

OK.  How about this . . . . Ethics is that behavior that,
when shown by you,
makes me believe that I should facilitate your survival.
Obviously, it is
then to your (evolutionary) benefit to behave ethically.


Ethics can't be explained simply by examining interactions between 
individuals. It's an emergent dynamic that requires explanation at the 
group level. It's a set of culture-wide rules and taboos - how did they 
get there?


I wasn't explaining ethics with that statement.  I was identifying how 
evolution operates in social groups in such a way that I can derive ethics 
(in direct response to your question).


Ethics is a system.  The *definition of ethical behavior* for a given group 
is "an emergent dynamic that requires explanation at the group level" 
because it includes what the group believes and values -- but ethics (the 
system) does not require belief history (except insofar as it affects 
current belief).  History, circumstances, and understanding why a culture 
has the rules and taboos that it has is certainly useful for deriving 
more effective rules and taboos -- but it doesn't alter the underlying 
system, which is quite simple . . . . being perceived as helpful generally 
improves your survival chances, being perceived as harmful generally 
decreases your survival chances (unless you are able to overpower the 
effect).


Really? I must be out of date too then, since I agree with his explanation 
of ethics. I haven't read Hauser yet though, so maybe you're right.


The specific phrase you cited was "human collectives with certain taboos 
make the group as a whole more likely to persist."  The correct term of art 
for this is "group selection" and it has pretty much *NOT* been supported by 
scientific evidence and has fallen out of favor.


Matt also tends to conflate a number of ideas which should be kept separate, 
which you seem to be doing as well.  There need to be distinctions between ethical 
systems, ethical rules, cultural variables, and evaluations of ethical 
behavior within a specific cultural context (i.e. the results of the system 
given certain rules -- which at the first level seem to be reasonably 
standard -- with certain cultural variables as input).  Hauser's work 
identifies some of the common first-level rules and how cultural variables 
affect the results of those rules (and the derivation of secondary rules). 
It's good, detailed, experiment-based stuff rather than the vague hand-waving 
that you're getting from armchair philosophers.
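
(To make that distinction concrete, here is a toy rendering -- a construction for illustration only, not Hauser's model and not anything specified in this thread -- of a fixed first-level rule whose evaluation depends on cultural variables supplied as input. Every name in it is hypothetical.)

# First-level rule (held fixed): intentional, unconsented harm to a protected party is wrong.
# The cultural variables decide who counts as protected and what counts as harm.
def evaluate(action, culture):
    """Apply the common rule with culture-specific variables plugged in."""
    protected = action["target"] in culture["protected_parties"]
    harmful = action["effect"] in culture["counts_as_harm"]
    return "wrong" if (protected and harmful and action["intentional"]
                       and not action["consented"]) else "permitted"

culture_a = {"protected_parties": {"kin", "stranger"}, "counts_as_harm": {"injury", "insult"}}
culture_b = {"protected_parties": {"kin"}, "counts_as_harm": {"injury"}}

act = {"target": "stranger", "effect": "insult", "intentional": True, "consented": False}
print(evaluate(act, culture_a), evaluate(act, culture_b))  # wrong permitted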


I fail to see how your above explanation is anything but an elaboration of 
the idea that ethics is due to group selection. The following statements 
all support it:
- memes [rational or otherwise] when adopted by a group can enhance group 
survival

- Ethics is first and foremost what society wants you to do.
- ethics turns into a matter of determining what is the behavior that is 
best for society


I think we're stumbling over your use of the term "group selection" and 
what you mean by "ethics is due to group selection."  Yes, the group 
selects the cultural variables that affect the results of the common 
ethical rules.  But "group selection" as a term of art in evolution 
generally means that the group itself is being selected or co-evolved -- 
in this case, presumably by ethics -- which is *NOT* correct by current 
scientific understanding.  The first phrase that you quoted was intended to 
point out that both good and bad memes can positively affect survival -- so 
co-evolution doesn't work.  The second phrase that you quoted deals with the 
results of the system applying common ethical rules with cultural variables. 
The third phrase that you quoted talks about determining what the best 
cultural variables (and maybe secondary rules) are for a given set of 
circumstances -- and should have been better phrased as "Improving ethical 
evaluations turns into a matter of determining . . ."







Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Eric Burton
I remember Richard Dawkins saying that group selection is a lie. Maybe
we should look past it now? It seems like a problem.

On 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
 OK.  How about this . . . . Ethics is that behavior that,
 when shown by you,
 makes me believe that I should facilitate your survival.
 Obviously, it is
 then to your (evolutionary) benefit to behave ethically.

 Ethics can't be explained simply by examining interactions between
 individuals. It's an emergent dynamic that requires explanation at the
 group level. It's a set of culture-wide rules and taboos - how did they
 get there?

 I wasn't explaining ethics with that statement.  I was identifying how
 evolution operates in social groups in such a way that I can derive ethics
 (in direct response to your question).

 Ethics is a system.  The *definition of ethical behavior* for a given group
 is an emergent dynamic that requires explanation at the group level
 because it includes what the group believes and values -- but ethics (the
 system) does not require belief history (except insofar as it affects
 current belief).  History, circumstances, and understanding what a culture
 has the rules and taboos that they have is certainly useful for deriving
 more effective rules and taboos -- but it doesn't alter the underlying
 system which is quite simple . . . . being perceived as helpful generally
 improves your survival chances, being perceived as harmful generally
 decreases your survival chances (unless you are able to overpower the
 effect).

 Really? I must be out of date too then, since I agree with his explanation

 of ethics. I haven't read Hauser yet though, so maybe you're right.

 The specific phrase you cited was human collectives with certain taboos
 make the group as a whole more likely to persist.  The correct term of art
 for this is group selection and it has pretty much *NOT* been supported by
 scientific evidence and has fallen out of favor.

 Matt also tends to conflate a number of ideas which should be separate which
 you seem to be doing as well.  There need to be distinctions between ethical
 systems, ethical rules, cultural variables, and evaluations of ethical
 behavior within a specific cultural context (i.e. the results of the system
 given certain rules -- which at the first-level seem to be reasonably
 standard -- with certain cultural variables as input).  Hauser's work
 identifies some of the common first-level rules and how cultural variables
 affect the results of those rules (and the derivation of secondary rules).
 It's good detailed, experiment-based stuff rather than the vague hand-waving
 that you're getting from armchair philosophers.

 I fail to see how your above explanation is anything but an elaboration of

 the idea that ethics is due to group selection. The following statements
 all support it:
 - memes [rational or otherwise] when adopted by a group can enhance group

 survival
 - Ethics is first and foremost what society wants you to do.
 - ethics turns into a matter of determining what is the behavior that is
 best for society

 I think we're stumbling over your use of the term group selection  and
 what you mean by ethics is due to group selection.  Yes, the group
 selects the cultural variables that affect the results of the common
 ethical rules.  But group selection as a term of art in evolution
 generally meaning that the group itself is being selected or co-evolved --
 in this case, presumably by ethics -- which is *NOT* correct by current
 scientific understanding.  The first phrase that you quoted was intended to
 point out that both good and bad memes can positively affect survival -- so
 co-evolution doesn't work.  The second phrase that you quoted deals with the
 results of the system applying common ethical rules with cultural variables.
 The third phrase that you quoted talks about determining what the best
 cultural variables (and maybe secondary rules) are for a given set of
 circumstances -- and should have been better phrased as Improving ethical
 evaluations turns into a matter of determining . . . 









Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Mark Waser
Group selection (as the term of art is used in evolutionary biology) does 
not seem to be experimentally supported (and there have been a lot of recent 
experiments looking for such an effect).


It would be nice if people could let the idea drop unless there is actually 
some proof for it other than "it seems to make sense that . . . ."


- Original Message - 
From: Eric Burton [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, August 29, 2008 12:56 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




I remember Richard Dawkins saying that group selection is a lie. Maybe
we should look past it now? It seems like a problem.

On 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:

OK.  How about this . . . . Ethics is that behavior that,
when shown by you,
makes me believe that I should facilitate your survival.
Obviously, it is
then to your (evolutionary) benefit to behave ethically.


Ethics can't be explained simply by examining interactions between
individuals. It's an emergent dynamic that requires explanation at the
group level. It's a set of culture-wide rules and taboos - how did they
get there?


I wasn't explaining ethics with that statement.  I was identifying how
evolution operates in social groups in such a way that I can derive 
ethics

(in direct response to your question).

Ethics is a system.  The *definition of ethical behavior* for a given 
group

is an emergent dynamic that requires explanation at the group level
because it includes what the group believes and values -- but ethics (the
system) does not require belief history (except insofar as it affects
current belief).  History, circumstances, and understanding what a 
culture

has the rules and taboos that they have is certainly useful for deriving
more effective rules and taboos -- but it doesn't alter the underlying
system which is quite simple . . . . being perceived as helpful generally
improves your survival chances, being perceived as harmful generally
decreases your survival chances (unless you are able to overpower the
effect).

Really? I must be out of date too then, since I agree with his 
explanation


of ethics. I haven't read Hauser yet though, so maybe you're right.


The specific phrase you cited was human collectives with certain taboos
make the group as a whole more likely to persist.  The correct term of 
art
for this is group selection and it has pretty much *NOT* been supported 
by

scientific evidence and has fallen out of favor.

Matt also tends to conflate a number of ideas which should be separate 
which
you seem to be doing as well.  There need to be distinctions between 
ethical

systems, ethical rules, cultural variables, and evaluations of ethical
behavior within a specific cultural context (i.e. the results of the 
system

given certain rules -- which at the first-level seem to be reasonably
standard -- with certain cultural variables as input).  Hauser's work
identifies some of the common first-level rules and how cultural 
variables
affect the results of those rules (and the derivation of secondary 
rules).
It's good detailed, experiment-based stuff rather than the vague 
hand-waving

that you're getting from armchair philosophers.

I fail to see how your above explanation is anything but an elaboration 
of


the idea that ethics is due to group selection. The following statements
all support it:
- memes [rational or otherwise] when adopted by a group can enhance 
group


survival
- Ethics is first and foremost what society wants you to do.
- ethics turns into a matter of determining what is the behavior that 
is

best for society


I think we're stumbling over your use of the term group selection  and
what you mean by ethics is due to group selection.  Yes, the group
selects the cultural variables that affect the results of the common
ethical rules.  But group selection as a term of art in evolution
generally meaning that the group itself is being selected or 
co-evolved --

in this case, presumably by ethics -- which is *NOT* correct by current
scientific understanding.  The first phrase that you quoted was intended 
to
point out that both good and bad memes can positively affect survival --  
so
co-evolution doesn't work.  The second phrase that you quoted deals with 
the
results of the system applying common ethical rules with cultural 
variables.

The third phrase that you quoted talks about determining what the best
cultural variables (and maybe secondary rules) are for a given set of
circumstances -- and should have been better phrased as Improving 
ethical

evaluations turns into a matter of determining . . . 









Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Charles Hixson
Dawkins tends to see a truth, and then overstate it.  What he says 
isn't usually exactly wrong, so much as one-sided.  This may be an 
exception.


Some meanings of "group selection" don't appear to map onto reality.  
Others map very weakly.  Some have reasonable explanatory power.  If you 
don't define with precision which meaning you are using, then you invite 
confusion.  As such, it's a term that is better not to use.


But I wouldn't usually call it a lie.  Merely a mistake.  The exact 
nature of the mistake depends on precisely what you mean, and the context 
within which you are using it.  Often it's merely a signal that you are 
confused and don't KNOW precisely what you are talking about, but merely 
the general ball park within which you believe it lies.  Only rarely is 
it intentionally used to confuse things with malice intended.  In that 
final case the term "lie" is appropriate.  Otherwise it's merely 
inadvisable usage.


Eric Burton wrote:

I remember Richard Dawkins saying that group selection is a lie. Maybe
we should look past it now? It seems like a problem.

On 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
  

OK.  How about this . . . . Ethics is that behavior that,
when shown by you,
makes me believe that I should facilitate your survival.
Obviously, it is
then to your (evolutionary) benefit to behave ethically.


Ethics can't be explained simply by examining interactions between
individuals. It's an emergent dynamic that requires explanation at the
group level. It's a set of culture-wide rules and taboos - how did they
get there?
  

I wasn't explaining ethics with that statement.  I was identifying how
evolution operates in social groups in such a way that I can derive ethics
(in direct response to your question).

Ethics is a system.  The *definition of ethical behavior* for a given group
is an emergent dynamic that requires explanation at the group level
because it includes what the group believes and values -- but ethics (the
system) does not require belief history (except insofar as it affects
current belief).  History, circumstances, and understanding what a culture
has the rules and taboos that they have is certainly useful for deriving
more effective rules and taboos -- but it doesn't alter the underlying
system which is quite simple . . . . being perceived as helpful generally
improves your survival chances, being perceived as harmful generally
decreases your survival chances (unless you are able to overpower the
effect).



Really? I must be out of date too then, since I agree with his explanation

of ethics. I haven't read Hauser yet though, so maybe you're right.
  

The specific phrase you cited was human collectives with certain taboos
make the group as a whole more likely to persist.  The correct term of art
for this is group selection and it has pretty much *NOT* been supported by
scientific evidence and has fallen out of favor.

Matt also tends to conflate a number of ideas which should be separate which
you seem to be doing as well.  There need to be distinctions between ethical
systems, ethical rules, cultural variables, and evaluations of ethical
behavior within a specific cultural context (i.e. the results of the system
given certain rules -- which at the first-level seem to be reasonably
standard -- with certain cultural variables as input).  Hauser's work
identifies some of the common first-level rules and how cultural variables
affect the results of those rules (and the derivation of secondary rules).
It's good detailed, experiment-based stuff rather than the vague hand-waving
that you're getting from armchair philosophers.



I fail to see how your above explanation is anything but an elaboration of

the idea that ethics is due to group selection. The following statements
all support it:
- memes [rational or otherwise] when adopted by a group can enhance group

survival
- Ethics is first and foremost what society wants you to do.
- ethics turns into a matter of determining what is the behavior that is
best for society
  

I think we're stumbling over your use of the term group selection  and
what you mean by ethics is due to group selection.  Yes, the group
selects the cultural variables that affect the results of the common
ethical rules.  But group selection as a term of art in evolution
generally meaning that the group itself is being selected or co-evolved --
in this case, presumably by ethics -- which is *NOT* correct by current
scientific understanding.  The first phrase that you quoted was intended to
point out that both good and bad memes can positively affect survival -- so
co-evolution doesn't work.  The second phrase that you quoted deals with the
results of the system applying common ethical rules with cultural variables.
The third phrase that you quoted talks about determining what the best
cultural variables (and maybe secondary rules) are for a given set of
circumstances -- and should have been better phrased as 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Matt Mahoney
Group selection is not dead, just weaker than individual selection. Altruism in 
many species is evidence for its existence. 
http://en.wikipedia.org/wiki/Group_selection

In any case, evolution of culture and ethics in humans is primarily memetic, 
not genetic. Taboos against nudity are nearly universal among cultures with 
language, yet unique to Homo sapiens.

You might believe that certain practices are intrinsically good or bad, not the 
result of group selection. Fine. That is how your beliefs are supposed to work.

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, August 29, 2008 1:13:43 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))

Group selection (as used as the term of art in evolutionary biology) does 
not seem to be experimentally supported (and there have been a lot of recent 
experiments looking for such an effect).

It would be nice if people could let the idea drop unless there is actually 
some proof for it other than it seems to make sense that . . . . 

- Original Message - 
From: Eric Burton [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, August 29, 2008 12:56 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))


I remember Richard Dawkins saying that group selection is a lie. Maybe
 we should look past it now? It seems like a problem.

 On 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
 OK.  How about this . . . . Ethics is that behavior that,
 when shown by you,
 makes me believe that I should facilitate your survival.
 Obviously, it is
 then to your (evolutionary) benefit to behave ethically.

 Ethics can't be explained simply by examining interactions between
 individuals. It's an emergent dynamic that requires explanation at the
 group level. It's a set of culture-wide rules and taboos - how did they
 get there?

 I wasn't explaining ethics with that statement.  I was identifying how
 evolution operates in social groups in such a way that I can derive 
 ethics
 (in direct response to your question).

 Ethics is a system.  The *definition of ethical behavior* for a given 
 group
 is an emergent dynamic that requires explanation at the group level
 because it includes what the group believes and values -- but ethics (the
 system) does not require belief history (except insofar as it affects
 current belief).  History, circumstances, and understanding what a 
 culture
 has the rules and taboos that they have is certainly useful for deriving
 more effective rules and taboos -- but it doesn't alter the underlying
 system which is quite simple . . . . being perceived as helpful generally
 improves your survival chances, being perceived as harmful generally
 decreases your survival chances (unless you are able to overpower the
 effect).

 Really? I must be out of date too then, since I agree with his 
 explanation

 of ethics. I haven't read Hauser yet though, so maybe you're right.

 The specific phrase you cited was human collectives with certain taboos
 make the group as a whole more likely to persist.  The correct term of 
 art
 for this is group selection and it has pretty much *NOT* been supported 
 by
 scientific evidence and has fallen out of favor.

 Matt also tends to conflate a number of ideas which should be separate 
 which
 you seem to be doing as well.  There need to be distinctions between 
 ethical
 systems, ethical rules, cultural variables, and evaluations of ethical
 behavior within a specific cultural context (i.e. the results of the 
 system
 given certain rules -- which at the first-level seem to be reasonably
 standard -- with certain cultural variables as input).  Hauser's work
 identifies some of the common first-level rules and how cultural 
 variables
 affect the results of those rules (and the derivation of secondary 
 rules).
 It's good detailed, experiment-based stuff rather than the vague 
 hand-waving
 that you're getting from armchair philosophers.

 I fail to see how your above explanation is anything but an elaboration 
 of

 the idea that ethics is due to group selection. The following statements
 all support it:
 - memes [rational or otherwise] when adopted by a group can enhance 
 group

 survival
 - Ethics is first and foremost what society wants you to do.
 - ethics turns into a matter of determining what is the behavior that 
 is
 best for society

 I think we're stumbling over your use of the term group selection  and
 what you mean by ethics is due to group selection.  Yes, the group
 selects the cultural variables that affect the results of the common
 ethical rules.  But group selection as a term of art in evolution
 generally meaning that the group itself is being selected or 
 co-evolved --
 in this case, presumably by ethics -- which is *NOT* correct by current
 scientific understanding.  

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Valentina Poletti
Got ya, thanks for the clarification. That brings up another question. Why
do we want to make an AGI?



On 8/27/08, Matt Mahoney [EMAIL PROTECTED] wrote:

  An AGI will not design its goals. It is up to humans to define the goals
 of an AGI, so that it will do what we want it to do.

 Unfortunately, this is a problem. We may or may not be successful in
 programming the goals of AGI to satisfy human goals. If we are not
 successful, then AGI will be useless at best and dangerous at worst. If we
 are successful, then we are doomed because human goals evolved in a
 primitive environment to maximize reproductive success and not in an
 environment where advanced technology can give us whatever we want. AGI will
 allow us to connect our brains to simulated worlds with magic genies, or
 worse, allow us to directly reprogram our brains to alter our memories,
 goals, and thought processes. All rational goal-seeking agents must have a
 mental state of maximum utility where any thought or perception would be
 unpleasant because it would result in a different state.

 -- Matt Mahoney, [EMAIL PROTECTED]
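
(Read literally, the last claim is just that if utility is a fixed function of mental state, then every transition out of the maximum-utility state is a weak loss. A toy rendering with made-up utilities, for illustration only:)

# If utility depends only on the current mental state, any change of state
# from the argmax -- including merely perceiving something -- cannot improve it.
utility = {"bliss": 10, "thinking": 7, "perceiving": 6}   # arbitrary toy values
best = max(utility, key=utility.get)

for state in utility:
    if state != best:
        print(f"{best} -> {state}: change in utility = {utility[state] - utility[best]}")
# Every transition out of "bliss" has a non-positive change, which is the sense
# in which any thought or perception from that state would be "unpleasant".
# (Mark's objection elsewhere in the thread is that utility need not stay fixed
# the longer you remain in a state.)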






Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser
No, the state of ultimate bliss that you, I, and all other rational, goal 
seeking agents seek


Your second statement copied below notwithstanding, I *don't* seek ultimate 
bliss.


You may say that is not what you want, but only because you are unaware of 
the possibilities of reprogramming your brain. It is like being opposed to 
drugs or wireheading. Once you experience it, you can't resist.


It is not what I want *NOW*.  It may be that once my brain has been altered 
by experiencing it, I may want it *THEN* but that has no relevance to what I 
want and seek now.


These statements are just sloppy reasoning . . . .


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 11:05 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))




Mark Waser [EMAIL PROTECTED] wrote:


What if the utility of the state decreases the longer that you are in it
(something that is *very* true of human

beings)?
If you are aware of the passage of time, then you are not staying in the
same state.


I have to laugh.  So you agree that all your arguments don't apply to
anything that is aware of the passage of time?  That makes them really
useful, doesn't it.


No, the state of ultimate bliss that you, I, and all other rational, goal 
seeking agents seek is a mental state in which nothing perceptible 
happens. Without thought or sensation, you would be unaware of the passage 
of time, or of anything else. If you are aware of time then you are either 
not in this state yet, or are leaving it.


You may say that is not what you want, but only because you are unaware of 
the possibilities of reprogramming your brain. It is like being opposed to 
drugs or wireheading. Once you experience it, you can't resist.


-- Matt Mahoney, [EMAIL PROTECTED]










Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Abram Demski
Mark,

I second that!

Matt,

This is like my imaginary robot that rewires its video feed to be
nothing but tan, to stimulate the pleasure drive that humans put there
to make it like humans better.

If we have any external goals at all, the state of bliss you refer to
prevents us from achieving them. Knowing this, we do not want to enter
that state.

--Abram Demski

On Thu, Aug 28, 2008 at 9:18 AM, Mark Waser [EMAIL PROTECTED] wrote:
 No, the state of ultimate bliss that you, I, and all other rational, goal
 seeking agents seek

 Your second statement copied below not withstanding, I *don't* seek ultimate
 bliss.

 You may say that is not what you want, but only because you are unaware of
 the possibilities of reprogramming your brain. It is like being opposed to
 drugs or wireheading. Once you experience it, you can't resist.

 It is not what I want *NOW*.  It may be that once my brain has been altered
 by experiencing it, I may want it *THEN* but that has no relevance to what I
 want and seek now.

 These statements are just sloppy reasoning . . . .


 - Original Message - From: Matt Mahoney [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, August 27, 2008 11:05 PM
 Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was
 Re: [agi] The Necessity of Embodiment))


 Mark Waser [EMAIL PROTECTED] wrote:

 What if the utility of the state decreases the longer that you are in
 it
 (something that is *very* true of human

 beings)?
 If you are aware of the passage of time, then you are not staying in the
 same state.

 I have to laugh.  So you agree that all your arguments don't apply to
 anything that is aware of the passage of time?  That makes them really
 useful, doesn't it.

 No, the state of ultimate bliss that you, I, and all other rational, goal
 seeking agents seek is a mental state in which nothing perceptible happens.
 Without thought or sensation, you would be unaware of the passage of time,
 or of anything else. If you are aware of time then you are either not in
 this state yet, or are leaving it.

 You may say that is not what you want, but only because you are unaware of
 the possibilities of reprogramming your brain. It is like being opposed to
 drugs or wireheading. Once you experience it, you can't resist.

 -- Matt Mahoney, [EMAIL PROTECTED]












Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser

Also, I should mention that the whole construction becomes irrelevant
if we can logically describe the goal ahead of time. With the make
humans happy example, something like my construction would be useful
if we need to AI to *learn* what a human is and what happy is. (We
then set up the pleasure in a way that would help the AI attach
goodness to the right things.) If we are able to write out the
definitions ahead of time, we can directly specify what goodness is
instead. But, I think it is unrealistic to take that approach, since
the definitions would be large and difficult


:-)  I strongly disagree with you.  Why do you believe that having a new AI 
learn large and difficult definitions is going to be easier and safer than 
specifying them (assuming that the specifications can be grounded in the 
AI's terms)?


I also disagree that the definitions are going to be as large as people 
believe them to be . . . .


Let's take the Mandelbrot set as an example.  It is perfectly specified by 
one *very* small formula.  Yet, if you don't know that formula, you could 
spend many lifetimes characterizing it (particularly if you're trying to 
do it from multiple blurred and shifted images :-).
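
(For anyone unfamiliar with the example: the "one *very* small formula" is the iteration z -> z^2 + c starting from z = 0; a point c belongs to the Mandelbrot set iff that orbit stays bounded. A minimal membership test:)

def in_mandelbrot(c, max_iter=1000):
    """Crude membership test: treat |z| > 2 as divergence."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(complex(-1, 0)), in_mandelbrot(complex(1, 0)))  # True False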


The true problem is that humans can't (yet) agree on what goodness is -- and 
then they get lost arguing over detailed cases instead of focusing on the 
core.


Defining the core of goodness/morality and developing a system to 
determine which actions are good and which are not is a project that 
I've been working on for quite some time, and I *think* I'm making rather 
good headway.



- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 9:57 AM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Hi mark,

I think the miscommunication is relatively simple...

On Wed, Aug 27, 2008 at 10:14 PM, Mark Waser [EMAIL PROTECTED] wrote:

Hi,

  I think that I'm missing some of your points . . . .


Whatever good is, it cannot be something directly
observable, or the AI will just wirehead itself (assuming it gets
intelligent enough to do so, of course).


I don't understand this unless you mean by directly observable that the
definition is observable and changeable.  If I define good as making all
humans happy without modifying them, how would the AI wirehead itself? 
What

am I missing here?


When I say directly observable, I mean observable-by-sensation.
Making all humans happy could not be directly observed unless the AI
had sensors in the pleasure centers of all humans (in which case it
would want to wirehead us). Without modifying them couldn't be
directly observed even then. So, realistically, such a goal needs to
be inferred from sensory data.

Also, I should mention that the whole construction becomes irrelevant
if we can logically describe the goal ahead of time. With the make
humans happy example, something like my construction would be useful
if we need to AI to *learn* what a human is and what happy is. (We
then set up the pleasure in a way that would help the AI attach
goodness to the right things.) If we are able to write out the
definitions ahead of time, we can directly specify what goodness is
instead. But, I think it is unrealistic to take that approach, since
the definitions would be large and difficult




So, the AI needs to have a concept of external goodness, with a weak
probabilistic correlation to its directly observable pleasure.


I agree with the concept of external goodness but why does the 
correlation

between external goodness and it's pleasure have to be low?  Why can't
external goodness directly cause pleasure?  Clearly, it shouldn't believe
that it's pleasure causes external goodness (that would be reversing 
cause

and effect and an obvious logic error).


The correlation needs to be fairly low to allow the concept of good to
eventually split off of the concept of pleasure in the AI mind. The
external goodness can't directly cause pleasure because it isn't
directly detectable. Detection of goodness *through* inference *could*
be taken to cause pleasure; but this wouldn't be much use, because the
AI is already supposed to be maximizing goodness, not pleasure.
Pleasure merely plays the role of offering hints about what things
in the world might be good.

Actually, I think the proper probabilistic construction might be a bit
different than simply a weak correlation... for one thing, the
probability that goodness causes pleasure shouldn't be set ahead of
time. I'm thinking that likelihood would be more appropriate than
probability... so that it is as if the AI is born with some evidence
for the correlation that it cannot remember, but uses in reasoning (if
you are familiar with the idea of virtual evidence that is what I am
talking about).
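
(For readers unfamiliar with the term: "virtual evidence," in the standard Bayesian sense, means folding in a likelihood ratio as if some unrecallable evidence had been observed, rather than fixing a probability outright. A minimal sketch with made-up numbers:)

def update(prior, likelihood_ratio):
    """Posterior for a binary hypothesis: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Hypothesis: "pleasure indicates goodness".  Start uncommitted ...
belief = 0.5
# ... built-in virtual evidence nudges the correlation up without fixing it:
belief = update(belief, likelihood_ratio=3.0)   # 0.75
# ... and later real observations can push it back down (or further up):
belief = update(belief, likelihood_ratio=0.5)   # 0.6
print(belief)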



  Mark

P.S.  I notice that several others answered your wirehead query so I 
won't


Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Abram Demski
Mark,

Actually I am sympathetic with this idea. I do think good can be
defined. And, I think it can be a simple definition. However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds. So,
better to put such ideas in only as probabilistic correlations (or
virtual evidence), and let the system change its beliefs based on
accumulated evidence. I do not think this is overly risky, because
whatever the system comes to believe, its high-level goal will tend to
create normalizing subgoals that will regularize its behavior.

I'll stick to my point about defining "make humans happy" being hard,
though. Especially with the restriction "without modifying them" that
you used.

On Thu, Aug 28, 2008 at 12:38 PM, Mark Waser [EMAIL PROTECTED] wrote:
 Also, I should mention that the whole construction becomes irrelevant
 if we can logically describe the goal ahead of time. With the make
 humans happy example, something like my construction would be useful
 if we need to AI to *learn* what a human is and what happy is. (We
 then set up the pleasure in a way that would help the AI attach
 goodness to the right things.) If we are able to write out the
 definitions ahead of time, we can directly specify what goodness is
 instead. But, I think it is unrealistic to take that approach, since
 the definitions would be large and difficult

 :-)  I strongly disagree with you.  Why do you believe that having a new AI
 learn large and difficult definitions is going to be easier and safer than
 specifying them (assuming that the specifications can be grounded in the
 AI's terms)?

 I also disagree that the definitions are going to be as large as people
 believe them to be . . . .

 Let's take the Mandelbroit set as an example.  It is perfectly specified by
 one *very* small formula.  Yet, if you don't know that formula, you could
 spend many lifetimes characterizing it (particularly if you're trying to
 doing it from multiple blurred and shifted  images :-).

 The true problem is that humans can't (yet) agree on what goodness is -- and
 then they get lost arguing over detailed cases instead of focusing on the
 core.

 The core definition of goodness/morality and developing a system to
 determine what actions are good and what actions are not is a project that
 I've been working on for quite some time and I *think* I'm making rather
 good headway.


 - Original Message - From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Thursday, August 28, 2008 9:57 AM
 Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
 AGI (was Re: [agi] The Necessity of Embodiment))


 Hi mark,

 I think the miscommunication is relatively simple...

 On Wed, Aug 27, 2008 at 10:14 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Hi,

  I think that I'm missing some of your points . . . .

 Whatever good is, it cannot be something directly
 observable, or the AI will just wirehead itself (assuming it gets
 intelligent enough to do so, of course).

 I don't understand this unless you mean by directly observable that the
 definition is observable and changeable.  If I define good as making all
 humans happy without modifying them, how would the AI wirehead itself?
 What
 am I missing here?

 When I say directly observable, I mean observable-by-sensation.
 Making all humans happy could not be directly observed unless the AI
 had sensors in the pleasure centers of all humans (in which case it
 would want to wirehead us). Without modifying them couldn't be
 directly observed even then. So, realistically, such a goal needs to
 be inferred from sensory data.

 Also, I should mention that the whole construction becomes irrelevant
 if we can logically describe the goal ahead of time. With the make
 humans happy example, something like my construction would be useful
 if we need to AI to *learn* what a human is and what happy is. (We
 then set up the pleasure in a way that would help the AI attach
 goodness to the right things.) If we are able to write out the
 definitions ahead of time, we can directly specify what goodness is
 instead. But, I think it is unrealistic to take that approach, since
 the definitions would be large and difficult


 So, the AI needs to have a concept of external goodness, with a weak
 probabilistic correlation to its directly observable pleasure.

 I agree with the concept of external goodness but why does the
 correlation
 between external goodness and it's pleasure have to be low?  Why can't
 external goodness directly cause pleasure?  Clearly, it shouldn't believe
 that it's pleasure causes external goodness (that would be reversing
 cause
 and effect and an obvious logic error).

 The correlation needs to be fairly low to allow the concept of good to
 eventually split off of the concept of pleasure in the AI mind. The
 external goodness can't directly cause pleasure because it isn't
 directly detectable. Detection 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser

However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds.


Why not wait until a theory is derived before making this decision?

Wouldn't such a theory be a good starting point, at least?


better to put such ideas in only as probabilistic correlations (or
virtual evidence), and let the system change its beliefs based on
accumulated evidence. I do not think this is overly risky, because
whatever the system comes to believe, its high-level goal will tend to
create normalizing subgoals that will regularize its behavior.


You're getting into implementation here but I will make a couple of personal 
belief statements:


1.  Probabilistic correlations are much, *much* more problematical than most 
people are even willing to think about.  They work well with very simple 
examples but they do not scale well at all.  Particularly problematic for 
such correlations is the fact that ethical concepts are generally made up of 
*many* interwoven parts and are very fuzzy.  The church of Bayes does not 
cut it for any work where the language/terms/concepts are not perfectly 
crisp, clear, and logically correct.
2.  Statements like "its high-level goal will tend to create normalizing 
subgoals that will regularize its behavior" sweep *a lot* of detail under 
the rug.  It's possible that it is true.  I think that it is much more 
probable that it is very frequently not true.  Unless you do *a lot* of 
specification, I'm afraid that expecting this to be true is *very* risky.



I'll stick to my point about defining make humans happy being hard,
though. Especially with the restriction without modifying them that
you used.


I think that defining make humans happy is impossible -- but that's OK 
because I think that it's a really bad goal to try to implement.


All I need to do is to define learn, harm, and help.  Help could be defined 
as anything which is agreed to with informed consent by the affected subject 
both before and after the fact.  Yes, that doesn't cover all actions but 
that just means that the AI doesn't necessarily have a strong inclination 
towards those actions.  Harm could be defined as anything which is disagreed 
with (or is expected to be disagreed with) by the affected subject either 
before or after the fact.  Friendliness then turns into something like 
asking permission.  Yes, the Friendly entity won't save you in many 
circumstances, but it's not likely to kill you either.
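
(To make the help/harm definitions above concrete, here is a toy sketch. It assumes consent can be read off as one of three values; how an AI would actually model "informed consent" is left completely open, and the function name is invented.)

    # Toy encoding of the proposed definitions:
    #   help = informed consent from the affected subject both before and after the fact
    #   harm = disagreement by the affected subject either before or after the fact
    # Anything else is simply "not covered", matching the caveat above.
    def classify_action(before, after):
        """before/after are each 'agree', 'disagree', or 'no opinion'."""
        if before == "agree" and after == "agree":
            return "help"
        if before == "disagree" or after == "disagree":
            return "harm"
        return "not covered"

    print(classify_action("agree", "agree"))        # help
    print(classify_action("agree", "disagree"))     # harm
    print(classify_action("no opinion", "agree"))   # not covered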


 Of course, I could also come up with the counter-argument to my own 
thesis that the AI will never do anything because there will always be 
someone who objects to the AI doing *anything* to change the world.-- but 
that's just the absurdity and self-defeating arguments that I expect from 
many of the list denizens that can't be defended against except by 
allocating far more time than it's worth.




- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 1:59 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Mark,

Actually I am sympathetic with this idea. I do think good can be
defined. And, I think it can be a simple definition. However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds. So,
better to put such ideas in only as probabilistic correlations (or
virtual evidence), and let the system change its beliefs based on
accumulated evidence. I do not think this is overly risky, because
whatever the system comes to believe, its high-level goal will tend to
create normalizing subgoals that will regularize its behavior.

I'll stick to my point about defining make humans happy being hard,
though. Especially with the restriction without modifying them that
you used.

On Thu, Aug 28, 2008 at 12:38 PM, Mark Waser [EMAIL PROTECTED] wrote:

Also, I should mention that the whole construction becomes irrelevant
if we can logically describe the goal ahead of time. With the make
humans happy example, something like my construction would be useful
 if we need the AI to *learn* what a human is and what "happy" is. (We
then set up the pleasure in a way that would help the AI attach
goodness to the right things.) If we are able to write out the
definitions ahead of time, we can directly specify what goodness is
instead. But, I think it is unrealistic to take that approach, since
the definitions would be large and difficult


:-)  I strongly disagree with you.  Why do you believe that having a new AI
learn large and difficult definitions is going to be easier and safer than
specifying them (assuming that the specifications can be grounded in the
AI's terms)?

I also disagree that the definitions are going to be as large as people
believe them to be . . . .

Let's take the Mandelbrot set as an example.  It is perfectly 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Abram Demski
Mark,

I still think your definitions sound difficult to implement,
although not nearly as hard as "make humans happy without modifying
them". How would you define consent? You'd need a definition of
"decision-making entity", right?

Personally, if I were to take the approach of a preprogrammed ethics,
I would define good in pseudo-evolutionary terms: a pattern/entity is
good if it has high survival value in the long term. Patterns that are
self-sustaining on their own are thus considered good, but patterns
that help sustain other patterns would be too, because they are a
high-utility part of a larger whole.
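
(A rough sketch of scoring "good" that way, purely to pin the idea down: a pattern's goodness is its own long-term survival value plus what it contributes to sustaining other patterns. survival_prob() and sustains() stand in for models the system would have to learn; they are not defined here.)

    # Sketch: goodness as long-term survival value, counting both a pattern's own
    # persistence and its contribution to sustaining other patterns.
    def goodness(entity, others, survival_prob, sustains, horizon=100):
        own = survival_prob(entity, horizon)
        helped = sum(sustains(entity, o) * survival_prob(o, horizon) for o in others)
        return own + helped

    # Toy usage with made-up stand-in models:
    score = goodness("pattern_x", ["pattern_a", "pattern_b"],
                     survival_prob=lambda e, h: 0.9,
                     sustains=lambda e, o: 0.5)
    print(score)   # 0.9 + 2 * (0.5 * 0.9) = 1.8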

Actually, that idea is what made me assert that any goal produces
normalizing subgoals. Survivability helps achieve any goal, as long as
it isn't a time-bounded goal (finishing a set task).

--Abram


Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser

Personally, if I were to take the approach of a preprogrammed ethics,
I would define good in pseudo-evolutionary terms: a pattern/entity is
good if it has high survival value in the long term. Patterns that are
self-sustaining on their own are thus considered good, but patterns
that help sustain other patterns would be too, because they are a
high-utility part of a larger whole.


Actually, I *do* define good and ethics not only in evolutionary terms but 
as being driven by evolution.  Unlike most people, I believe that ethics is 
*entirely* driven by what is best evolutionarily while not believing at all 
in "red in tooth and claw".  I can give you a reading list that shows that 
the latter view is horribly outdated among people who keep up with the 
research rather than just rehashing tired old ideas.



Actually, that idea is what made me assert that any goal produces
normalizing subgoals. Survivability helps achieve any goal, as long as
it isn't a time-bounded goal (finishing a set task).


Ah, I'm starting to get an idea of what you mean by normalizing subgoals 
. . . .   Yes, absolutely, except that I contend that there is exactly one 
normalizing subgoal (though some might phrase it as two) that is normally 
common to virtually every goal (except in very extreme/unusual 
circumstances).




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Terren Suydam

--- On Thu, 8/28/08, Mark Waser [EMAIL PROTECTED] wrote:
 Actually, I *do* define good and ethics not only in evolutionary terms but
 as being driven by evolution.  Unlike most people, I believe that ethics is
 *entirely* driven by what is best evolutionarily while not believing at all
 in "red in tooth and claw".  I can give you a reading list that shows that
 the latter view is horribly outdated among people who keep up with the
 research rather than just rehashing tired old ideas.

I think it's a stretch to derive ethical ideas from what you refer to as "best 
evolutionarily".  Parasites are pretty freaking successful, from an 
evolutionary point of view, but nobody would say parasitism is ethical.

Terren


  




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Matt Mahoney
Valentina Poletti [EMAIL PROTECTED] wrote:
 Got ya, thanks for the clarification. That brings up another question. Why do 
 we want to make an AGI?

I'm glad somebody is finally asking the right question, instead of skipping 
over the specification to the design phase. It would avoid a lot of 
philosophical discussions that result from people having different ideas of 
what AGI should do.

AGI could replace all human labor, worth about US $2 to $5 quadrillion over the 
next 30 years. We should expect the cost to be of this magnitude, given that 
having it sooner is better than waiting.

I think AGI will be immensely complex, on the order of 10^18 bits, 
decentralized, competitive, with distributed ownership, like today's internet 
but smarter. It will converse with you fluently but know too much to pass the 
Turing test. We will be totally dependent on it.

-- Matt Mahoney, [EMAIL PROTECTED]




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Matt Mahoney
Nobody wants to enter a mental state where thinking and awareness are 
unpleasant, at least when I describe it that way. My point is that having 
everything you want is not the utopia that many people think it is. But it is 
where we are headed.

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 9:18:05 AM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))

 No, the state of ultimate bliss that you, I, and all other rational, goal 
 seeking agents seek

Your second statement copied below notwithstanding, I *don't* seek ultimate 
bliss.

 You may say that is not what you want, but only because you are unaware of 
 the possibilities of reprogramming your brain. It is like being opposed to 
 drugs or wireheading. Once you experience it, you can't resist.

It is not what I want *NOW*.  It may be that once my brain has been altered 
by experiencing it, I may want it *THEN* but that has no relevance to what I 
want and seek now.

These statements are just sloppy reasoning . . . .




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser
   Parasites are very successful at surviving but they don't have other 
goals.  Try being parasitic *and* succeeding at goals other than survival. 
I think you'll find that your parasitic ways will rapidly get in the way of 
your other goals the second that you need help (or even non-interference) 
from others.




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Terren Suydam

Hi Mark,

Obviously you need to complicate your original statement "I believe that 
ethics is *entirely* driven by what is best evolutionarily..." in such a way 
that we don't derive ethics from parasites. You did that by invoking social 
behavior - parasites are not social beings. 

So from there you need to identify how evolution operates in social groups in 
such a way that you can derive ethics. As Matt alluded to before, would you 
agree that ethics is the result of group selection? In other words, that human 
collectives with certain taboos make the group as a whole more likely to 
persist?

Terren




AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Matt Mahoney
An AGI will not design its goals. It is up to humans to define the goals of an 
AGI, so that it will do what we want it to do.

Unfortunately, this is a problem. We may or may not be successful in 
programming the goals of AGI to satisfy human goals. If we are not successful, 
then AGI will be useless at best and dangerous at worst. If we are successful, 
then we are doomed because human goals evolved in a primitive environment to 
maximize reproductive success and not in an environment where advanced 
technology can give us whatever we want. AGI will allow us to connect our 
brains to simulated worlds with magic genies, or worse, allow us to directly 
reprogram our brains to alter our memories, goals, and thought processes. All 
rational goal-seeking agents must have a mental state of maximum utility where 
any thought or perception would be unpleasant because it would result in a 
different state.

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Valentina Poletti [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, August 26, 2008 11:34:56 AM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)


Thanks very much for the info. I found those articles very interesting. 
Actually though this is not quite what I had in mind with the term 
information-theoretic approach. I wasn't very specific, my bad. What I am 
looking for is a a theory behind the actual R itself. These approaches 
(correnct me if I'm wrong) give an r-function for granted and work from that. 
In real life that is not the case though. What I'm looking for is how the AGI 
will create that function. Because the AGI is created by humans, some sort of 
direction will be given by the humans creating them. What kind of direction, in 
mathematical terms, is my question. In other words I'm looking for a way to 
mathematically define how the AGI will mathematically define its goals.
 
Valentina

 
On 8/23/08, Matt Mahoney [EMAIL PROTECTED] wrote: 
Valentina Poletti [EMAIL PROTECTED] wrote:
 I was wondering why no-one had brought up the information-theoretic aspect of 
 this yet.

It has been studied. For example, Hutter proved that the optimal strategy of a 
rational goal seeking agent in an unknown computable environment is AIXI: to 
guess that the environment is simulated by the shortest program consistent with 
observation so far [1]. Legg and Hutter also propose as a measure of universal 
intelligence the expected reward over a Solomonoff distribution of environments 
[2].
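
(For reference, the measure proposed in [2] is usually written as the simplicity-weighted sum below, with E the class of computable environments, K(\mu) the Kolmogorov complexity of environment \mu, and V_\mu^\pi the expected total reward of agent \pi in \mu; this simply restates the cited formula rather than adding anything new.)

    \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}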

These have profound impacts on AGI design. First, AIXI is (provably) not 
computable, which means there is no easy shortcut to AGI. Second, universal 
intelligence is not computable because it requires testing in an infinite 
number of environments. Since there is no other well accepted test of 
intelligence above human level, it casts doubt on the main premise of the 
singularity: that if humans can create agents with greater than human 
intelligence, then so can they.

Prediction is central to intelligence, as I argue in [3]. Legg proved in [4] 
that there is no elegant theory of prediction. Predicting all environments up 
to a given level of Kolmogorov complexity requires a predictor with at least 
the same level of complexity. Furthermore, above a small level of complexity, 
such predictors cannot be proven because of Godel incompleteness. Prediction 
must therefore be an experimental science.

There is currently no software or mathematical model of non-evolutionary 
recursive self improvement, even for very restricted or simple definitions of 
intelligence. Without a model you don't have friendly AI; you have accelerated 
evolution with AIs competing for resources.

References

1. Hutter, Marcus (2003), A Gentle Introduction to The Universal Algorithmic 
Agent {AIXI},
in Artificial General Intelligence, B. Goertzel and C. Pennachin eds., 
Springer. http://www.idsia.ch/~marcus/ai/aixigentle.htm

2. Legg, Shane, and Marcus Hutter (2006),
A Formal Measure of Machine Intelligence, Proc. Annual machine
learning conference of Belgium and The Netherlands (Benelearn-2006).
Ghent, 2006.  http://www.vetta.org/documents/ui_benelearn.pdf

3. http://cs.fit.edu/~mmahoney/compression/rationale.html

4. Legg, Shane, (2006), Is There an Elegant Universal Theory of Prediction?,
Technical Report IDSIA-12-06, IDSIA / USI-SUPSI,
Dalle Molle Institute for Artificial Intelligence, Galleria 2, 6928 Manno, 
Switzerland.
http://www.vetta.org/documents/IDSIA-12-06-1.pdf

-- Matt Mahoney, [EMAIL PROTECTED]






Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser
 All rational goal-seeking agents must have a mental state of maximum utility 
 where any thought or perception would be unpleasant because it would result 
 in a different state.

I'd love to see you attempt to prove the above statement.

What if there are several states with utility equal to or very close to the 
maximum?  What if the utility of the state decreases the longer that you are in 
it (something that is *very* true of human beings)?  What if uniqueness raises 
the utility of any new state sufficient that there will always be states that 
are better than the current state (since experiencing uniqueness normally 
improves fitness through learning, etc)?
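
(A toy illustration of the objection, with invented numbers: once utility decays with time-in-state and unvisited states carry a novelty bonus, the highest-utility state keeps changing, so there is no single state a rational agent would freeze in.)

    # Base utilities plus a one-time novelty bonus and a staleness penalty.
    base = {"A": 1.00, "B": 0.99, "C": 0.98}
    visits = {s: 0 for s in base}

    def utility(state, time_in_state):
        novelty = 0.05 if visits[state] == 0 else 0.0
        staleness = 0.02 * time_in_state     # utility drops the longer you stay put
        return base[state] + novelty - staleness

    current, time_in = "A", 0
    for step in range(6):
        best = max(base, key=lambda s: utility(s, time_in if s == current else 0))
        if best != current:
            visits[current] += 1
            current, time_in = best, 0
        else:
            time_in += 1
        print(step, current)   # A, B, B, C, C, C -- and it moves on again at step 6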


Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser
 It is up to humans to define the goals of an AGI, so that it will do what we 
 want it to do.

Why must we define the goals of an AGI?  What would be wrong with setting it 
off with strong incentives to be helpful, even stronger incentives to not be 
harmful, and letting it chart its own course based upon the vagaries of the world? 
 Let its only hard-coded goal be to keep its satisfaction above a certain 
level with helpful actions increasing satisfaction, harmful actions heavily 
decreasing satisfaction; learning increasing satisfaction, and satisfaction 
naturally decaying over time so as to promote action . . . .
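
(A toy version of that satisfaction mechanism, with every constant invented: satisfaction decays each step and the agent is supposed to act to keep it above a floor, so doing nothing is not a stable policy.)

    # Toy satisfaction dynamics for the proposal above (all constants invented).
    satisfaction = 1.0
    FLOOR, DECAY = 0.5, 0.05
    EFFECTS = {"help": +0.3, "learn": +0.1, "harm": -1.0, "idle": 0.0}

    def step(action):
        global satisfaction
        satisfaction = satisfaction * (1 - DECAY) + EFFECTS[action]
        return satisfaction

    for t in range(15):          # decay alone drives satisfaction below the floor,
        step("idle")             # which is what forces the agent to help or learn
    print(round(satisfaction, 3), satisfaction > FLOOR)   # 0.463 False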

Seems to me that humans are pretty much coded that way (with evolution's 
additional incentives of self-defense and procreation).  The real trick of the 
matter is defining helpful and harmful clearly but everyone is still mired five 
steps before that.




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Abram Demski
Mark,

I agree that we are mired 5 steps before that; after all, AGI is not
solved yet, and it is awfully hard to design prefab concepts in a
knowledge representation we know nothing about!

But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only minimally with these?

--Abram


Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser

But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only minimally with these?


Actually, my description gave the AGI four goals: be helpful, don't be 
harmful, learn, and keep moving.


Learn, all by itself, is going to generate an infinite number of subgoals. 
Learning subgoals will be picked based upon what is most likely to learn the 
most while not being harmful.


(and, by the way, be helpful and learn should both generate a 
self-protection sub-goal  in short order with procreation following 
immediately behind)


Arguably, be helpful would generate all three of the other goals but 
learning and not being harmful without being helpful is a *much* better 
goal-set for a novice AI to prevent accidents when the AI thinks it is 
being helpful.  In fact, I've been tempted at times to entirely drop the be 
helpful since the other two will eventually generate it with a lessened 
probability of trying-to-be-helpful accidents.


"Don't be harmful" by itself will just turn the AI off.

The trick is that there needs to be a balance between goals.  Any single 
goal intelligence is likely to be lethal even if that goal is to help 
humanity.


Learn, do no harm, help.  Can anyone come up with a better set of goals? 
(and, once again, note that learn does *not* override the other two -- there 
is meant to be a balance between the three).
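
(One possible reading of that balance, with invented weights: "do no harm" acts as a near-veto on candidate actions, and the survivors are ranked by a weighted mix of expected learning and expected helpfulness. Nothing here is claimed as the intended mechanism; it is only a sketch of what "balance between the three" could mean operationally.)

    # Illustrative balance of the three goals: harm is (almost) a veto,
    # while learn and help trade off through weights.
    W_LEARN, W_HELP = 0.5, 0.5

    def choose(candidates):
        """candidates: dicts with estimated 'harm', 'learn', 'help' scores in [0, 1]."""
        safe = [c for c in candidates if c["harm"] < 0.01]     # do no harm first
        if not safe:
            return None                                        # refuse to act
        return max(safe, key=lambda c: W_LEARN * c["learn"] + W_HELP * c["help"])

    print(choose([
        {"name": "experiment", "harm": 0.0, "learn": 0.9, "help": 0.1},
        {"name": "assist",     "harm": 0.0, "learn": 0.2, "help": 0.9},
        {"name": "shortcut",   "harm": 0.3, "learn": 0.9, "help": 0.9},
    ]))   # picks "assist"; "shortcut" is vetoed despite the highest raw score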


- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 11:52 AM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Mark,

I agree that we are mired 5 steps before that; after all, AGI is not
solved yet, and it is awfully hard to design prefab concepts in a
knowledge representation we know nothing about!

But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only minimally with these?

--Abram

On Wed, Aug 27, 2008 at 11:09 AM, Mark Waser [EMAIL PROTECTED] wrote:
It is up to humans to define the goals of an AGI, so that it will do 
what

we want it to do.


Why must we define the goals of an AGI?  What would be wrong with setting 
it
off with strong incentives to be helpful, even stronger incentives to not 
be

harmful, and let it chart it's own course based upon the vagaries of the
world?  Let it's only hard-coded goal be to keep it's satisfaction above 
a
certain level with helpful actions increasing satisfaction, harmful 
actions

heavily decreasing satisfaction; learning increasing satisfaction, and
satisfaction naturally decaying over time so as to promote action . . . .

Seems to me that humans are pretty much coded that way (with evolution's
additional incentives of self-defense and procreation).  The real trick 
of

the matter is defining helpful and harmful clearly but everyone is still
mired five steps before that.


- Original Message -
From: Matt Mahoney
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 10:52 AM
Subject: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re:

[agi] The Necessity of Embodiment))
An AGI will not design its goals. It is up to humans to define the goals 
of

an AGI, so that it will do what we want it to do.

Unfortunately, this is a problem. We may or may not be successful in
programming the goals of AGI to satisfy human goals. If we are not
successful, then AGI will be useless at best and dangerous at worst. If 
we

are successful, then we are doomed because human goals evolved in a
primitive environment to maximize reproductive success and not in an
environment where advanced technology can give us whatever we want. AGI 
will

allow us to connect our brains to simulated worlds with magic genies, or
worse, allow us to directly reprogram our brains to alter our memories,
goals, and thought processes. All rational goal-seeking agents must have 
a

mental state of maximum utility where any thought or perception would be
unpleasant because it would result in a different state.

-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Valentina Poletti [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, August 26, 2008 11:34:56 AM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The
Necessity of Embodiment)

Thanks very much for the info. I found those articles very interesting.
Actually though this is not quite what I had in mind with the term
information-theoretic approach. I wasn't very specific, my bad. What I am
looking for is a a theory behind the actual R itself. These approaches
(correnct me if I'm wrong) give an r-function for granted and work from
that. In real life that is not the case 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Vladimir Nesov
On Wed, Aug 27, 2008 at 8:32 PM, Mark Waser [EMAIL PROTECTED] wrote:
 But, how does your description not correspond to giving the AGI the
 goals of being helpful and not harmful? In other words, what more does
 it do than simply try for these? Does it pick goals randomly such that
 they conflict only minimally with these?

 Actually, my description gave the AGI four goals: be helpful, don't be
 harmful, learn, and keep moving.

 Learn, all by itself, is going to generate an infinite number of subgoals.
 Learning subgoals will be picked based upon what is most likely to learn the
 most while not being harmful.

 (and, by the way, be helpful and learn should both generate a
 self-protection sub-goal  in short order with procreation following
 immediately behind)

 Arguably, be helpful would generate all three of the other goals but
 learning and not being harmful without being helpful is a *much* better
 goal-set for a novice AI to prevent accidents when the AI thinks it is
 being helpful.  In fact, I've been tempted at times to entirely drop the be
 helpful since the other two will eventually generate it with a lessened
 probability of trying-to-be-helpful accidents.

 "Don't be harmful" by itself will just turn the AI off.

 The trick is that there needs to be a balance between goals.  Any single
 goal intelligence is likely to be lethal even if that goal is to help
 humanity.

 Learn, do no harm, help.  Can anyone come up with a better set of goals?
 (and, once again, note that learn does *not* override the other two -- there
 is meant to be a balance between the three).


And AGI will just read the command, help, 'h'-'e'-'l'-'p', and will
know exactly what to do, and will be convinced to do it. To implement
this simple goal, you need to somehow communicate its functional
structure into the AGI; this won't just magically happen. Don't talk
about AGI as if it were a human; think about how exactly to implement
what you want. Today's rant on Overcoming Bias applies fully to such
suggestions ( http://www.overcomingbias.com/2008/08/dreams-of-ai-de.html
).


-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Abram Demski
Mark,

OK, I take up the challenge. Here is a different set of goal-axioms:

-Good is a property of some entities.
-Maximize good in the world.
-A more-good entity is usually more likely to cause goodness than a
less-good entity.
-A more-good entity is often more likely to cause pleasure than a
less-good entity.
-Self is the entity that causes my actions.
-An entity with properties similar to self is more likely to be good.

Pleasure, unlike goodness, is directly observable. It comes from many
sources. For example:
-Learning is pleasurable.
-A full battery is pleasurable (if relevant).
-Perhaps the color of human skin is pleasurable in and of itself.
(More specifically, all skin colors of any existing race.)
-Perhaps also the sound of a human voice is pleasurable.
-Other things may be pleasurable depending on what we initially want
the AI to enjoy doing.

So, the definition of good is highly probabilistic, and the system's
inferences about goodness will depend on its experiences; but pleasure
can be directly observed, and the pleasure-mechanisms remain fixed.
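
(A small sketch of the split described above, assuming nothing about the underlying architecture: pleasure is computed by a fixed, hard-wired mechanism, while "goodness" is only a revisable estimate seeded by the probabilistic axioms. Every feature name and weight below is invented.)

    # Pleasure: fixed, directly observable mechanism.
    PLEASURE_SOURCES = {"learned_something": 0.2, "battery_full": 0.1,
                        "heard_human_voice": 0.05}

    def pleasure(events):
        return sum(PLEASURE_SOURCES.get(e, 0.0) for e in events)

    # Goodness: a probabilistic estimate that experience can revise, seeded by
    # the axioms (similarity to self, observed causing of pleasure/goodness).
    def estimated_goodness(entity):
        score = 0.5
        score += 0.2 * entity.get("similarity_to_self", 0.0)
        score += 0.2 * entity.get("observed_caused_pleasure", 0.0)
        score += 0.3 * entity.get("observed_caused_goodness", 0.0)
        return min(score, 1.0)

    print(round(pleasure(["learned_something", "heard_human_voice"]), 2))   # 0.25
    print(round(estimated_goodness({"similarity_to_self": 0.8,
                                    "observed_caused_pleasure": 0.5}), 2))  # 0.76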


Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser

Hi,

   A number of problems unfortunately . . . .


-Learning is pleasurable.


. . . . for humans.  We can choose whether to make it so for machines or 
not.  Doing so would be equivalent to setting a goal of learning.



-Other things may be pleasurable depending on what we initially want
the AI to enjoy doing.


   See . . . all you've done here is pushed goal-setting to 
pleasure-setting . . . .


= = = = =

   Further, if you judge goodness by pleasure, you'll probably create an 
AGI whose shortest path-to-goal is to wirehead the universe (which I 
consider to be a seriously suboptimal situation - YMMV).





- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 2:25 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Mark,

OK, I take up the challenge. Here is a different set of goal-axioms:

-Good is a property of some entities.
-Maximize good in the world.
-A more-good entity is usually more likely to cause goodness than a
less-good entity.
-A more-good entity is often more likely to cause pleasure than a
less-good entity.
-Self is the entity that causes my actions.
-An entity with properties similar to self is more likely to be good.

Pleasure, unlike goodness, is directly observable. It comes from many
sources. For example:
-Learning is pleasurable.
-A full battery is pleasurable (if relevant).
-Perhaps the color of human skin is pleasurable in and of itself.
(More specifically, all skin colors of any existing race.)
-Perhaps also the sound of a human voice is pleasurable.
-Other things may be pleasurable depending on what we initially want
the AI to enjoy doing.

So, the definition of good is highly probabilistic, and the system's
inferences about goodness will depend on its experiences; but pleasure
can be directly observed, and the pleasure-mechanisms remain fixed.
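
To make that split concrete, here is a rough sketch.  All of it is an
assumption for illustration -- the class name, the priors, the 0.1 correlation
weight, and the toy entities; the point is only that pleasure is observed
directly while goodness is merely an estimate that pleasure nudges weakly.

# Rough sketch of the goodness/pleasure split described above. "Goodness" is
# never observed directly; the agent keeps only a probabilistic estimate of it
# per entity, and directly observed pleasure nudges that estimate only weakly.

class GoodnessModel:
    CORRELATION = 0.1   # weak link: pleasure is evidence about goodness, not proof

    def __init__(self):
        self.goodness = {}   # entity -> current estimate of its goodness, in [0, 1]

    def prior(self, similar_to_self=False):
        # axiom: an entity with properties similar to self is more likely to be good
        return 0.6 if similar_to_self else 0.5

    def observe_pleasure(self, entity, pleasure):
        # pleasure (0..1) is directly observable: learning, a full battery, ...
        g = self.goodness.get(entity, self.prior())
        # weak probabilistic update, so pleasure alone can never pin goodness at 1.0
        self.goodness[entity] = (1 - self.CORRELATION) * g + self.CORRELATION * pleasure

    def best_target(self, entities):
        # "maximize good in the world": act on estimated goodness, not on raw pleasure
        return max(entities, key=lambda e: self.goodness.get(e, self.prior()))

m = GoodnessModel()
for _ in range(3):
    m.observe_pleasure("teacher", 0.9)   # repeated mildly pleasant interactions
m.observe_pleasure("wire", 1.0)          # one burst of maximal pleasure
print(m.goodness)                        # the wire's estimate barely moves above the prior
print(m.best_target(["teacher", "wire", "stranger"]))   # -> teacher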

On Wed, Aug 27, 2008 at 12:32 PM, Mark Waser [EMAIL PROTECTED] wrote:

But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only minimally with these?


Actually, my description gave the AGI four goals: be helpful, don't be
harmful, learn, and keep moving.

Learn, all by itself, is going to generate an infinite number of subgoals.
Learning subgoals will be picked based upon what is most likely to learn the
most while not being harmful.

(and, by the way, be helpful and learn should both generate a
self-protection sub-goal in short order with procreation following
immediately behind)

Arguably, be helpful would generate all three of the other goals but
learning and not being harmful without being helpful is a *much* better
goal-set for a novice AI to prevent accidents when the AI thinks it is
being helpful.  In fact, I've been tempted at times to entirely drop the "be
helpful" goal, since the other two will eventually generate it with a lessened
probability of trying-to-be-helpful accidents.

"Don't be harmful" by itself will just turn the AI off.

The trick is that there needs to be a balance between goals.  Any single-goal
intelligence is likely to be lethal even if that goal is to help humanity.

Learn, do no harm, help.  Can anyone come up with a better set of goals?
(and, once again, note that learn does *not* override the other two -- there
is meant to be a balance between the three).

- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 11:52 AM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches 
to

AGI (was Re: [agi] The Necessity of Embodiment))



Mark,

I agree that we are mired 5 steps before that; after all, AGI is not
solved yet, and it is awfully hard to design prefab concepts in a
knowledge representation we know nothing about!

But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only minimally with these?

--Abram

On Wed, Aug 27, 2008 at 11:09 AM, Mark Waser [EMAIL PROTECTED] 
wrote:


It is up to humans to define the goals of an AGI, so that it will do what
we want it to do.

Why must we define the goals of an AGI?  What would be wrong with setting it
off with strong incentives to be helpful, even stronger incentives to not be
harmful, and let it chart its own course based upon the vagaries of the
world?  Let its only hard-coded goal be to keep its satisfaction above a
certain level with helpful actions increasing satisfaction, harmful actions
heavily decreasing satisfaction; learning increasing satisfaction, and
satisfaction naturally decaying over time so as to promote action . . . .


Seems to me that humans are pretty 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Abram Demski
Mark,

The main motivation behind my setup was to avoid the wirehead
scenario. That is why I make the explicit goodness/pleasure
distinction. Whatever good is, it cannot be something directly
observable, or the AI will just wirehead itself (assuming it gets
intelligent enough to do so, of course). But, goodness cannot be
completely unobservable, or the AI will have no idea what it should
do.

So, the AI needs to have a concept of external goodness, with a weak
probabilistic correlation to its directly observable pleasure. That
way, the system will go after pleasant things, but won't be able to
fool itself with things that are maximally pleasant. For example, if
it were to consider rewiring its visual circuits to see only
skin-color, it would not like the idea, because it would know that
such a move would make it less able to maximize goodness in general.
(It would know that seeing only tan does not mean that the entire
world is made of pure goodness.) An AI that was trying to maximize
pleasure would see nothing wrong with self-stimulation of this sort.

So, I think that pushing the problem of goal-setting back to
pleasure-setting is very useful for avoiding certain types of
undesirable behavior.
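
A toy comparison of the two evaluation rules, with numbers invented purely to
illustrate the point:

# Toy comparison (all numbers invented) of the two evaluation rules discussed
# above, applied to a "wirehead" action that rewires the agent's own sensors.
# Each action is scored as (pleasure it yields, predicted change in external
# goodness according to the agent's world model).

actions = {
    "help a human":       (0.6, +0.8),
    "learn something":    (0.5, +0.3),
    "rewire own sensors": (1.0,  0.0),   # maximal pleasure, no effect on the world
}

def pleasure_maximizer(acts):
    return max(acts, key=lambda a: acts[a][0])

def goodness_maximizer(acts):
    # pleasure is only weak evidence; the world model's prediction is what counts
    return max(acts, key=lambda a: acts[a][1])

print(pleasure_maximizer(actions))   # -> "rewire own sensors": it wireheads itself
print(goodness_maximizer(actions))   # -> "help a human"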

By the way, where does this term wireheading come from? I assume
from context that it simply means self-stimulation.

-Abram Demski

On Wed, Aug 27, 2008 at 2:58 PM, Mark Waser [EMAIL PROTECTED] wrote:
 Hi,

   A number of problems unfortunately . . . .

 -Learning is pleasurable.

 . . . . for humans.  We can choose whether to make it so for machines or
 not.  Doing so would be equivalent to setting a goal of learning.

 -Other things may be pleasurable depending on what we initially want
 the AI to enjoy doing.

   See . . . all you've done here is pushed goal-setting to pleasure-setting
 . . . .

 = = = = =

   Further, if you judge goodness by pleasure, you'll probably create an AGI
 whose shortest path-to-goal is to wirehead the universe (which I consider to
 be a seriously suboptimal situation - YMMV).




 - Original Message - From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, August 27, 2008 2:25 PM
 Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
 AGI (was Re: [agi] The Necessity of Embodiment))


 Mark,

 OK, I take up the challenge. Here is a different set of goal-axioms:

 -Good is a property of some entities.
 -Maximize good in the world.
 -A more-good entity is usually more likely to cause goodness than a
 less-good entity.
 -A more-good entity is often more likely to cause pleasure than a
 less-good entity.
 -Self is the entity that causes my actions.
 -An entity with properties similar to self is more likely to be good.

 Pleasure, unlike goodness, is directly observable. It comes from many
 sources. For example:
 -Learning is pleasurable.
 -A full battery is pleasurable (if relevant).
 -Perhaps the color of human skin is pleasurable in and of itself.
 (More specifically, all skin colors of any existing race.)
 -Perhaps also the sound of a human voice is pleasurable.
 -Other things may be pleasurable depending on what we initially want
 the AI to enjoy doing.

 So, the definition of good is highly probabilistic, and the system's
 inferences about goodness will depend on its experiences; but pleasure
 can be directly observed, and the pleasure-mechanisms remain fixed.

 On Wed, Aug 27, 2008 at 12:32 PM, Mark Waser [EMAIL PROTECTED] wrote:

 But, how does your description not correspond to giving the AGI the
 goals of being helpful and not harmful? In other words, what more does
 it do than simply try for these? Does it pick goals randomly such that
 they conflict only minimally with these?

 Actually, my description gave the AGI four goals: be helpful, don't be
 harmful, learn, and keep moving.

 Learn, all by itself, is going to generate an infinite number of subgoals.
 Learning subgoals will be picked based upon what is most likely to learn the
 most while not being harmful.

 (and, by the way, be helpful and learn should both generate a
 self-protection sub-goal in short order with procreation following
 immediately behind)

 Arguably, be helpful would generate all three of the other goals but
 learning and not being harmful without being helpful is a *much* better
 goal-set for a novice AI to prevent accidents when the AI thinks it is
 being helpful.  In fact, I've been tempted at times to entirely drop the "be
 helpful" goal, since the other two will eventually generate it with a lessened
 probability of trying-to-be-helpful accidents.

 "Don't be harmful" by itself will just turn the AI off.

 The trick is that there needs to be a balance between goals.  Any single-goal
 intelligence is likely to be lethal even if that goal is to help humanity.

 Learn, do no harm, help.  Can anyone come up with a better set of goals?
 (and, once again, note that learn does *not* override the other two -- there
 is meant to be a balance between the 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread BillK
On Wed, Aug 27, 2008 at 8:43 PM, Abram Demski  wrote:
snip
 By the way, where does this term wireheading come from? I assume
 from context that it simply means self-stimulation.


Science Fiction novels.

http://en.wikipedia.org/wiki/Wirehead
In Larry Niven's Known Space stories, a wirehead is someone who has
been fitted with an electronic brain implant (called a droud in the
stories) to stimulate the pleasure centers of their brain.

In 2006, The Guardian reported that trials of Deep brain stimulation
with electric current, via wires inserted into the brain, had
successfully lifted the mood of depression sufferers.[1] This is
exactly the method used by wireheads in the earlier Niven stories
(such as the 'Gil the Arm' story Death by Ecstasy).

In the Shaper/Mechanist stories of Bruce Sterling, wirehead is the
Mechanist term for a human who has given up corporeal existence and
become an infomorph.
--


BillK




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Matt Mahoney
See also http://wireheading.com/

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: BillK [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 4:50:56 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))

On Wed, Aug 27, 2008 at 8:43 PM, Abram Demski  wrote:
snip
 By the way, where does this term wireheading come from? I assume
 from context that it simply means self-stimulation.


Science Fiction novels.

http://en.wikipedia.org/wiki/Wirehead
In Larry Niven's Known Space stories, a wirehead is someone who has
been fitted with an electronic brain implant (called a droud in the
stories) to stimulate the pleasure centers of their brain.

In 2006, The Guardian reported that trials of Deep brain stimulation
with electric current, via wires inserted into the brain, had
successfully lifted the mood of depression sufferers.[1] This is
exactly the method used by wireheads in the earlier Niven stories
(such as the 'Gil the Arm' story Death by Ecstasy).

In the Shaper/Mechanist stories of Bruce Sterling, wirehead is the
Mechanist term for a human who has given up corporeal existence and
become an infomorph.
--


BillK




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Matt Mahoney
Mark Waser [EMAIL PROTECTED] wrote:

 All rational goal-seeking agents must have a mental state of maximum
 utility where any thought or perception would be unpleasant because it
 would result in a different state.

I'd love to see you attempt to prove the above statement.

What if there are several states with utility equal to or very close to the
maximum?

Then you will be indifferent as to whether you stay in one state or move 
between them.

What if the utility of the state decreases the longer that you are in it
(something that is *very* true of human beings)?

If you are aware of the passage of time, then you are not staying in the
same state.

What if uniqueness raises the utility of any new state sufficiently that
there will always be states that are better than the current state (since
experiencing uniqueness normally improves fitness through learning, etc)?

Then you are not rational because your utility function does not define a total 
order. If you prefer A to B and B to C and C to A, as in the case you 
described, then you can be exploited. If you are rational and you have a finite 
number of states, then there is at least one state for which there is no better 
state. The human brain is certainly finite, and has at most 2^(10^15) states.
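
The "can be exploited" step is the classic money pump; here is a toy sketch
of it, where the items, the 1-cent fee, and the 99-trade horizon are arbitrary
illustrative assumptions:

# Money-pump sketch of the exploitation argument above. The cyclic preference
# (A over B, B over C, C over A) means the agent always accepts a trade to the
# item it prefers for a 1-cent fee, so a trader can cycle it back to where it
# started while draining it.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # (x, y): x is preferred to y

def accepts_trade(offered, held):
    # local rule that looks rational: pay a small fee to get something preferred
    return (offered, held) in prefers

held, cents = "A", 100
for _ in range(99):
    # the trader offers whatever the agent currently prefers to what it holds
    offered = next(x for (x, y) in prefers if y == held)
    if accepts_trade(offered, held):
        held, cents = offered, cents - 1

print(held, cents)   # -> A 1: back to the starting item, 99 cents poorer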

 -- Matt Mahoney, [EMAIL PROTECTED]




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Charles Hixson

Matt Mahoney wrote:
An AGI will not design its goals. It is up to humans to define the 
goals of an AGI, so that it will do what we want it to do.
Are you certain that this is the optimal approach?  To me it seems more 
promising to design the motives, and to allow the AGI to design its own 
goals to satisfy those motives.  This provides less fine-grained control 
over the AGI, but I feel that fine-grained control would be 
counter-productive.


To me the difficulty is designing the motives of the AGI in such a way 
that they will facilitate human life, when they must be implanted in an 
AGI that currently has no concept of an external universe, much less any 
particular classes of inhabitant therein.  The only (partial) solution 
that I've been able to come up with so far (i.e., identify, not design) 
is based around imprinting.  This is fine for the first generation 
(probably, if everything is done properly), but it's not clear that it 
would be fine for the second generation et seq.  For this reason RSI is 
very important.  It allows all succeeding generations to be derived from 
the first by cloning, which would preserve the initial imprints.


Unfortunately, this is a problem. We may or may not be successful in 
programming the goals of AGI to satisfy human goals. If we are not 
successful, ... unpleasant because it would result in a different state.
 
-- Matt Mahoney, [EMAIL PROTECTED]


Failure is an extreme danger, but it's not only failure to design safely 
that's a danger.  Failure to design a successful AGI at all could be 
nearly as great a danger.  Society has become too complex to be safely 
managed by the current approaches...and things aren't getting any simpler.






Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser
What if the utility of the state decreases the longer that you are in it
(something that is *very* true of human beings)?

If you are aware of the passage of time, then you are not staying in the
same state.


I have to laugh.  So you agree that all your arguments don't apply to 
anything that is aware of the passage of time?  That makes them really 
useful, doesn't it.








Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser

Hi,

   I think that I'm missing some of your points . . . .


Whatever good is, it cannot be something directly
observable, or the AI will just wirehead itself (assuming it gets
intelligent enough to do so, of course).


I don't understand this unless, by directly observable, you mean that the 
definition itself is observable and changeable.  If I define good as making 
all humans happy without modifying them, how would the AI wirehead itself?  
What am I missing here?



So, the AI needs to have a concept of external goodness, with a weak
probabilistic correlation to its directly observable pleasure.


I agree with the concept of external goodness but why does the correlation 
between external goodness and its pleasure have to be low?  Why can't 
external goodness directly cause pleasure?  Clearly, it shouldn't believe 
that its pleasure causes external goodness (that would be reversing cause 
and effect and an obvious logic error).


   Mark

P.S.  I notice that several others answered your wirehead query so I won't 
belabor the point.  :-)



- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 3:43 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Mark,

The main motivation behind my setup was to avoid the wirehead
scenario. That is why I make the explicit goodness/pleasure
distinction. Whatever good is, it cannot be something directly
observable, or the AI will just wirehead itself (assuming it gets
intelligent enough to do so, of course). But, goodness cannot be
completely unobservable, or the AI will have no idea what it should
do.

So, the AI needs to have a concept of external goodness, with a weak
probabilistic correlation to its directly observable pleasure. That
way, the system will go after pleasant things, but won't be able to
fool itself with things that are maximally pleasant. For example, if
it were to consider rewiring its visual circuits to see only
skin-color, it would not like the idea, because it would know that
such a move would make it less able to maximize goodness in general.
(It would know that seeing only tan does not mean that the entire
world is made of pure goodness.) An AI that was trying to maximize
pleasure would see nothing wrong with self-stimulation of this sort.

So, I think that pushing the problem of goal-setting back to
pleasure-setting is very useful for avoiding certain types of
undesirable behavior.

By the way, where does this term wireheading come from? I assume
from context that it simply means self-stimulation.

-Abram Demski

On Wed, Aug 27, 2008 at 2:58 PM, Mark Waser [EMAIL PROTECTED] wrote:

Hi,

  A number of problems unfortunately . . . .


-Learning is pleasurable.


. . . . for humans.  We can choose whether to make it so for machines or
not.  Doing so would be equivalent to setting a goal of learning.


-Other things may be pleasurable depending on what we initially want
the AI to enjoy doing.


   See . . . all you've done here is pushed goal-setting to pleasure-setting
 . . . .

 = = = = =

   Further, if you judge goodness by pleasure, you'll probably create an AGI
 whose shortest path-to-goal is to wirehead the universe (which I consider
 to be a seriously suboptimal situation - YMMV).




- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 2:25 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches 
to

AGI (was Re: [agi] The Necessity of Embodiment))



Mark,

OK, I take up the challenge. Here is a different set of goal-axioms:

-Good is a property of some entities.
-Maximize good in the world.
-A more-good entity is usually more likely to cause goodness than a
less-good entity.
-A more-good entity is often more likely to cause pleasure than a
less-good entity.
-Self is the entity that causes my actions.
-An entity with properties similar to self is more likely to be good.

Pleasure, unlike goodness, is directly observable. It comes from many
sources. For example:
-Learning is pleasurable.
-A full battery is pleasurable (if relevant).
-Perhaps the color of human skin is pleasurable in and of itself.
(More specifically, all skin colors of any existing race.)
-Perhaps also the sound of a human voice is pleasurable.
-Other things may be pleasurable depending on what we initially want
the AI to enjoy doing.

So, the definition of good is highly probabilistic, and the system's
inferences about goodness will depend on its experiences; but pleasure
can be directly observed, and the pleasure-mechanisms remain fixed.

On Wed, Aug 27, 2008 at 12:32 PM, Mark Waser [EMAIL PROTECTED] 
wrote:


But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Matt Mahoney
Mark Waser [EMAIL PROTECTED] wrote:

What if the utility of the state decreases the longer that you are in it 
(something that is *very* true of human
 beings)?
 If you are aware of the passage of time, then you are not staying in the 
 same state.

I have to laugh.  So you agree that all your arguments don't apply to 
anything that is aware of the passage of time?  That makes them really 
useful, doesn't it.

No, the state of ultimate bliss that you, I, and all other rational, 
goal-seeking agents seek is a mental state in which nothing perceptible happens. 
Without thought or sensation, you would be unaware of the passage of time, or 
of anything else. If you are aware of time then you are either not in this 
state yet, or are leaving it.

You may say that is not what you want, but only because you are unaware of the 
possibilities of reprogramming your brain. It is like being opposed to drugs or 
wireheading. Once you experience it, you can't resist.

 -- Matt Mahoney, [EMAIL PROTECTED]




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Matt Mahoney
Goals and motives are the same thing, in the sense that I mean them.
We want the AGI to want to do what we want it to do.

Failure is an extreme danger, but it's not only failure to design safely 
that's a danger.  Failure to design a successful AGI at all could be 
nearly as great a danger.  Society has become too complex to be safely 
managed by the current approaches...and things aren't getting any simpler.


No, technology is the source of complexity, not the cure for it. But that is 
what we want. Life, health, happiness, freedom from work. AGI will cost $1 
quadrillion to build, but we will build it because it is worth that much. And 
then it will kill us, not against our will, but because we want to live in 
simulated worlds with magic genies.
 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Charles Hixson [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 7:16:53 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))

Matt Mahoney wrote:
 An AGI will not design its goals. It is up to humans to define the 
 goals of an AGI, so that it will do what we want it to do.
Are you certain that this is the optimal approach?  To me it seems more 
promising to design the motives, and to allow the AGI to design its own 
goals to satisfy those motives.  This provides less fine-grained control 
over the AGI, but I feel that fine-grained control would be 
counter-productive.

To me the difficulty is designing the motives of the AGI in such a way 
that they will facilitate human life, when they must be implanted in an 
AGI that currently has no concept of an external universe, much less any 
particular classes of inhabitant therein.  The only (partial) solution 
that I've been able to come up with so far (i.e., identify, not design) 
is based around imprinting.  This is fine for the first generation 
(probably, if everything is done properly), but it's not clear that it 
would be fine for the second generation et seq.  For this reason RSI is 
very important.  It allows all succeeding generations to be derived from 
the first by cloning, which would preserve the initial imprints.

 Unfortunately, this is a problem. We may or may not be successful in 
 programming the goals of AGI to satisfy human goals. If we are not 
 successful, ... unpleasant because it would result in a different state.
  
 -- Matt Mahoney, [EMAIL PROTECTED]

Failure is an extreme danger, but it's not only failure to design safely 
that's a danger.  Failure to design a successful AGI at all could be 
nearly as great a danger.  Society has become too complex to be safely 
managed by the current approaches...and things aren't getting any simpler.

