Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-09-04 Thread Valentina Poletti
That sounds like a useful purpose. Yeah, I don't believe in quick and easy
methods either... but humans also tend to overestimate their own
capabilities, so it will probably take more time than predicted.

On 9/3/08, William Pearson [EMAIL PROTECTED] wrote:

 2008/8/28 Valentina Poletti [EMAIL PROTECTED]:
  Got ya, thanks for the clarification. That brings up another question.
 Why
  do we want to make an AGI?
 
 

 To understand ourselves as intelligent agents better? It might enable
 us to have decent education policy and rehabilitation of criminals.

 Even if we don't make human-like AGIs, the principles should help us
 understand ourselves, just as the optics of lenses helped us understand
 the eye and the aerodynamics of wings helps us understand bird flight.

 It could also give us more leverage: more brain power on the planet
 to help solve the planet's problems.

 This is all predicated on the idea that a fast take-off is pretty much
 impossible. If it is possible, then all bets are off.

 Will






-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken





Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-09-03 Thread Valentina Poletti
So it's about money then... now THAT makes me feel less worried!! :)

That explains a lot though.

On 8/28/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Valentina Poletti [EMAIL PROTECTED] wrote:
  Got ya, thanks for the clarification. That brings up another question.
 Why do we want to make an AGI?

 I'm glad somebody is finally asking the right question, instead of skipping
 over the specification to the design phase. It would avoid a lot of
 philosophical discussions that result from people having different ideas of
 what AGI should do.

 AGI could replace all human labor, worth about US $2 to $5 quadrillion over
 the next 30 years. We should expect the cost to be of this magnitude, given
 that having it sooner is better than waiting.

 I think AGI will be immensely complex, on the order of 10^18 bits,
 decentralized, competitive, with distributed ownership, like today's
 internet but smarter. It will converse with you fluently but know too much
 to pass the Turing test. We will be totally dependent on it.

 -- Matt Mahoney, [EMAIL PROTECTED]
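
For concreteness, here is a rough back-of-the-envelope sketch of where figures of this magnitude can come from. The inputs (roughly US $60 trillion of world GDP per year circa 2008, about 10^10 people, and a Landauer-style estimate of roughly 10^9 bits of long-term memory per person) are assumptions for illustration, not numbers taken from the message above.

  # Back-of-the-envelope sketch; all inputs are illustrative assumptions.
  world_gdp_per_year = 60e12   # ~US $60 trillion/year, circa 2008
  years = 30
  labor_value = world_gdp_per_year * years
  print("Human labor over 30 years: about $%.1e" % labor_value)   # ~1.8e+15, i.e. ~$2 quadrillion

  people = 1e10                # people contributing knowledge over the period
  bits_per_person = 1e9        # Landauer-style estimate of long-term memory per person
  print("Collective human knowledge: about %.0e bits" % (people * bits_per_person))
  # ~1e+19 bits before discounting shared knowledge, consistent with a 10^18-bit order of magnitude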








Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-09-03 Thread William Pearson
2008/8/28 Valentina Poletti [EMAIL PROTECTED]:
 Got ya, thanks for the clarification. That brings up another question. Why
 do we want to make an AGI?



To understand ourselves as intelligent agents better? It might enable
us to have decent education policy and rehabilitation of criminals.

Even if we don't make human-like AGIs, the principles should help us
understand ourselves, just as the optics of lenses helped us understand
the eye and the aerodynamics of wings helps us understand bird flight.

It could also give us more leverage: more brain power on the planet
to help solve the planet's problems.

This is all predicated on the idea that a fast take-off is pretty much
impossible. If it is possible, then all bets are off.

 Will




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Mark Waser

Hi Terren,

Obviously you need to complicate your original statement "I believe that
ethics is *entirely* driven by what is best evolutionarily..." in such a
way that we don't derive ethics from parasites.


Saying that ethics is entirely driven by evolution is NOT the same as saying 
that evolution always results in ethics.  Ethics is 
computationally/cognitively expensive to successfully implement (because a 
stupid implementation gets exploited to death).  There are many evolutionary 
niches that won't support that expense and the successful entities in those 
niches won't be ethical.  Parasites are a prototypical/archetypal example of 
such a niche since they tend to degeneratively streamlined to the point of 
being stripped down to virtually nothing except that which is necessary for 
their parasitism.  Effectively, they are single goal entities -- the single 
most dangerous type of entity possible.



You did that by invoking social behavior - parasites are not social beings


I claim that ethics is nothing *but* social behavior.

So from there you need to identify how evolution operates in social groups 
in such a way that you can derive ethics.


OK.  How about this . . . . Ethics is that behavior that, when shown by you, 
makes me believe that I should facilitate your survival.  Obviously, it is 
then to your (evolutionary) benefit to behave ethically.


As Matt alluded to before, would you agree that ethics is the result of 
group selection? In other words, that human collectives with certain 
taboos make the group as a whole more likely to persist?


Matt is decades out of date and needs to catch up on his reading.

Ethics is *NOT* the result of group selection.  The *ethical evaluation of a 
given action* is a meme and driven by the same social/group forces as any 
other meme.  Rational memes when adopted by a group can enhance group 
survival but . . . . there are also mechanisms by which seemingly irrational 
memes can also enhance survival indirectly in *exactly* the same fashion as 
the seemingly irrational tail displays of peacocks facilitates their group 
survival by identifying the fittest individuals.  Note that it all depends 
upon circumstances . . . .


Ethics is first and foremost what society wants you to do.  But, society 
can't be too pushy in its demands or individuals will defect and society 
will break down.  So, ethics turns into a matter of determining what is the 
behavior that is best for society (and thus the individual) without unduly 
burdening the individual (which would promote defection, cheating, etc.). 
This behavior clearly differs based upon circumstances but, equally clearly, 
should be able to be derived from a reasonably small set of rules that 
*will* be context-dependent.  Marc Hauser has done a lot of research, and 
human morality seems to be designed exactly that way (in terms of how it 
varies across societies, as if it is based upon fairly simple rules with a 
small number of variables/variable settings).  I highly recommend his 
writings (and being familiar with them is pretty much a necessity if you 
want to have a decent, current scientific discussion of ethics and 
morals).


   Mark

- Original Message - 
From: Terren Suydam [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 10:54 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))





Hi Mark,

Obviously you need to complicate your original statement "I believe that 
ethics is *entirely* driven by what is best evolutionarily..." in such a 
way that we don't derive ethics from parasites. You did that by invoking 
social behavior - parasites are not social beings.


So from there you need to identify how evolution operates in social groups 
in such a way that you can derive ethics. As Matt alluded to before, would 
you agree that ethics is the result of group selection? In other words, 
that human collectives with certain taboos make the group as a whole more 
likely to persist?


Terren


--- On Thu, 8/28/08, Mark Waser [EMAIL PROTECTED] wrote:


From: Mark Waser [EMAIL PROTECTED]
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI 
(was Re: [agi] The Necessity of Embodiment))

To: agi@v2.listbox.com
Date: Thursday, August 28, 2008, 9:21 PM
Parasites are very successful at surviving but they
don't have other
goals.  Try being parasitic *and* succeeding at goals other
than survival.
I think you'll find that your parasitic ways will
rapidly get in the way of
your other goals the second that you need help (or even
non-interference)
from others.

- Original Message - 
From: Terren Suydam [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 5:03 PM
Subject: Re: AGI goals (was Re: Information theoretic
approaches to AGI (was
Re: [agi] The Necessity of Embodiment))



 --- On Thu, 8/28/08, Mark Waser
[EMAIL PROTECTED] wrote:
 Actually, I *do* 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Terren Suydam

--- On Fri, 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
 Saying that ethics is entirely driven by evolution is NOT
 the same as saying 
 that evolution always results in ethics.  Ethics is 
 computationally/cognitively expensive to successfully
 implement (because a 
 stupid implementation gets exploited to death).  There are
 many evolutionary 
 niches that won't support that expense and the
 successful entities in those 
 niches won't be ethical.  Parasites are a
 prototypical/archetypal example of 
 such a niche since they tend to degeneratively streamlined
 to the point of 
 being stripped down to virtually nothing except that which
 is necessary for 
 their parasitism.  Effectively, they are single goal
 entities -- the single 
 most dangerous type of entity possible.

Works for me. Just wanted to point out that saying ethics is entirely driven 
by evolution is not enough to communicate with precision what you mean by that.
 
 OK.  How about this . . . . Ethics is that behavior that,
 when shown by you, 
 makes me believe that I should facilitate your survival. 
 Obviously, it is 
 then to your (evolutionary) benefit to behave ethically.

Ethics can't be explained simply by examining interactions between individuals. 
It's an emergent dynamic that requires explanation at the group level. It's a 
set of culture-wide rules and taboos - how did they get there?
 
 Matt is decades out of date and needs to catch up on his
 reading.

Really? I must be out of date too then, since I agree with his explanation of 
ethics. I haven't read Hauser yet though, so maybe you're right.
 
 Ethics is *NOT* the result of group selection.  The
 *ethical evaluation of a 
 given action* is a meme and driven by the same social/group
 forces as any 
 other meme.  Rational memes when adopted by a group can
 enhance group 
 survival but . . . . there are also mechanisms by which
 seemingly irrational 
 memes can also enhance survival indirectly in *exactly* the
 same fashion as 
 the seemingly irrational tail displays of
 peacocks facilitates their group 
 survival by identifying the fittest individuals.  Note that
 it all depends 
 upon circumstances . . . .
 
 Ethics is first and foremost what society wants you to do. 
 But, society 
 can't be too pushy in it's demands or individuals
 will defect and society 
 will break down.  So, ethics turns into a matter of
 determining what is the 
 behavior that is best for society (and thus the individual)
 without unduly 
 burdening the individual (which would promote defection,
 cheating, etc.). 
 This behavior clearly differs based upon circumstances but,
 equally clearly, 
 should be able to be derived from a reasonably small set of
 rules that 
 *will* be context dependent.  Marc Hauser has done a lot of
 research and 
 human morality seems to be designed exactly that way (in
 terms of how it 
 varies across societies as if it is based upon fairly
 simple rules with a 
 small number of variables/variable settings.  I highly
 recommend his 
 writings (and being familiar with them is pretty much a
 necessity if you 
 want to have a decent advanced/current scientific
 discussion of ethics and 
 morals).
 
 Mark

I fail to see how your above explanation is anything but an elaboration of the 
idea that ethics is due to group selection. The following statements all 
support it: 
 - memes [rational or otherwise] when adopted by a group can enhance group 
survival
 - Ethics is first and foremost what society wants you to do.
 - ethics turns into a matter of determining what is the behavior that is best 
for society

Terren


  




Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread Eric Burton
A successful AGI should have n methods of data-mining its experience
for knowledge, I think. Whether it should have n ways of generating those
methods, or n sets of ways to generate ways of generating those methods,
etc., I don't know.

On 8/28/08, j.k. [EMAIL PROTECTED] wrote:
 On 08/28/2008 04:47 PM, Matt Mahoney wrote:
 The premise is that if humans can create agents with above human
 intelligence, then so can they. What I am questioning is whether agents at
 any intelligence level can do this. I don't believe that agents at any
 level can recognize higher intelligence, and therefore cannot test their
 creations.

 The premise is not necessary to arrive at greater than human
 intelligence. If a human can create an agent of equal intelligence, it
 will rapidly become more intelligent (in practical terms) if advances in
 computing technologies continue to occur.

 An AGI with an intelligence the equivalent of a 99.-percentile human
 might be creatable, recognizable and testable by a human (or group of
 humans) of comparable intelligence. That same AGI at some later point in
 time, doing nothing differently except running 31 million times faster,
 will accomplish one genius-year of work every second. I would argue that
 by any sensible definition of intelligence, we would have a
 greater-than-human intelligence that was not created by a being of
 lesser intelligence.
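
For reference, the "31 million times faster" figure is essentially the number of seconds in a year, which is what turns the claim into a unit conversion. A quick check:

  # Why a ~31-million-fold speedup corresponds to one subjective year per real second.
  seconds_per_year = 365.25 * 24 * 60 * 60
  print(seconds_per_year)            # ~3.16e7, i.e. about 31.6 million
  print(31e6 / seconds_per_year)     # ~0.98 genius-years of work per wall-clock second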









Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Mark Waser

OK.  How about this . . . . Ethics is that behavior that,
when shown by you,
makes me believe that I should facilitate your survival.
Obviously, it is
then to your (evolutionary) benefit to behave ethically.


Ethics can't be explained simply by examining interactions between 
individuals. It's an emergent dynamic that requires explanation at the 
group level. It's a set of culture-wide rules and taboos - how did they 
get there?


I wasn't explaining ethics with that statement.  I was identifying how 
evolution operates in social groups in such a way that I can derive ethics 
(in direct response to your question).


Ethics is a system.  The *definition of ethical behavior* for a given group 
is an emergent dynamic that requires explanation at the group level 
because it includes what the group believes and values -- but ethics (the 
system) does not require belief history (except insofar as it affects 
current belief).  History, circumstances, and understanding why a culture 
has the rules and taboos that it has are certainly useful for deriving 
more effective rules and taboos -- but none of that alters the underlying 
system which is quite simple . . . . being perceived as helpful generally 
improves your survival chances, being perceived as harmful generally 
decreases your survival chances (unless you are able to overpower the 
effect).
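
As a toy illustration of that last point (a sketch of the claimed mechanism only, not anything proposed in this thread): if others facilitate your survival in proportion to how helpful you are perceived to be, an agent that behaves helpfully accumulates a much higher survival score than one that behaves harmfully.

  # Toy model: perceived helpfulness -> others help you -> higher "survival" score.
  import random
  random.seed(0)

  prob_help = {"helpful": 0.9, "harmful": 0.1}      # how often each agent actually helps
  reputation = {name: 0.5 for name in prob_help}    # others' estimate of helpfulness
  score = {name: 0.0 for name in prob_help}

  for _ in range(10000):
      name = random.choice(list(prob_help))
      if random.random() < reputation[name]:        # others reciprocate based on reputation
          score[name] += 1.0
      helped = random.random() < prob_help[name]    # observed behavior updates reputation
      reputation[name] = 0.99 * reputation[name] + 0.01 * helped

  print(score)   # the "helpful" agent ends up with a far higher survival score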


Really? I must be out of date too then, since I agree with his explanation 
of ethics. I haven't read Hauser yet though, so maybe you're right.


The specific phrase you cited was "human collectives with certain taboos 
make the group as a whole more likely to persist."  The correct term of art 
for this is "group selection", and it has pretty much *NOT* been supported by 
scientific evidence and has fallen out of favor.


Matt also tends to conflate a number of ideas which should be separate which 
you seem to be doing as well.  There need to be distinctions between ethical 
systems, ethical rules, cultural variables, and evaluations of ethical 
behavior within a specific cultural context (i.e. the results of the system 
given certain rules -- which at the first-level seem to be reasonably 
standard -- with certain cultural variables as input).  Hauser's work 
identifies some of the common first-level rules and how cultural variables 
affect the results of those rules (and the derivation of secondary rules). 
It's good detailed, experiment-based stuff rather than the vague hand-waving 
that you're getting from armchair philosophers.


I fail to see how your above explanation is anything but an elaboration of 
the idea that ethics is due to group selection. The following statements 
all support it:
- memes [rational or otherwise] when adopted by a group can enhance group 
survival

- Ethics is first and foremost what society wants you to do.
- ethics turns into a matter of determining what is the behavior that is 
best for society


I think we're stumbling over your use of the term "group selection" and 
what you mean by "ethics is due to group selection."  Yes, the group 
selects the cultural variables that affect the results of the common 
ethical rules.  But "group selection" as a term of art in evolution 
generally means that the group itself is being selected or co-evolved -- 
in this case, presumably by ethics -- which is *NOT* correct by current 
scientific understanding.  The first phrase that you quoted was intended to 
point out that both good and bad memes can positively affect survival -- so 
co-evolution doesn't work.  The second phrase that you quoted deals with the 
results of the system applying common ethical rules with cultural variables. 
The third phrase that you quoted talks about determining what the best 
cultural variables (and maybe secondary rules) are for a given set of 
circumstances -- and should have been better phrased as "Improving ethical 
evaluations turns into a matter of determining . . ."







Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Eric Burton
I remember Richard Dawkins saying that group selection is a lie. Maybe
we should look past it now? It seems like a problem.

On 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
 OK.  How about this . . . . Ethics is that behavior that,
 when shown by you,
 makes me believe that I should facilitate your survival.
 Obviously, it is
 then to your (evolutionary) benefit to behave ethically.

 Ethics can't be explained simply by examining interactions between
 individuals. It's an emergent dynamic that requires explanation at the
 group level. It's a set of culture-wide rules and taboos - how did they
 get there?

 I wasn't explaining ethics with that statement.  I was identifying how
 evolution operates in social groups in such a way that I can derive ethics
 (in direct response to your question).

 Ethics is a system.  The *definition of ethical behavior* for a given group
 is an emergent dynamic that requires explanation at the group level
 because it includes what the group believes and values -- but ethics (the
 system) does not require belief history (except insofar as it affects
 current belief).  History, circumstances, and understanding what a culture
 has the rules and taboos that they have is certainly useful for deriving
 more effective rules and taboos -- but it doesn't alter the underlying
 system which is quite simple . . . . being perceived as helpful generally
 improves your survival chances, being perceived as harmful generally
 decreases your survival chances (unless you are able to overpower the
 effect).

 Really? I must be out of date too then, since I agree with his explanation

 of ethics. I haven't read Hauser yet though, so maybe you're right.

 The specific phrase you cited was human collectives with certain taboos
 make the group as a whole more likely to persist.  The correct term of art
 for this is group selection and it has pretty much *NOT* been supported by
 scientific evidence and has fallen out of favor.

 Matt also tends to conflate a number of ideas which should be separate which
 you seem to be doing as well.  There need to be distinctions between ethical
 systems, ethical rules, cultural variables, and evaluations of ethical
 behavior within a specific cultural context (i.e. the results of the system
 given certain rules -- which at the first-level seem to be reasonably
 standard -- with certain cultural variables as input).  Hauser's work
 identifies some of the common first-level rules and how cultural variables
 affect the results of those rules (and the derivation of secondary rules).
 It's good detailed, experiment-based stuff rather than the vague hand-waving
 that you're getting from armchair philosophers.

 I fail to see how your above explanation is anything but an elaboration of

 the idea that ethics is due to group selection. The following statements
 all support it:
 - memes [rational or otherwise] when adopted by a group can enhance group

 survival
 - Ethics is first and foremost what society wants you to do.
 - ethics turns into a matter of determining what is the behavior that is
 best for society

 I think we're stumbling over your use of the term group selection  and
 what you mean by ethics is due to group selection.  Yes, the group
 selects the cultural variables that affect the results of the common
 ethical rules.  But group selection as a term of art in evolution
 generally meaning that the group itself is being selected or co-evolved --
 in this case, presumably by ethics -- which is *NOT* correct by current
 scientific understanding.  The first phrase that you quoted was intended to
 point out that both good and bad memes can positively affect survival -- so
 co-evolution doesn't work.  The second phrase that you quoted deals with the
 results of the system applying common ethical rules with cultural variables.
 The third phrase that you quoted talks about determining what the best
 cultural variables (and maybe secondary rules) are for a given set of
 circumstances -- and should have been better phrased as Improving ethical
 evaluations turns into a matter of determining . . . 









Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread Abram Demski
I like that argument.

Also, it is clear that humans can invent better algorithms to do
specialized things. Even if an AGI couldn't think up better versions
of itself, it would be able to do the equivalent of equipping itself
with fancy calculators.

--Abram

On Thu, Aug 28, 2008 at 9:04 PM, j.k. [EMAIL PROTECTED] wrote:
 On 08/28/2008 04:47 PM, Matt Mahoney wrote:

 The premise is that if humans can create agents with above human
 intelligence, then so can they. What I am questioning is whether agents at
 any intelligence level can do this. I don't believe that agents at any level
 can recognize higher intelligence, and therefore cannot test their
 creations.

 The premise is not necessary to arrive at greater than human intelligence.
 If a human can create an agent of equal intelligence, it will rapidly become
 more intelligent (in practical terms) if advances in computing technologies
 continue to occur.

 An AGI with an intelligence the equivalent of a 99.-percentile human
 might be creatable, recognizable and testable by a human (or group of
 humans) of comparable intelligence. That same AGI at some later point in
 time, doing nothing differently except running 31 million times faster, will
 accomplish one genius-year of work every second. I would argue that by any
 sensible definition of intelligence, we would have a greater-than-human
 intelligence that was not created by a being of lesser intelligence.









Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Mark Waser
Group selection (as the term of art is used in evolutionary biology) does 
not seem to be experimentally supported (and there have been a lot of recent 
experiments looking for such an effect).


It would be nice if people could let the idea drop unless there is actually 
some proof for it other than "it seems to make sense that . . . ."


- Original Message - 
From: Eric Burton [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, August 29, 2008 12:56 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




I remember Richard Dawkins saying that group selection is a lie. Maybe
we shoud look past it now? It seems like a problem.

On 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:

OK.  How about this . . . . Ethics is that behavior that,
when shown by you,
makes me believe that I should facilitate your survival.
Obviously, it is
then to your (evolutionary) benefit to behave ethically.


Ethics can't be explained simply by examining interactions between
individuals. It's an emergent dynamic that requires explanation at the
group level. It's a set of culture-wide rules and taboos - how did they
get there?


I wasn't explaining ethics with that statement.  I was identifying how
evolution operates in social groups in such a way that I can derive 
ethics

(in direct response to your question).

Ethics is a system.  The *definition of ethical behavior* for a given 
group

is an emergent dynamic that requires explanation at the group level
because it includes what the group believes and values -- but ethics (the
system) does not require belief history (except insofar as it affects
current belief).  History, circumstances, and understanding what a 
culture

has the rules and taboos that they have is certainly useful for deriving
more effective rules and taboos -- but it doesn't alter the underlying
system which is quite simple . . . . being perceived as helpful generally
improves your survival chances, being perceived as harmful generally
decreases your survival chances (unless you are able to overpower the
effect).

Really? I must be out of date too then, since I agree with his 
explanation


of ethics. I haven't read Hauser yet though, so maybe you're right.


The specific phrase you cited was human collectives with certain taboos
make the group as a whole more likely to persist.  The correct term of 
art
for this is group selection and it has pretty much *NOT* been supported 
by

scientific evidence and has fallen out of favor.

Matt also tends to conflate a number of ideas which should be separate 
which
you seem to be doing as well.  There need to be distinctions between 
ethical

systems, ethical rules, cultural variables, and evaluations of ethical
behavior within a specific cultural context (i.e. the results of the 
system

given certain rules -- which at the first-level seem to be reasonably
standard -- with certain cultural variables as input).  Hauser's work
identifies some of the common first-level rules and how cultural 
variables
affect the results of those rules (and the derivation of secondary 
rules).
It's good detailed, experiment-based stuff rather than the vague 
hand-waving

that you're getting from armchair philosophers.

I fail to see how your above explanation is anything but an elaboration 
of


the idea that ethics is due to group selection. The following statements
all support it:
- memes [rational or otherwise] when adopted by a group can enhance 
group


survival
- Ethics is first and foremost what society wants you to do.
- ethics turns into a matter of determining what is the behavior that 
is

best for society


I think we're stumbling over your use of the term group selection  and
what you mean by ethics is due to group selection.  Yes, the group
selects the cultural variables that affect the results of the common
ethical rules.  But group selection as a term of art in evolution
generally meaning that the group itself is being selected or 
co-evolved --

in this case, presumably by ethics -- which is *NOT* correct by current
scientific understanding.  The first phrase that you quoted was intended 
to
point out that both good and bad memes can positively affect survival --  
so
co-evolution doesn't work.  The second phrase that you quoted deals with 
the
results of the system applying common ethical rules with cultural 
variables.

The third phrase that you quoted talks about determining what the best
cultural variables (and maybe secondary rules) are for a given set of
circumstances -- and should have been better phrased as Improving 
ethical

evaluations turns into a matter of determining . . . 









Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Charles Hixson
Dawkins tends to see a truth, and then overstate it.  What he says 
isn't usually exactly wrong, so much as one-sided.  This may be an 
exception.


Some meanings of "group selection" don't appear to map onto reality.  
Others map very weakly.  Some have reasonable explanatory power.  If you 
don't define with precision which meaning you are using, then you invite 
confusion.  As such, it's a term that is better not to use.


But I wouldn't usually call it a lie.  Merely a mistake.  The exact 
nature of the mistake depends on precisely what you mean, and the context 
within which you are using it.  Often it's merely a signal that you are 
confused and don't KNOW precisely what you are talking about, but only 
the general ballpark within which you believe it lies.  Only rarely is 
it intentionally used to confuse things with malice intended.  In that 
final case the term "lie" is appropriate.  Otherwise it's merely 
inadvisable usage.


Eric Burton wrote:

I remember Richard Dawkins saying that group selection is a lie. Maybe
we shoud look past it now? It seems like a problem.

On 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
  

OK.  How about this . . . . Ethics is that behavior that,
when shown by you,
makes me believe that I should facilitate your survival.
Obviously, it is
then to your (evolutionary) benefit to behave ethically.


Ethics can't be explained simply by examining interactions between
individuals. It's an emergent dynamic that requires explanation at the
group level. It's a set of culture-wide rules and taboos - how did they
get there?
  

I wasn't explaining ethics with that statement.  I was identifying how
evolution operates in social groups in such a way that I can derive ethics
(in direct response to your question).

Ethics is a system.  The *definition of ethical behavior* for a given group
is an emergent dynamic that requires explanation at the group level
because it includes what the group believes and values -- but ethics (the
system) does not require belief history (except insofar as it affects
current belief).  History, circumstances, and understanding what a culture
has the rules and taboos that they have is certainly useful for deriving
more effective rules and taboos -- but it doesn't alter the underlying
system which is quite simple . . . . being perceived as helpful generally
improves your survival chances, being perceived as harmful generally
decreases your survival chances (unless you are able to overpower the
effect).



Really? I must be out of date too then, since I agree with his explanation

of ethics. I haven't read Hauser yet though, so maybe you're right.
  

The specific phrase you cited was human collectives with certain taboos
make the group as a whole more likely to persist.  The correct term of art
for this is group selection and it has pretty much *NOT* been supported by
scientific evidence and has fallen out of favor.

Matt also tends to conflate a number of ideas which should be separate which
you seem to be doing as well.  There need to be distinctions between ethical
systems, ethical rules, cultural variables, and evaluations of ethical
behavior within a specific cultural context (i.e. the results of the system
given certain rules -- which at the first-level seem to be reasonably
standard -- with certain cultural variables as input).  Hauser's work
identifies some of the common first-level rules and how cultural variables
affect the results of those rules (and the derivation of secondary rules).
It's good detailed, experiment-based stuff rather than the vague hand-waving
that you're getting from armchair philosophers.



I fail to see how your above explanation is anything but an elaboration of

the idea that ethics is due to group selection. The following statements
all support it:
- memes [rational or otherwise] when adopted by a group can enhance group

survival
- Ethics is first and foremost what society wants you to do.
- ethics turns into a matter of determining what is the behavior that is
best for society
  

I think we're stumbling over your use of the term group selection  and
what you mean by ethics is due to group selection.  Yes, the group
selects the cultural variables that affect the results of the common
ethical rules.  But group selection as a term of art in evolution
generally meaning that the group itself is being selected or co-evolved --
in this case, presumably by ethics -- which is *NOT* correct by current
scientific understanding.  The first phrase that you quoted was intended to
point out that both good and bad memes can positively affect survival -- so
co-evolution doesn't work.  The second phrase that you quoted deals with the
results of the system applying common ethical rules with cultural variables.
The third phrase that you quoted talks about determining what the best
cultural variables (and maybe secondary rules) are for a given set of
circumstances -- and should have been better phrased as 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Matt Mahoney
Group selection is not dead, just weaker than individual selection. Altruism in 
many species is evidence for its existence. 
http://en.wikipedia.org/wiki/Group_selection

In any case, the evolution of culture and ethics in humans is primarily memetic, 
not genetic. Taboos against nudity are nearly universal among cultures with 
language, yet unique to Homo sapiens.

You might believe that certain practices are intrinsically good or bad, not the 
result of group selection. Fine. That is how your beliefs are supposed to work.

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, August 29, 2008 1:13:43 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))

Group selection (as used as the term of art in evolutionary biology) does 
not seem to be experimentally supported (and there have been a lot of recent 
experiments looking for such an effect).

It would be nice if people could let the idea drop unless there is actually 
some proof for it other than it seems to make sense that . . . . 

- Original Message - 
From: Eric Burton [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, August 29, 2008 12:56 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))


I remember Richard Dawkins saying that group selection is a lie. Maybe
 we shoud look past it now? It seems like a problem.

 On 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
 OK.  How about this . . . . Ethics is that behavior that,
 when shown by you,
 makes me believe that I should facilitate your survival.
 Obviously, it is
 then to your (evolutionary) benefit to behave ethically.

 Ethics can't be explained simply by examining interactions between
 individuals. It's an emergent dynamic that requires explanation at the
 group level. It's a set of culture-wide rules and taboos - how did they
 get there?

 I wasn't explaining ethics with that statement.  I was identifying how
 evolution operates in social groups in such a way that I can derive 
 ethics
 (in direct response to your question).

 Ethics is a system.  The *definition of ethical behavior* for a given 
 group
 is an emergent dynamic that requires explanation at the group level
 because it includes what the group believes and values -- but ethics (the
 system) does not require belief history (except insofar as it affects
 current belief).  History, circumstances, and understanding what a 
 culture
 has the rules and taboos that they have is certainly useful for deriving
 more effective rules and taboos -- but it doesn't alter the underlying
 system which is quite simple . . . . being perceived as helpful generally
 improves your survival chances, being perceived as harmful generally
 decreases your survival chances (unless you are able to overpower the
 effect).

 Really? I must be out of date too then, since I agree with his 
 explanation

 of ethics. I haven't read Hauser yet though, so maybe you're right.

 The specific phrase you cited was human collectives with certain taboos
 make the group as a whole more likely to persist.  The correct term of 
 art
 for this is group selection and it has pretty much *NOT* been supported 
 by
 scientific evidence and has fallen out of favor.

 Matt also tends to conflate a number of ideas which should be separate 
 which
 you seem to be doing as well.  There need to be distinctions between 
 ethical
 systems, ethical rules, cultural variables, and evaluations of ethical
 behavior within a specific cultural context (i.e. the results of the 
 system
 given certain rules -- which at the first-level seem to be reasonably
 standard -- with certain cultural variables as input).  Hauser's work
 identifies some of the common first-level rules and how cultural 
 variables
 affect the results of those rules (and the derivation of secondary 
 rules).
 It's good detailed, experiment-based stuff rather than the vague 
 hand-waving
 that you're getting from armchair philosophers.

 I fail to see how your above explanation is anything but an elaboration 
 of

 the idea that ethics is due to group selection. The following statements
 all support it:
 - memes [rational or otherwise] when adopted by a group can enhance 
 group

 survival
 - Ethics is first and foremost what society wants you to do.
 - ethics turns into a matter of determining what is the behavior that 
 is
 best for society

 I think we're stumbling over your use of the term group selection  and
 what you mean by ethics is due to group selection.  Yes, the group
 selects the cultural variables that affect the results of the common
 ethical rules.  But group selection as a term of art in evolution
 generally meaning that the group itself is being selected or 
 co-evolved --
 in this case, presumably by ethics -- which is *NOT* correct by current
 scientific understanding.  

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.

On 08/29/2008 10:09 AM, Abram Demski wrote:

I like that argument.

Also, it is clear that humans can invent better algorithms to do
specialized things. Even if an AGI couldn't think up better versions
of itself, it would be able to do the equivalent of equipping itself
with fancy calculators.

--Abram

   


Exactly. A better transistor or a lower complexity algorithm for a 
computational bottleneck in an AGI (and implementing such) is a 
self-improvement that improves the AGI's ability to make further 
improvements -- i.e., RSI.
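
A minimal (hypothetical) example of the "lower complexity algorithm for a computational bottleneck" case: swapping a quadratic duplicate check for a hash-based one is exactly the kind of self-modification that speeds up everything downstream of it. Nothing here is specific to any particular AGI design.

  # Hypothetical bottleneck: checking a stream of observations for duplicates.
  def has_duplicates_quadratic(items):
      # O(n^2) "before" version
      return any(a == b for i, a in enumerate(items) for b in items[i + 1:])

  def has_duplicates_hashed(items):
      # O(n) "after" version -- the self-improvement
      seen = set()
      for x in items:
          if x in seen:
              return True
          seen.add(x)
      return False

  data = list(range(5000)) + [42]
  assert has_duplicates_quadratic(data) == has_duplicates_hashed(data) == True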


Likewise, it is not inconceivable that we will soon be able to improve 
human intelligence by means such as increasing neural signaling speed 
(assuming the increase doesn't have too many negative effects, which it 
might) and improving other *individual* aspects of brain biology. This 
would be RSI, too.






Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread William Pearson
2008/8/29 j.k. [EMAIL PROTECTED]:
 On 08/28/2008 04:47 PM, Matt Mahoney wrote:

 The premise is that if humans can create agents with above human
 intelligence, then so can they. What I am questioning is whether agents at
 any intelligence level can do this. I don't believe that agents at any level
 can recognize higher intelligence, and therefore cannot test their
 creations.

 The premise is not necessary to arrive at greater than human intelligence.
 If a human can create an agent of equal intelligence, it will rapidly become
 more intelligent (in practical terms) if advances in computing technologies
 continue to occur.

 An AGI with an intelligence the equivalent of a 99.-percentile human
 might be creatable, recognizable and testable by a human (or group of
 humans) of comparable intelligence. That same AGI at some later point in
 time, doing nothing differently except running 31 million times faster, will
 accomplish one genius-year of work every second.

Will it? It might be starved of interaction with the world
and other intelligences, and so be a lot less productive than
something working at normal speed.

Most learning systems aren't constrained in how long it takes them to
learn things by lack of processing power (AIXI excepted), but by the
speed of running an experiment.

  Will Pearson




Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread Matt Mahoney
It seems that the debate over recursive self-improvement depends on what you 
mean by "improvement." If you define improvement as intelligence as measured by 
the Turing test, then RSI is not possible, because the Turing test does not test 
for superhuman intelligence. If you mean improvement as more memory, faster 
clock speed, more network bandwidth, etc., then yes, I think it is reasonable 
to expect Moore's law to continue after we are all uploaded. If you mean 
improvement in the sense of competitive fitness, then yes, I expect evolution 
to continue, perhaps very rapidly if it is based on a computing substrate other 
than DNA. Whether you can call it self-improvement, or whether the result is 
desirable, is debatable. We are, after all, pondering the extinction of Homo 
sapiens and replacing it with some unknown species, perhaps gray goo. Will the 
nanobots look back at this as an improvement, the way we view the extinction of 
Homo erectus?

My question is whether RSI is mathematically possible in the context of 
universal intelligence, i.e. expected reward or prediction accuracy over a 
Solomonoff distribution of computable environments. I believe it is possible 
for Turing machines if and only if they have access to true random sources so 
that each generation can create successively more complex test environments to 
evaluate their offspring. But this is troubling because in practice we can 
construct pseudo-random sources that are nearly indistinguishable from truly 
random in polynomial time (but none that are *provably* so).

 -- Matt Mahoney, [EMAIL PROTECTED]
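
For readers who want the formal object being referred to, the standard formulation is Legg and Hutter's universal intelligence measure (their notation, not something defined in this thread): the expected total reward of a policy, averaged over all computable reward-summable environments and weighted by each environment's Kolmogorov complexity.

  \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu},
  \qquad
  V^{\pi}_{\mu} \;=\; \mathbb{E}\!\left[\, \sum_{t=1}^{\infty} r_t \;\middle|\; \pi, \mu \right]

In these terms, the RSI question above is whether a chain of agents pi_1, pi_2, ..., each constructed and tested by its predecessor, can have strictly increasing Upsilon.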




Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.

On 08/29/2008 01:29 PM, William Pearson wrote:

2008/8/29 j.k.[EMAIL PROTECTED]:
   

An AGI with an intelligence the equivalent of a 99.-percentile human
might be creatable, recognizable and testable by a human (or group of
humans) of comparable intelligence. That same AGI at some later point in
time, doing nothing differently except running 31 million times faster, will
accomplish one genius-year of work every second.
 


Will it? It might be starved for lack of interaction with the world
and other intelligences, and so be a lot less productive than
something working at normal speeds.

   


Yes, you're right. It doesn't follow that its productivity will 
necessarily scale linearly, but the larger point I was trying to make 
was that it would be much faster and that being much faster would 
represent an improvement that improves its ability to make future 
improvements.


The numbers are unimportant, but I'd argue that even if there were just 
one such human-level AGI running 1 million times normal speed and even 
if it did require regular interaction just like most humans do, that it 
would still be hugely productive and would represent a phase-shift in 
intelligence in terms of what it accomplishes. Solving one difficult 
problem is probably not highly parallelizable in general (many are not 
at all parallelizable), but solving tens of thousands of such problems 
across many domains over the course of a year or so probably is. The 
human-level AGI running a million times faster could simultaneously 
interact with tens of thousands of scientists at their pace, so there is 
no reason to believe it need be starved for interaction to the point 
that its productivity would be limited to near human levels of 
productivity.








Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread William Pearson
2008/8/29 j.k. [EMAIL PROTECTED]:
 On 08/29/2008 01:29 PM, William Pearson wrote:

 2008/8/29 j.k.[EMAIL PROTECTED]:


 An AGI with an intelligence the equivalent of a 99.-percentile human
 might be creatable, recognizable and testable by a human (or group of
 humans) of comparable intelligence. That same AGI at some later point in
 time, doing nothing differently except running 31 million times faster,
 will
 accomplish one genius-year of work every second.


 Will it? It might be starved for lack of interaction with the world
 and other intelligences, and so be a lot less productive than
 something working at normal speeds.



 Yes, you're right. It doesn't follow that its productivity will necessarily
 scale linearly, but the larger point I was trying to make was that it would
 be much faster and that being much faster would represent an improvement
 that improves its ability to make future improvements.

 The numbers are unimportant, but I'd argue that even if there were just one
 such human-level AGI running 1 million times normal speed and even if it did
 require regular interaction just like most humans do, that it would still be
 hugely productive and would represent a phase-shift in intelligence in terms
 of what it accomplishes. Solving one difficult problem is probably not
 highly parallelizable in general (many are not at all parallelizable), but
 solving tens of thousands of such problems across many domains over the
 course of a year or so probably is. The human-level AGI running a million
 times faster could simultaneously interact with tens of thousands of
 scientists at their pace, so there is no reason to believe it need be
 starved for interaction to the point that its productivity would be limited
 to near human levels of productivity.

Only if it had millions of times normal human storage capacity and
memory bandwidth (else it couldn't keep track of all the
conversations), and sufficient bandwidth for ten thousand VoIP calls at
once.
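
For scale, the voice-bandwidth part of that requirement works out as follows, assuming a standard uncompressed 64 kbit/s codec (an assumption for illustration):

  # Aggregate bandwidth for ten thousand simultaneous voice conversations.
  calls = 10000
  kbps_per_call = 64                     # assumed G.711-style codec, one direction
  print(calls * kbps_per_call / 1000.0)  # 640.0 Mbit/s total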

We should perhaps clarify what you mean by speed here. The speed of
the transistor is not all of what makes a system useful. It is worth
noting that processor clock speed hasn't gone up appreciably since the
heady days of 3.8 GHz Pentium 4s in 2005.

Improvements have come from other directions (better memory bandwidth,
better pipelines, and multiple cores). The hard disk is probably what is
holding back current computers at the moment.


  Will Pearson










Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.

On 08/29/2008 03:14 PM, William Pearson wrote:

2008/8/29 j.k.[EMAIL PROTECTED]:

... The human-level AGI running a million
times faster could simultaneously interact with tens of thousands of
scientists at their pace, so there is no reason to believe it need be
starved for interaction to the point that its productivity would be limited
to near human levels of productivity.

 

Only if it had millions of times normal human storage capacity and
memory bandwidth, else it couldn't keep track of all the
conversations, and sufficient bandwidth for ten thousand VOIP calls at
once.
   
And sufficient electricity, etc. There are many other details that would 
have to be spelled out if we were trying to give an exhaustive list of 
every possible requirement. But the point remains that *if* the 
technological advances that we expect to occur actually do occur, then 
there will be greater-than-human intelligence that was created by 
human-level intelligence -- unless one thinks that memory capacity, chip 
design and throughput, disk, system, and network bandwidth, etc., are 
close to as good as they'll ever get. On the contrary, there are more 
promising new technologies on the horizon than one can keep track of 
(not to mention current technologies that can still be improved), which 
makes it extremely unlikely that any of these or the other relevant 
factors are close to practical maximums.

We should perhaps clarify what you mean by speed here? The speed of
the transistor is not all of what makes a system useful. It is worth
noting that processor speed hasn't gone up appreciably from the heady
days of Pentium 4s with 3.8 GHZ in 2005.

Improvements have come from other directions (better memory bandwidth,
better pipelines and multi cores).
I didn't believe that we could drop a 3 THz chip (if that were 
physically possible) onto an existing motherboard and it would scale 
linearly, or that a better transistor would be the *only* improvement 
that occurs. When I said "31 million times faster", I meant the system 
as a whole would be 31 million times faster at achieving its 
computational goals. This will obviously require many improvements in 
processor design, system architecture, memory, bandwidth, physics and 
materials science, and others, but the scenario I was trying to discuss 
was one in which these sorts of things will have occurred.


This is getting quite far off topic from the point I was trying to make 
originally, so I'll bow out of this discussion now.


j.k.




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Valentina Poletti
Got ya, thanks for the clarification. That brings up another question. Why
do we want to make an AGI?



On 8/27/08, Matt Mahoney [EMAIL PROTECTED] wrote:

  An AGI will not design its goals. It is up to humans to define the goals
 of an AGI, so that it will do what we want it to do.

 Unfortunately, this is a problem. We may or may not be successful in
 programming the goals of AGI to satisfy human goals. If we are not
 successful, then AGI will be useless at best and dangerous at worst. If we
 are successful, then we are doomed because human goals evolved in a
 primitive environment to maximize reproductive success and not in an
 environment where advanced technology can give us whatever we want. AGI will
 allow us to connect our brains to simulated worlds with magic genies, or
 worse, allow us to directly reprogram our brains to alter our memories,
 goals, and thought processes. All rational goal-seeking agents must have a
 mental state of maximum utility where any thought or perception would be
 unpleasant because it would result in a different state.

 -- Matt Mahoney, [EMAIL PROTECTED]
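
Matt's "maximum utility" point can be stated compactly: if a state $s^*$ maximizes the agent's utility $u$ over all reachable mental states, then any thought or perception that moves the agent to some other state $s$ can only lower (or at best preserve) utility, so a strict utility maximizer already in $s^*$ prefers that nothing perceptible happen:

  u(s^*) \;\ge\; u(s) \;\; \forall s
  \quad\Longrightarrow\quad
  u(s) - u(s^*) \;\le\; 0 \quad \text{for every transition } s^* \to s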






Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-28 Thread Valentina Poletti
Lol..it's not that impossible actually.


On Tue, Aug 26, 2008 at 6:32 PM, Mike Tintner [EMAIL PROTECTED]wrote:

 Valentina: "In other words I'm looking for a way to mathematically define
 how the AGI will mathematically define its goals."
 Holy Non-Existent Grail? Has any new branch of logic or mathematics ever
 been logically or mathematically (axiomatically) derivable from any old
 one? E.g. topology, Riemannian geometry, complexity theory, fractals,
 free-form deformation, etc.




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser
No, the state of ultimate bliss that you, I, and all other rational, goal 
seeking agents seek


Your second statement copied below notwithstanding, I *don't* seek ultimate 
bliss.


You may say that is not what you want, but only because you are unaware of 
the possibilities of reprogramming your brain. It is like being opposed to 
drugs or wireheading. Once you experience it, you can't resist.


It is not what I want *NOW*.  It may be that once my brain has been altered 
by experiencing it, I may want it *THEN* but that has no relevance to what I 
want and seek now.


These statements are just sloppy reasoning . . . .


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 11:05 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))




Mark Waser [EMAIL PROTECTED] wrote:


What if the utility of the state decreases the longer that you are in it
(something that is *very* true of human beings)?

If you are aware of the passage of time, then you are not staying in the
same state.


I have to laugh.  So you agree that all your arguments don't apply to
anything that is aware of the passage of time?  That makes them really
useful, doesn't it.


No, the state of ultimate bliss that you, I, and all other rational, goal 
seeking agents seek is a mental state in which nothing perceptible 
happens. Without thought or sensation, you would be unaware of the passage 
of time, or of anything else. If you are aware of time then you are either 
not in this state yet, or are leaving it.


You may say that is not what you want, but only because you are unaware of 
the possibilities of reprogramming your brain. It is like being opposed to 
drugs or wireheading. Once you experience it, you can't resist.


-- Matt Mahoney, [EMAIL PROTECTED]


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?;

Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Abram Demski
Mark,

I second that!

Matt,

This is like my imaginary robot that rewires its video feed to be
nothing but tan, to stimulate the pleasure drive that humans put there
to make it like humans better.
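
To make that failure mode concrete, here is a toy sketch of the robot (all of
the numbers and names are invented for illustration): the drive rewards tan
pixels, and the cheapest way to maximize it is to fabricate the input rather
than do anything in the world.

import random

TAN = (210, 180, 140)  # an arbitrary stand-in for the rewarded "tan" color

def camera_frame():
    # stand-in for a real camera: 100 random pixels
    return [tuple(random.randrange(256) for _ in range(3)) for _ in range(100)]

def pleasure(frame):
    # the hard-wired drive: fraction of pixels that are exactly tan
    return sum(1 for px in frame if px == TAN) / len(frame)

print("looking at the world:", pleasure(camera_frame()))  # ~0.0
print("rewired video feed:  ", pleasure([TAN] * 100))     # 1.0, maximal reward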

If we have any external goals at all, the state of bliss you refer to
prevents us from achieving them. Knowing this, we do not want to enter
that state.

--Abram Demski

On Thu, Aug 28, 2008 at 9:18 AM, Mark Waser [EMAIL PROTECTED] wrote:
 No, the state of ultimate bliss that you, I, and all other rational, goal
 seeking agents seek

 Your second statement copied below not withstanding, I *don't* seek ultimate
 bliss.

 You may say that is not what you want, but only because you are unaware of
 the possibilities of reprogramming your brain. It is like being opposed to
 drugs or wireheading. Once you experience it, you can't resist.

 It is not what I want *NOW*.  It may be that once my brain has been altered
 by experiencing it, I may want it *THEN* but that has no relevance to what I
 want and seek now.

 These statements are just sloppy reasoning . . . .


 - Original Message - From: Matt Mahoney [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, August 27, 2008 11:05 PM
 Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was
 Re: [agi] The Necessity of Embodiment))


 Mark Waser [EMAIL PROTECTED] wrote:

 What if the utility of the state decreases the longer that you are in
 it
 (something that is *very* true of human

 beings)?
 If you are aware of the passage of time, then you are not staying in the
 same state.

 I have to laugh.  So you agree that all your arguments don't apply to
 anything that is aware of the passage of time?  That makes them really
 useful, doesn't it.

 No, the state of ultimate bliss that you, I, and all other rational, goal
 seeking agents seek is a mental state in which nothing perceptible happens.
 Without thought or sensation, you would be unaware of the passage of time,
 or of anything else. If you are aware of time then you are either not in
 this state yet, or are leaving it.

 You may say that is not what you want, but only because you are unaware of
 the possibilities of reprogramming your brain. It is like being opposed to
 drugs or wireheading. Once you experience it, you can't resist.

 -- Matt Mahoney, [EMAIL PROTECTED]


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com





 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Abram Demski
Matt,

Ok, you have me, I admit defeat.

I could only continue my argument if I could pin down what sorts of
facts need to be learned with high probability for RSI, and show
somehow that this set does not include unlearnable facts. Learnable
facts form a larger set than provable facts, since for example we can
probabilistically declare that a program never halts if we run it for
a while and it doesn't. But there are certain facts that are not even
probabilistically learnable, so until I can show that none of these
are absolutely essential to RSI, I concede.
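
To be concrete about the kind of probabilistic learning I mean, here is a
minimal toy sketch (my own invention, with an arbitrary time budget) of
declaring a program "probably non-halting" when it fails to finish in time:

import multiprocessing

def probably_halts(fn, timeout_seconds=5):
    # Run fn in a separate process; if it hasn't finished within the budget,
    # declare it "probably non-halting". This is evidence, not proof: the
    # verdict can be wrong, but confidence grows with the budget.
    p = multiprocessing.Process(target=fn)
    p.start()
    p.join(timeout_seconds)
    if p.is_alive():
        p.terminate()
        p.join()
        return False   # probably does not halt
    return True        # halted within the budget

def quick():           # halts immediately
    return 42

def looper():          # never halts
    while True:
        pass

if __name__ == "__main__":
    print(probably_halts(quick))    # True
    print(probably_halts(looper))   # False (a probabilistic verdict)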

--Abram Demski

On Wed, Aug 27, 2008 at 6:48 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Abram Demski [EMAIL PROTECTED] wrote:
First, I do not think it is terribly difficult to define a Goedel
machine that does not halt. It interacts with its environment, and
there is some utility value attached to this interaction, and it
attempts to rewrite its code to maximize this utility.

 It's not that the machine halts, but that it makes no further improvements 
 once the best solution is found. This might not be a practical concern if the 
 environment is very complex.

 However, I doubt that a Goedel machine could even be built. Legg showed [1] 
 that Goedel incompleteness is ubiquitous. To paraphrase, beyond some low 
 level of complexity, you can't prove anything. Perhaps this is the reason we 
 have not (AFAIK) built a software model, even for very simple sets of axioms.

 If we resort to probabilistic evidence of improvement rather than proofs, 
 then it is no longer a Goedel machine, and I think we would need experimental 
 verification of RSI. Random modifications of code are much more likely to be 
 harmful than helpful, so we would need to show that improvements could be 
 detected with a very low false positive rate.

 1. http://www.vetta.org/documents/IDSIA-12-06-1.pdf


  -- Matt Mahoney, [EMAIL PROTECTED]



 - Original Message 
 From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, August 27, 2008 11:40:24 AM
 Subject: Re: Goedel machines (was Re: Information theoretic approaches to AGI 
 (was Re: [agi] The Necessity of Embodiment))

 Matt,

 Thanks for the reply. There are 3 reasons that I can think of for
 calling Goedel machines bounded:

 1. As you assert, once a solution is found, it stops.
 2. It will be on a finite computer, so it will eventually reach the
 one best version of itself that it can reach.
 3. It can only make provably correct steps, which is very limiting
 thanks to Godel's incompleteness theorem.

 I'll try to argue that each of these limits can be overcome in
 principle, and we'll see if the result satisfies your RSI criteria.

 First, I do not think it is terribly difficult to define a Goedel
 machine that does not halt. It interacts with its environment, and
 there is some utility value attached to this interaction, and it
 attempts to rewrite its code to maximize this utility.

 The second and third need to be tackled together, because the main
 reason that a Goedel machine can't improve its own hardware is because
 there is uncertainty involved, so it would never be provably better.
 There is always some chance of hardware malfunction. So, I think it is
 necessary to grant the possibility of modifications that are merely
 very probably correct. Once this is done, 2 and 3 fall fairly easily,
 assuming that the machine begins life with a good probabilistic
 learning system. That is a big assumption, but we can grant it for the
 moment I think?

 For the sake of concreteness, let's say that the utility value is some
 (probably very complex) attempt to logically describe Eliezer-style
 Friendliness, and that the probabilistic learning system is an
 approximation of AIXI (which the system will improve over time along
 with everything else). (These two choices don't reflect my personal
 tastes, they are just examples.)

 By tweaking the allowances the system makes, we might either have a
 slow self-improver that is, say, 99.999% probable to only improve
 itself in the next 100 years, or a faster self-improver that is 50%
 guaranteed.

 Does this satisfy your criteria?

 On Wed, Aug 27, 2008 at 9:14 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Abram Demski [EMAIL PROTECTED] wrote:
Matt,

What is your opinion on Goedel machines?

 http://www.idsia.ch/~juergen/goedelmachine.html


 Thanks for the link. If I understand correctly, this is a form of bounded 
 RSI, so it could not lead to a singularity. A Goedel machine is functionally 
 equivalent to AIXI^tl in that it finds the optimal reinforcement learning 
 solution given a fixed environment and utility function. The difference is 
 that AIXI^tl does a brute force search of all machines up to length l for 
 time t each, so it run in O(t 2^l) time. A Goedel machine achieves the same 
 result more efficiently through a series of self improvments by proving that 
 each proposed modification (including modifications to its own proof search 
 code) is 

Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Abram Demski
PS-- I have thought of a weak argument:

If a fact is not probabilistically learnable, then it is hard to see
how it has much significance for an AI design. A non-learnable fact
won't reliably change the performance of the AI, since if it did, it
would be learnable. Furthermore, even if there were *some* important
nonlearnable facts, it still seems like significant self-improvements
could be made using only probabilistically learned facts. And, since
the amount of time spent testing cases is a huge factor, RSI will not
stop except due to limited memory, even in a relatively boring
environment (unless the AI makes a rational decision to stop using
resources on RSI since it has found a solution that is probably
optimal).
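
For concreteness, here is a toy sketch of the "very low false positive rate"
requirement Matt raised (the threshold and task counts are invented): accept a
candidate self-modification only when a one-sided sign test over head-to-head
benchmark comparisons says a no-better candidate would almost never look this
good.

import math

def accept_modification(wins, n_tasks, max_false_positive=1e-6):
    # Sign test: accept the candidate only if it beat the current version on
    # enough of n_tasks benchmarks that a no-better candidate would do this
    # well with probability < max_false_positive.
    p_value = sum(math.comb(n_tasks, k)
                  for k in range(wins, n_tasks + 1)) / 2 ** n_tasks
    return p_value < max_false_positive

print(accept_modification(55, 60))   # True: 55 wins out of 60 is overwhelming
print(accept_modification(35, 60))   # False: could easily be luck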

On Thu, Aug 28, 2008 at 11:25 AM, Abram Demski [EMAIL PROTECTED] wrote:
 Matt,

 Ok, you have me, I admit defeat.

 I could only continue my argument if I could pin down what sorts of
 facts need to be learned with high probability for RSI, and show
 somehow that this set does not include unlearnable facts. Learnable
 facts form a larger set than provable facts, since for example we can
 probabilistically declare that a program never halts if we run it for
 a while and it doesn't. But there are certain facts that are not even
 probabilistically learnable, so until I can show that none of these
 are absolutely essential to RSI, I concede.

 --Abram Demski

 On Wed, Aug 27, 2008 at 6:48 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Abram Demski [EMAIL PROTECTED] wrote:
First, I do not think it is terribly difficult to define a Goedel
machine that does not halt. It interacts with its environment, and
there is some utility value attached to this interaction, and it
attempts to rewrite its code to maximize this utility.

 It's not that the machine halts, but that it makes no further improvements 
 once the best solution is found. This might not be a practical concern if 
 the environment is very complex.

 However, I doubt that a Goedel machine could even be built. Legg showed [1] 
 that Goedel incompleteness is ubiquitous. To paraphrase, beyond some low 
 level of complexity, you can't prove anything. Perhaps this is the reason we 
 have not (AFAIK) built a software model, even for very simple sets of axioms.

 If we resort to probabilistic evidence of improvement rather than proofs, 
 then it is no longer a Goedel machine, and I think we would need 
 experimental verification of RSI. Random modifications of code are much more 
 likely to be harmful than helpful, so we would need to show that 
 improvements could be detected with a very low false positive rate.

 1. http://www.vetta.org/documents/IDSIA-12-06-1.pdf


  -- Matt Mahoney, [EMAIL PROTECTED]



 - Original Message 
 From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, August 27, 2008 11:40:24 AM
 Subject: Re: Goedel machines (was Re: Information theoretic approaches to 
 AGI (was Re: [agi] The Necessity of Embodiment))

 Matt,

 Thanks for the reply. There are 3 reasons that I can think of for
 calling Goedel machines bounded:

 1. As you assert, once a solution is found, it stops.
 2. It will be on a finite computer, so it will eventually reach the
 one best version of itself that it can reach.
 3. It can only make provably correct steps, which is very limiting
 thanks to Godel's incompleteness theorem.

 I'll try to argue that each of these limits can be overcome in
 principle, and we'll see if the result satisfies your RSI criteria.

 First, I do not think it is terribly difficult to define a Goedel
 machine that does not halt. It interacts with its environment, and
 there is some utility value attached to this interaction, and it
 attempts to rewrite its code to maximize this utility.

 The second and third need to be tackled together, because the main
 reason that a Goedel machine can't improve its own hardware is because
 there is uncertainty involved, so it would never be provably better.
 There is always some chance of hardware malfunction. So, I think it is
 necessary to grant the possibility of modifications that are merely
 very probably correct. Once this is done, 2 and 3 fall fairly easily,
 assuming that the machine begins life with a good probabilistic
 learning system. That is a big assumption, but we can grant it for the
 moment I think?

 For the sake of concreteness, let's say that the utility value is some
 (probably very complex) attempt to logically describe Eliezer-style
 Friendliness, and that the probabilistic learning system is an
 approximation of AIXI (which the system will improve over time along
 with everything else). (These two choices don't reflect my personal
 tastes, they are just examples.)

 By tweaking the allowances the system makes, we might either have a
 slow self-improver that is, say, 99.999% probable to only improve
 itself in the next 100 years, or a faster self-improver that is 50%
 guaranteed.

 Does this satisfy your criteria?

 On Wed, Aug 27, 2008 at 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser

Also, I should mention that the whole construction becomes irrelevant
if we can logically describe the goal ahead of time. With the make
humans happy example, something like my construction would be useful
if we need to AI to *learn* what a human is and what happy is. (We
then set up the pleasure in a way that would help the AI attach
goodness to the right things.) If we are able to write out the
definitions ahead of time, we can directly specify what goodness is
instead. But, I think it is unrealistic to take that approach, since
the definitions would be large and difficult


:-)  I strongly disagree with you.  Why do you believe that having a new AI 
learn large and difficult definitions is going to be easier and safer than 
specifying them (assuming that the specifications can be grounded in the 
AI's terms)?


I also disagree that the definitions are going to be as large as people 
believe them to be . . . .


Let's take the Mandelbrot set as an example.  It is perfectly specified by 
one *very* small formula.  Yet, if you don't know that formula, you could 
spend many lifetimes characterizing it (particularly if you're trying to 
do it from multiple blurred and shifted images :-).
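
(For anyone who hasn't seen it, the formula is just the iteration z -> z^2 + c
starting from z = 0; a point c belongs to the set if the iterates never escape.
A minimal membership test, with an arbitrary iteration budget:)

def in_mandelbrot(c, max_iter=100):
    # c is (approximately) in the set if |z| never exceeds 2 within the budget
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(complex(0, 0)))   # True  (deep inside the set)
print(in_mandelbrot(complex(1, 1)))   # False (escapes after two iterations)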


The true problem is that humans can't (yet) agree on what goodness is -- and 
then they get lost arguing over detailed cases instead of focusing on the 
core.


Defining the core of goodness/morality and developing a system to 
determine which actions are good and which are not is a project that 
I've been working on for quite some time, and I *think* I'm making rather 
good headway.



- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 9:57 AM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Hi mark,

I think the miscommunication is relatively simple...

On Wed, Aug 27, 2008 at 10:14 PM, Mark Waser [EMAIL PROTECTED] wrote:

Hi,

  I think that I'm missing some of your points . . . .


Whatever good is, it cannot be something directly
observable, or the AI will just wirehead itself (assuming it gets
intelligent enough to do so, of course).


I don't understand this unless you mean by "directly observable" that the
definition is observable and changeable.  If I define good as making all
humans happy without modifying them, how would the AI wirehead itself?
What am I missing here?


When I say "directly observable," I mean observable-by-sensation.
"Making all humans happy" could not be directly observed unless the AI
had sensors in the pleasure centers of all humans (in which case it
would want to wirehead us). "Without modifying them" couldn't be
directly observed even then. So, realistically, such a goal needs to
be inferred from sensory data.

Also, I should mention that the whole construction becomes irrelevant
if we can logically describe the goal ahead of time. With the make
humans happy example, something like my construction would be useful
if we need to AI to *learn* what a human is and what happy is. (We
then set up the pleasure in a way that would help the AI attach
goodness to the right things.) If we are able to write out the
definitions ahead of time, we can directly specify what goodness is
instead. But, I think it is unrealistic to take that approach, since
the definitions would be large and difficult




So, the AI needs to have a concept of external goodness, with a weak
probabilistic correlation to its directly observable pleasure.


I agree with the concept of external goodness, but why does the correlation
between external goodness and its pleasure have to be low?  Why can't
external goodness directly cause pleasure?  Clearly, it shouldn't believe
that its pleasure causes external goodness (that would be reversing cause
and effect and an obvious logic error).


The correlation needs to be fairly low to allow the concept of good to
eventually split off of the concept of pleasure in the AI mind. The
external goodness can't directly cause pleasure because it isn't
directly detectable. Detection of goodness *through* inference *could*
be taken to cause pleasure; but this wouldn't be much use, because the
AI is already supposed to be maximizing goodness, not pleasure.
Pleasure merely plays the role of offering hints about what things
in the world might be good.

Actually, I think the proper probabilistic construction might be a bit
different than simply a weak correlation... for one thing, the
probability that goodness causes pleasure shouldn't be set ahead of
time. I'm thinking that likelihood would be more appropriate than
probability... so that it is as if the AI is born with some evidence
for the correlation that it cannot remember, but uses in reasoning (if
you are familiar with the idea of virtual evidence that is what I am
talking about).
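
To illustrate what I mean by virtual evidence (all numbers invented): instead
of conditioning on a hard observation, the pleasure signal just contributes a
fixed likelihood ratio to the belief that the observed thing is good, so later
evidence can swamp it.

def virtual_evidence_update(prior_good, likelihood_ratio):
    # Bayesian update with virtual evidence: the signal contributes
    # L = P(signal | good) / P(signal | not good) rather than a certainty.
    odds = (prior_good / (1 - prior_good)) * likelihood_ratio
    return odds / (1 + odds)

# A weak built-in correlation only nudges the belief that a thing is good.
print(virtual_evidence_update(prior_good=0.50, likelihood_ratio=1.5))  # ~0.60
print(virtual_evidence_update(prior_good=0.10, likelihood_ratio=1.5))  # ~0.14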



  Mark

P.S.  I notice that several others answered your wirehead query so I 
won't


Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Abram Demski
Mark,

Actually I am sympathetic with this idea. I do think good can be
defined. And, I think it can be a simple definition. However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds. So,
better to put such ideas in only as probabilistic correlations (or
virtual evidence), and let the system change its beliefs based on
accumulated evidence. I do not think this is overly risky, because
whatever the system comes to believe, its high-level goal will tend to
create normalizing subgoals that will regularize its behavior.

I'll stick to my point about defining "make humans happy" being hard,
though. Especially with the restriction "without modifying them" that
you used.

On Thu, Aug 28, 2008 at 12:38 PM, Mark Waser [EMAIL PROTECTED] wrote:
 Also, I should mention that the whole construction becomes irrelevant
 if we can logically describe the goal ahead of time. With the make
 humans happy example, something like my construction would be useful
 if we need to AI to *learn* what a human is and what happy is. (We
 then set up the pleasure in a way that would help the AI attach
 goodness to the right things.) If we are able to write out the
 definitions ahead of time, we can directly specify what goodness is
 instead. But, I think it is unrealistic to take that approach, since
 the definitions would be large and difficult

 :-)  I strongly disagree with you.  Why do you believe that having a new AI
 learn large and difficult definitions is going to be easier and safer than
 specifying them (assuming that the specifications can be grounded in the
 AI's terms)?

 I also disagree that the definitions are going to be as large as people
 believe them to be . . . .

 Let's take the Mandelbroit set as an example.  It is perfectly specified by
 one *very* small formula.  Yet, if you don't know that formula, you could
 spend many lifetimes characterizing it (particularly if you're trying to
 doing it from multiple blurred and shifted  images :-).

 The true problem is that humans can't (yet) agree on what goodness is -- and
 then they get lost arguing over detailed cases instead of focusing on the
 core.

 The core definition of goodness/morality and developing a system to
 determine what actions are good and what actions are not is a project that
 I've been working on for quite some time and I *think* I'm making rather
 good headway.


 - Original Message - From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Thursday, August 28, 2008 9:57 AM
 Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
 AGI (was Re: [agi] The Necessity of Embodiment))


 Hi mark,

 I think the miscommunication is relatively simple...

 On Wed, Aug 27, 2008 at 10:14 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Hi,

  I think that I'm missing some of your points . . . .

 Whatever good is, it cannot be something directly
 observable, or the AI will just wirehead itself (assuming it gets
 intelligent enough to do so, of course).

 I don't understand this unless you mean by directly observable that the
 definition is observable and changeable.  If I define good as making all
 humans happy without modifying them, how would the AI wirehead itself?
 What
 am I missing here?

 When I say directly observable, I mean observable-by-sensation.
 Making all humans happy could not be directly observed unless the AI
 had sensors in the pleasure centers of all humans (in which case it
 would want to wirehead us). Without modifying them couldn't be
 directly observed even then. So, realistically, such a goal needs to
 be inferred from sensory data.

 Also, I should mention that the whole construction becomes irrelevant
 if we can logically describe the goal ahead of time. With the make
 humans happy example, something like my construction would be useful
 if we need to AI to *learn* what a human is and what happy is. (We
 then set up the pleasure in a way that would help the AI attach
 goodness to the right things.) If we are able to write out the
 definitions ahead of time, we can directly specify what goodness is
 instead. But, I think it is unrealistic to take that approach, since
 the definitions would be large and difficult


 So, the AI needs to have a concept of external goodness, with a weak
 probabilistic correlation to its directly observable pleasure.

 I agree with the concept of external goodness but why does the
 correlation
 between external goodness and it's pleasure have to be low?  Why can't
 external goodness directly cause pleasure?  Clearly, it shouldn't believe
 that it's pleasure causes external goodness (that would be reversing
 cause
 and effect and an obvious logic error).

 The correlation needs to be fairly low to allow the concept of good to
 eventually split off of the concept of pleasure in the AI mind. The
 external goodness can't directly cause pleasure because it isn't
 directly detectable. Detection 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser

However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds.


Why not wait until a theory is derived before making this decision?

Wouldn't such a theory be a good starting point, at least?


better to put such ideas in only as probabilistic correlations (or
virtual evidence), and let the system change its beliefs based on
accumulated evidence. I do not think this is overly risky, because
whatever the system comes to believe, its high-level goal will tend to
create normalizing subgoals that will regularize its behavior.


You're getting into implementation here but I will make a couple of personal 
belief statements:


1.  Probabilistic correlations are much, *much* more problematical than most 
people are even willing to think about.  They work well with very simple 
examples but they do not scale well at all.  Particularly problematic for 
such correlations is the fact that ethical concepts are generally made up of 
*many* interwoven parts and are very fuzzy.  The church of Bayes does not 
cut it for any work where the language/terms/concepts are not perfectly 
crisp, clear, and logically correct.
2.  Statements like "its high-level goal will tend to create normalizing 
subgoals that will regularize its behavior" sweep *a lot* of detail under 
the rug.  It's possible that it is true.  I think that it is much more 
probable that it is very frequently not true.  Unless you do *a lot* of 
specification, I'm afraid that expecting this to be true is *very* risky.



I'll stick to my point about defining make humans happy being hard,
though. Especially with the restriction without modifying them that
you used.


I think that defining "make humans happy" is impossible -- but that's OK 
because I think that it's a really bad goal to try to implement.


All I need to do is to define "learn," "harm," and "help."  Help could be defined 
as anything which is agreed to with informed consent by the affected subject 
both before and after the fact.  Yes, that doesn't cover all actions, but 
that just means that the AI doesn't necessarily have a strong inclination 
towards those actions.  Harm could be defined as anything which is disagreed 
with (or is expected to be disagreed with) by the affected subject either 
before or after the fact.  Friendliness then turns into something like 
asking permission.  Yes, the Friendly entity won't save you in many 
circumstances, but it's not likely to kill you either.
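
A toy sketch of the "asking permission" rule (the names and the consent
bookkeeping are mine and purely illustrative; real consent would of course
require hard inference rather than lookups):

from dataclasses import dataclass, field

@dataclass
class Subject:
    # toy stand-in for an affected person: recorded judgments per action
    approves_before: set = field(default_factory=set)
    approves_after: set = field(default_factory=set)
    objects_to: set = field(default_factory=set)

def is_help(action, subject):
    # informed consent both before and after the fact
    return action in subject.approves_before and action in subject.approves_after

def is_harm(action, subject):
    # disagreement (actual or expected) before or after the fact
    return action in subject.objects_to

def permitted(action, subjects):
    # Friendliness as asking permission: never act when any subject is harmed
    return not any(is_harm(action, s) for s in subjects)

alice = Subject(approves_before={"cure_disease"}, approves_after={"cure_disease"},
                objects_to={"rewire_brain"})
print(permitted("cure_disease", [alice]))   # True
print(permitted("rewire_brain", [alice]))   # False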


Of course, I could also come up with the counter-argument to my own 
thesis: that the AI will never do anything, because there will always be 
someone who objects to the AI doing *anything* to change the world -- but 
that's just the sort of absurd, self-defeating argument that I expect from 
many of the list denizens, which can't be defended against except by 
allocating far more time than it's worth.




- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 1:59 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Mark,

Actually I am sympathetic with this idea. I do think good can be
defined. And, I think it can be a simple definition. However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds. So,
better to put such ideas in only as probabilistic correlations (or
virtual evidence), and let the system change its beliefs based on
accumulated evidence. I do not think this is overly risky, because
whatever the system comes to believe, its high-level goal will tend to
create normalizing subgoals that will regularize its behavior.

I'll stick to my point about defining make humans happy being hard,
though. Especially with the restriction without modifying them that
you used.

On Thu, Aug 28, 2008 at 12:38 PM, Mark Waser [EMAIL PROTECTED] wrote:

Also, I should mention that the whole construction becomes irrelevant
if we can logically describe the goal ahead of time. With the make
humans happy example, something like my construction would be useful
if we need to AI to *learn* what a human is and what happy is. (We
then set up the pleasure in a way that would help the AI attach
goodness to the right things.) If we are able to write out the
definitions ahead of time, we can directly specify what goodness is
instead. But, I think it is unrealistic to take that approach, since
the definitions would be large and difficult


:-)  I strongly disagree with you.  Why do you believe that having a new 
AI
learn large and difficult definitions is going to be easier and safer 
than

specifying them (assuming that the specifications can be grounded in the
AI's terms)?

I also disagree that the definitions are going to be as large as people
believe them to be . . . .

Let's take the Mandelbroit set as an example.  It is perfectly 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Abram Demski
Mark,

I still think your definitions sound difficult to implement,
although not nearly as hard as "make humans happy without modifying
them." How would you define consent? You'd need a definition of
"decision-making entity," right?

Personally, if I were to take the approach of a preprogrammed ethics,
I would define good in pseudo-evolutionary terms: a pattern/entity is
good if it has high survival value in the long term. Patterns that are
self-sustaining on their own are thus considered good, but patterns
that help sustain other patterns would be too, because they are a
high-utility part of a larger whole.

Actually, that idea is what made me assert that any goal produces
normalizing subgoals. Survivability helps achieve any goal, as long as
it isn't a time-bounded goal (finishing a set task).

--Abram

On Thu, Aug 28, 2008 at 2:52 PM, Mark Waser [EMAIL PROTECTED] wrote:
 However, it
 doesn't seem right to me to preprogram an AGI with a set ethical
 theory; the theory could be wrong, no matter how good it sounds.

 Why not wait until a theory is derived before making this decision?

 Wouldn't such a theory be a good starting point, at least?

 better to put such ideas in only as probabilistic correlations (or
 virtual evidence), and let the system change its beliefs based on
 accumulated evidence. I do not think this is overly risky, because
 whatever the system comes to believe, its high-level goal will tend to
 create normalizing subgoals that will regularize its behavior.

 You're getting into implementation here but I will make a couple of personal
 belief statements:

 1.  Probabilistic correlations are much, *much* more problematical than most
 people are event willing to think about.  They work well with very simple
 examples but they do not scale well at all.  Particularly problematic for
 such correlations is the fact that ethical concepts are generally made up
 *many* interwoven parts and are very fuzzy.  The church of Bayes does not
 cut it for any work where the language/terms/concepts are not perfectly
 crisp, clear, and logically correct.
 2.  Statements like its high-level goal will tend to create normalizing
 subgoals that will regularize its behavior sweep *a lot* of detail under
 the rug.  It's possible that it is true.  I think that it is much more
 probable that it is very frequently not true.  Unless you do *a lot* of
 specification, I'm afraid that expecting this to be true is *very* risky.

 I'll stick to my point about defining make humans happy being hard,
 though. Especially with the restriction without modifying them that
 you used.

 I think that defining make humans happy is impossible -- but that's OK
 because I think that it's a really bad goal to try to implement.

 All I need to do is to define learn, harm, and help.  Help could be defined
 as anything which is agreed to with informed consent by the affected subject
 both before and after the fact.  Yes, that doesn't cover all actions but
 that just means that the AI doesn't necessarily have a strong inclination
 towards those actions.  Harm could be defined as anything which is disagreed
 with (or is expected to be disagreed with) by the affected subject either
 before or after the fact.  Friendliness then turns into something like
 asking permission.  Yes, the Friendly entity won't save you in many
 circumstances, but it's not likely to kill you either.

  Of course, I could also come up with the counter-argument to my own
 thesis that the AI will never do anything because there will always be
 someone who objects to the AI doing *anything* to change the world.-- but
 that's just the absurdity and self-defeating arguments that I expect from
 many of the list denizens that can't be defended against except by
 allocating far more time than it's worth.



 - Original Message - From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Thursday, August 28, 2008 1:59 PM
 Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
 AGI (was Re: [agi] The Necessity of Embodiment))


 Mark,

 Actually I am sympathetic with this idea. I do think good can be
 defined. And, I think it can be a simple definition. However, it
 doesn't seem right to me to preprogram an AGI with a set ethical
 theory; the theory could be wrong, no matter how good it sounds. So,
 better to put such ideas in only as probabilistic correlations (or
 virtual evidence), and let the system change its beliefs based on
 accumulated evidence. I do not think this is overly risky, because
 whatever the system comes to believe, its high-level goal will tend to
 create normalizing subgoals that will regularize its behavior.

 I'll stick to my point about defining make humans happy being hard,
 though. Especially with the restriction without modifying them that
 you used.

 On Thu, Aug 28, 2008 at 12:38 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Also, I should mention that the whole construction becomes irrelevant
 if we can 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser

Personally, if I were to take the approach of a preprogrammed ethics,
I would define good in pseudo-evolutionary terms: a pattern/entity is
good if it has high survival value in the long term. Patterns that are
self-sustaining on their own are thus considered good, but patterns
that help sustain other patterns would be too, because they are a
high-utility part of a larger whole.


Actually, I *do* define good and ethics not only in evolutionary terms but 
as being driven by evolution.  Unlike most people, I believe that ethics is 
*entirely* driven by what is best evolutionarily, while not believing at all 
in "red in tooth and claw."  I can give you a reading list showing that 
the latter view is horribly outdated among people who keep up with the 
research rather than just rehashing tired old ideas.



Actually, that idea is what made me assert that any goal produces
normalizing subgoals. Survivability helps achieve any goal, as long as
it isn't a time-bounded goal (finishing a set task).


Ah, I'm starting to get an idea of what you mean by normalizing subgoals 
. . . .   Yes, absolutely -- except that I contend that there is exactly one 
normalizing subgoal (though some might phrase it as two) that is normally 
common to virtually every goal (except in very extreme/unusual 
circumstances).



- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 4:04 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Mark,

I still think your definitions still sound difficult to implement,
although not nearly as hard as make humans happy without modifying
them. How would you define consent? You'd need a definition of
decision-making entity, right?

Personally, if I were to take the approach of a preprogrammed ethics,
I would define good in pseudo-evolutionary terms: a pattern/entity is
good if it has high survival value in the long term. Patterns that are
self-sustaining on their own are thus considered good, but patterns
that help sustain other patterns would be too, because they are a
high-utility part of a larger whole.

Actually, that idea is what made me assert that any goal produces
normalizing subgoals. Survivability helps achieve any goal, as long as
it isn't a time-bounded goal (finishing a set task).

--Abram

On Thu, Aug 28, 2008 at 2:52 PM, Mark Waser [EMAIL PROTECTED] wrote:

However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds.


Why not wait until a theory is derived before making this decision?

Wouldn't such a theory be a good starting point, at least?


better to put such ideas in only as probabilistic correlations (or
virtual evidence), and let the system change its beliefs based on
accumulated evidence. I do not think this is overly risky, because
whatever the system comes to believe, its high-level goal will tend to
create normalizing subgoals that will regularize its behavior.


You're getting into implementation here but I will make a couple of 
personal

belief statements:

1.  Probabilistic correlations are much, *much* more problematical than 
most

people are event willing to think about.  They work well with very simple
examples but they do not scale well at all.  Particularly problematic for
such correlations is the fact that ethical concepts are generally made up
*many* interwoven parts and are very fuzzy.  The church of Bayes does not
cut it for any work where the language/terms/concepts are not perfectly
crisp, clear, and logically correct.
2.  Statements like its high-level goal will tend to create normalizing
subgoals that will regularize its behavior sweep *a lot* of detail under
the rug.  It's possible that it is true.  I think that it is much more
probable that it is very frequently not true.  Unless you do *a lot* of
specification, I'm afraid that expecting this to be true is *very* risky.


I'll stick to my point about defining make humans happy being hard,
though. Especially with the restriction without modifying them that
you used.


I think that defining make humans happy is impossible -- but that's OK
because I think that it's a really bad goal to try to implement.

All I need to do is to define learn, harm, and help.  Help could be 
defined
as anything which is agreed to with informed consent by the affected 
subject

both before and after the fact.  Yes, that doesn't cover all actions but
that just means that the AI doesn't necessarily have a strong inclination
towards those actions.  Harm could be defined as anything which is 
disagreed

with (or is expected to be disagreed with) by the affected subject either
before or after the fact.  Friendliness then turns into something like
asking permission.  Yes, the Friendly entity won't save you in many
circumstances, but it's not likely to kill you either.

 Of course, I could also come up 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Terren Suydam

--- On Thu, 8/28/08, Mark Waser [EMAIL PROTECTED] wrote:
 Actually, I *do* define good and ethics not only in
 evolutionary terms but 
 as being driven by evolution.  Unlike most people, I
 believe that ethics is 
 *entirely* driven by what is best evolutionarily while not
 believing at all 
 in red in tooth and claw.  I can give you a
 reading list that shows that 
 the latter view is horribly outdated among people who keep
 up with the 
 research rather than just rehashing tired old ideas.

I think it's a stretch to derive ethical ideas from what you refer to as "best 
evolutionarily."  Parasites are pretty freaking successful, from an 
evolutionary point of view, but nobody would say parasitism is ethical.

Terren


  


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Matt Mahoney
Valentina Poletti [EMAIL PROTECTED] wrote:
 Got ya, thanks for the clarification. That brings up another question. Why do 
 we want to make an AGI?

I'm glad somebody is finally asking the right question, instead of skipping 
over the specification to the design phase. It would avoid a lot of 
philosophical discussions that result from people having different ideas of 
what AGI should do.

AGI could replace all human labor, worth about US $2 to $5 quadrillion over the 
next 30 years. We should expect the cost to be of this magnitude, given that 
having it sooner is better than waiting.

I think AGI will be immensely complex, on the order of 10^18 bits, 
decentralized, competitive, with distributed ownership, like today's internet 
but smarter. It will converse with you fluently but know too much to pass the 
Turing test. We will be totally dependent on it.
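
The arithmetic behind that range is crude: take world GDP (very roughly US $66
trillion per year) times 30 years, which is about $2 quadrillion, and allow for
economic growth to push the figure toward $5 quadrillion. A sketch with those
assumptions spelled out (and treating all output as labor value, a deliberate
simplification):

world_gdp_per_year = 66e12          # ~US $66 trillion/year (rough estimate)
years = 30
growth_low, growth_high = 1.0, 2.5  # flat economy vs. an assumed growth factor

low = world_gdp_per_year * years * growth_low
high = world_gdp_per_year * years * growth_high
print(f"~${low/1e15:.1f} to ${high/1e15:.1f} quadrillion")  # ~$2.0 to $5.0 quadrillion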

-- Matt Mahoney, [EMAIL PROTECTED]


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Matt Mahoney
Nobody wants to enter a mental state where thinking and awareness are 
unpleasant, at least when I describe it that way. My point is that having 
everything you want is not the utopia that many people think it is. But it is 
where we are headed.

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 9:18:05 AM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))

 No, the state of ultimate bliss that you, I, and all other rational, goal 
 seeking agents seek

Your second statement copied below not withstanding, I *don't* seek ultimate 
bliss.

 You may say that is not what you want, but only because you are unaware of 
 the possibilities of reprogramming your brain. It is like being opposed to 
 drugs or wireheading. Once you experience it, you can't resist.

It is not what I want *NOW*.  It may be that once my brain has been altered 
by experiencing it, I may want it *THEN* but that has no relevance to what I 
want and seek now.

These statements are just sloppy reasoning . . . .


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Matt Mahoney
I'm not trying to win any arguments, but I am trying to solve the problem of 
whether RSI is possible at all. It is an important question because it 
profoundly affects the path that a singularity would take, and what precautions 
we need to design into AGI. Without RSI, then a singularity has to be a (very 
fast) evolutionary process in which agents compete for computing resources. In 
this scenario, friendliness is stable only to the extent that it contributes to 
fitness and fails when the AGI no longer needs us.

If RSI is possible, then there is the additional threat of a fast takeoff of 
the kind described by Good and Vinge (and step 5 of the OpenCog roadmap). 
Friendliness and ethics are algorithmically complex functions that have to be 
hard coded into the first self-improving agent, and I have little confidence 
that this will happen. An unfriendly agent is much easier to build, so is 
likely to be built first.

I looked at Legg's paper again, and I don't believe it rules out Goedel 
machines. Legg first proved that any program that predicts all infinite 
sequences up to Kolmogorov complexity n must also have complexity n, and then 
proved that, except for very small n, such predictors cannot be proven to 
work. This is a different context than a Goedel machine, which only has to 
learn a specific environment, not a set of environments. I don't know if Legg's 
proof would apply to RSI sequences of increasingly complex environments.

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 11:42:10 AM
Subject: Re: Goedel machines (was Re: Information theoretic approaches to AGI 
(was Re: [agi] The Necessity of Embodiment))

PS-- I have thought of a weak argument:

If a fact is not probabilistically learnable, then it is hard to see
how it has much significance for an AI design. A non-learnable fact
won't reliably change the performance of the AI, since if it did, it
would be learnable. Furthermore, even if there were *some* important
nonlearnable facts, it still seems like significant self-improvements
could be made using only probabilistically learned facts. And, since
the amount of time spent testing cases is a huge factor, RSI will not
stop except due to limited memory, even in a relatively boring
environment (unless the AI makes a rational decision to stop using
resources on RSI since it has found a solution that is probably
optimal).

On Thu, Aug 28, 2008 at 11:25 AM, Abram Demski [EMAIL PROTECTED] wrote:
 Matt,

 Ok, you have me, I admit defeat.

 I could only continue my argument if I could pin down what sorts of
 facts need to be learned with high probability for RSI, and show
 somehow that this set does not include unlearnable facts. Learnable
 facts form a larger set than provable facts, since for example we can
 probabilistically declare that a program never halts if we run it for
 a while and it doesn't. But there are certain facts that are not even
 probabilistically learnable, so until I can show that none of these
 are absolutely essential to RSI, I concede.

 --Abram Demski

 On Wed, Aug 27, 2008 at 6:48 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Abram Demski [EMAIL PROTECTED] wrote:
First, I do not think it is terribly difficult to define a Goedel
machine that does not halt. It interacts with its environment, and
there is some utility value attached to this interaction, and it
attempts to rewrite its code to maximize this utility.

 It's not that the machine halts, but that it makes no further improvements 
 once the best solution is found. This might not be a practical concern if 
 the environment is very complex.

 However, I doubt that a Goedel machine could even be built. Legg showed [1] 
 that Goedel incompleteness is ubiquitous. To paraphrase, beyond some low 
 level of complexity, you can't prove anything. Perhaps this is the reason we 
 have not (AFAIK) built a software model, even for very simple sets of axioms.

 If we resort to probabilistic evidence of improvement rather than proofs, 
 then it is no longer a Goedel machine, and I think we would need 
 experimental verification of RSI. Random modifications of code are much more 
 likely to be harmful than helpful, so we would need to show that 
 improvements could be detected with a very low false positive rate.

 1. http://www.vetta.org/documents/IDSIA-12-06-1.pdf


  -- Matt Mahoney, [EMAIL PROTECTED]



 - Original Message 
 From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, August 27, 2008 11:40:24 AM
 Subject: Re: Goedel machines (was Re: Information theoretic approaches to 
 AGI (was Re: [agi] The Necessity of Embodiment))

 Matt,

 Thanks for the reply. There are 3 reasons that I can think of for
 calling Goedel machines bounded:

 1. As you assert, once a solution is found, it stops.
 2. It will be on a finite computer, so it will eventually reach the
 one best version of 

Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mike Tintner
Matt:If RSI is possible, then there is the additional threat of a fast 
takeoff of the kind described by Good and Vinge


Can we have an example of just one or two subject areas or domains where a 
takeoff has been considered (by anyone)  as possibly occurring, and what 
form such a takeoff might take? I hope the discussion of RSI is not entirely 
one of airy generalities, without any grounding in reality. 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-28 Thread Matt Mahoney
Here is Vernor Vinge's original essay on the singularity.
http://mindstalk.net/vinge/vinge-sing.html

 
The premise is that if humans can create agents with above-human intelligence, 
then those agents can do the same. What I am questioning is whether agents at any 
intelligence level can do this. I don't believe that agents at any level can recognize 
higher intelligence, and therefore they cannot test their creations. We rely on 
competition in an external environment to make fitness decisions. The parent 
isn't intelligent enough to make the correct choice.

-- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 7:00:07 PM
Subject: Re: Goedel machines (was Re: Information theoretic approaches to AGI 
(was Re: [agi] The Necessity of Embodiment))

Matt:If RSI is possible, then there is the additional threat of a fast 
takeoff of the kind described by Good and Vinge

Can we have an example of just one or two subject areas or domains where a 
takeoff has been considered (by anyone)  as possibly occurring, and what 
form such a takeoff might take? I hope the discussion of RSI is not entirely 
one of airy generalities, without any grounding in reality. 


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-28 Thread Mike Tintner

Thanks. But like I said, airy generalities.

That machines can become faster and faster at computations and accumulating 
knowledge is certain. But that's narrow AI.


For general intelligence, you have to be able first to integrate as well as 
accumulate knowledge.  We have learned vast amounts about the brain in the 
last few years, for example - perhaps more than in previous history. But 
this hasn't led to any kind of comparably fast advances in integrating that 
knowledge.


You also have to be able, second, to discover knowledge - be creative - and fill 
in some of the many gaping holes in every domain of knowledge. That, again, 
doesn't march to a mathematical formula.


Hence, I suggest, you don't see any glimmers of RSI in any actual domain of 
human knowledge. If it were possible at all, you should see some signs, 
however small.


The whole idea of RSI strikes me as high-school naive - completely lacking 
in any awareness of the creative, systemic structure of how knowledge and 
technology actually advance in different domains.


Another example: try to recursively improve the car - like every piece of 
technology, it's not a solitary thing but is bound up in vast technological 
ecosystems (here: roads, oil, gas stations, etc.) that cannot be improved 
in a simple, linear fashion.


Similarly, I suspect each individual's mind/intelligence depends on complex 
interdependent systems and paradigms of knowledge. And so of necessity would 
any AGI's mind. (Not that mind is possible without a body).





Matt: Here is Vernor Vinge's original essay on the singularity.

http://mindstalk.net/vinge/vinge-sing.html


The premise is that if humans can create agents with above human 
intelligence, then so can they. What I am questioning is whether agents at 
any intelligence level can do this. I don't believe that agents at any 
level can recognize higher intelligence, and therefore cannot test their 
creations. We rely on competition in an external environment to make 
fitness decisions. The parent isn't intelligent enough to make the correct 
choice.


-- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 7:00:07 PM
Subject: Re: Goedel machines (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))


Matt:If RSI is possible, then there is the additional threat of a fast
takeoff of the kind described by Good and Vinge

Can we have an example of just one or two subject areas or domains where a
takeoff has been considered (by anyone)  as possibly occurring, and what
form such a takeoff might take? I hope the discussion of RSI is not 
entirely

one of airy generalities, without any grounding in reality.










Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-28 Thread j.k.

On 08/28/2008 04:47 PM, Matt Mahoney wrote:

The premise is that if humans can create agents with above human intelligence, 
then so can they. What I am questioning is whether agents at any intelligence 
level can do this. I don't believe that agents at any level can recognize 
higher intelligence, and therefore cannot test their creations.


The premise is not necessary to arrive at greater than human 
intelligence. If a human can create an agent of equal intelligence, it 
will rapidly become more intelligent (in practical terms) if advances in 
computing technologies continue to occur.


An AGI with an intelligence the equivalent of a 99.-percentile human 
might be creatable, recognizable and testable by a human (or group of 
humans) of comparable intelligence. That same AGI at some later point in 
time, doing nothing differently except running 31 million times faster, 
will accomplish one genius-year of work every second. I would argue that 
by any sensible definition of intelligence, we would have a 
greater-than-human intelligence that was not created by a being of 
lesser intelligence.
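
In rough numbers (a quick Python sanity check on the arithmetic above; the 
31-million figure is the one quoted in the claim):

SECONDS_PER_YEAR = 365.25 * 24 * 3600      # about 31.6 million seconds
speedup = 31_000_000                       # speedup factor quoted above

subjective_years_per_second = speedup / SECONDS_PER_YEAR
print(round(subjective_years_per_second, 2))   # -> 0.98, roughly one genius-year per real second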







Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser
   Parasites are very successful at surviving but they don't have other 
goals.  Try being parasitic *and* succeeding at goals other than survival. 
I think you'll find that your parasitic ways will rapidly get in the way of 
your other goals the second that you need help (or even non-interference) 
from others.


- Original Message - 
From: Terren Suydam [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 5:03 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))





--- On Thu, 8/28/08, Mark Waser [EMAIL PROTECTED] wrote:

Actually, I *do* define good and ethics not only in
evolutionary terms but
as being driven by evolution.  Unlike most people, I
believe that ethics is
*entirely* driven by what is best evolutionarily while not
believing at all
in red in tooth and claw.  I can give you a
reading list that shows that
the latter view is horribly outdated among people who keep
up with the
research rather than just rehashing tired old ideas.


I think it's a stretch to derive ethical ideas from what you refer to as 
best evolutionarily.  Parasites are pretty freaking successful, from an 
evolutionary point of view, but nobody would say parasitism is ethical.


Terren













Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Terren Suydam

Hi Mark,

Obviously you need to complicate your original statement "I believe that 
ethics is *entirely* driven by what is best evolutionarily..." in such a way 
that we don't derive ethics from parasites. You did that by invoking social 
behavior - parasites are not social beings. 

So from there you need to identify how evolution operates in social groups in 
such a way that you can derive ethics. As Matt alluded to before, would you 
agree that ethics is the result of group selection? In other words, that human 
collectives with certain taboos make the group as a whole more likely to 
persist?

Terren


--- On Thu, 8/28/08, Mark Waser [EMAIL PROTECTED] wrote:

 From: Mark Waser [EMAIL PROTECTED]
 Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
 Re: [agi] The Necessity of Embodiment))
 To: agi@v2.listbox.com
 Date: Thursday, August 28, 2008, 9:21 PM
 Parasites are very successful at surviving but they
 don't have other 
 goals.  Try being parasitic *and* succeeding at goals other
 than survival. 
 I think you'll find that your parasitic ways will
 rapidly get in the way of 
 your other goals the second that you need help (or even
 non-interference) 
 from others.
 
 - Original Message - 
 From: Terren Suydam [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Thursday, August 28, 2008 5:03 PM
 Subject: Re: AGI goals (was Re: Information theoretic
 approaches to AGI (was 
 Re: [agi] The Necessity of Embodiment))
 
 
 
  --- On Thu, 8/28/08, Mark Waser
 [EMAIL PROTECTED] wrote:
  Actually, I *do* define good and ethics not only
 in
  evolutionary terms but
  as being driven by evolution.  Unlike most people,
 I
  believe that ethics is
  *entirely* driven by what is best evolutionarily
 while not
  believing at all
  in red in tooth and claw.  I can give
 you a
  reading list that shows that
  the latter view is horribly outdated among people
 who keep
  up with the
  research rather than just rehashing tired old
 ideas.
 
  I think it's a stretch to derive ethical ideas
 from what you refer to as 
  best evolutionarily.  Parasites are pretty
 freaking successful, from an 
  evolutionary point of view, but nobody would say
 parasitism is ethical.
 
  Terren
 
 
 
 
 
 
 
 
 


  




Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Matt Mahoney
Abram Demski [EMAIL PROTECTED] wrote:
Matt,

What is your opinion on Goedel machines?

 http://www.idsia.ch/~juergen/goedelmachine.html


Thanks for the link. If I understand correctly, this is a form of bounded RSI, 
so it could not lead to a singularity. A Goedel machine is functionally 
equivalent to AIXI^tl in that it finds the optimal reinforcement learning 
solution given a fixed environment and utility function. The difference is that 
AIXI^tl does a brute force search of all machines up to length l for time t 
each, so it runs in O(t 2^l) time. A Goedel machine achieves the same result 
more efficiently through a series of self-improvements by proving that each 
proposed modification (including modifications to its own proof search code) is 
an actual improvement. It does this by using an instruction set such that it is 
impossible to construct incorrect proof verification code.
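
To make the O(t 2^l) figure concrete, here is a minimal Python sketch of that 
brute force enumeration (illustrative only; the evaluate function, which stands 
in for running and scoring one candidate machine for t steps, is a placeholder 
assumption):

from itertools import product

def brute_force_search(evaluate, max_len, steps):
    # Enumerate every bit program of length <= max_len and score each one by
    # running it for `steps` steps; the candidate count grows as 2**max_len,
    # so the total cost is O(steps * 2**max_len).
    best_program, best_score = None, float("-inf")
    for length in range(1, max_len + 1):
        for bits in product([0, 1], repeat=length):
            score = evaluate(bits, steps)
            if score > best_score:
                best_program, best_score = bits, score
    return best_program, best_score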

What I am looking for is unbounded RSI capable of increasing intelligence. A 
Goedel machine doesn't do this because once it finds a solution, it stops. This 
is the same problem as a chess playing program that plays randomly modified 
copies of itself in death matches. At some point, it completely solves the 
chess problem and stops improving.

Ideally we should use a scalable test for intelligence such as Legg and 
Hutter's universal intelligence, which measures expected accumulated reward 
over a Solomonoff distribution of environments (random programs). We can't 
compute this exactly because it requires testing an infinite number of 
environments, but we can approximate it to arbitrary precision by randomly 
sampling environments.
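
A minimal Python sketch of that sampling approximation, with toy stand-ins for 
everything (the length-weighted sampler only mimics a Solomonoff-style prior, 
and the bit-prediction environments and trivial agent are illustrative 
assumptions, not Legg and Hutter's actual construction):

import random

def sample_environment(max_len=8):
    # Shorter environments are exponentially more likely, mimicking a
    # length-based (Solomonoff-style) prior over programs.
    length = 1
    while length < max_len and random.random() < 0.5:
        length += 1
    return [random.randint(0, 1) for _ in range(length)]

def run_episode(agent, env, steps=20):
    # Toy interaction: the environment is a repeating bit pattern and the
    # agent is rewarded for predicting the next bit.
    total = 0.0
    for t in range(steps):
        total += 1.0 if agent(env, t) == env[t % len(env)] else 0.0
    return total / steps

def approximate_universal_intelligence(agent, samples=1000):
    # Monte Carlo estimate: average reward over many sampled environments.
    return sum(run_episode(agent, sample_environment())
               for _ in range(samples)) / samples

# Example: an agent that always predicts 1 scores about 0.5 on average.
print(approximate_universal_intelligence(lambda env, t: 1))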

RSI would require a series of increasingly complex test environments because 
otherwise there is an exact solution such that RSI would stop once found. For 
any environment with Kolmogorov complexity l, an agent can guess all 
environments up to length l. But this means that RSI cannot be implemented by a 
Turing machine because a parent with complexity l cannot test its children 
because it cannot create environments with complexity greater than l.

RSI would be possible with a true source of randomness. A parent could create 
arbitrarily complex environments by flipping a coin. In practice, we usually 
ignore the difference between pseudo-random sources and true random sources. 
But in the context of Turing machines that can execute exponential complexity 
algorithms efficiently, we can't do this because the child could easily guess 
the parent's generator, which has low complexity.

One could argue that the real universe does have true random sources, such as 
quantum mechanics. I am not convinced. The universe does have a definite 
quantum state, but it is not possible to know it because a memory within the 
universe cannot have more information than the universe. Therefore, any theory 
of physics must appear random.
 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, August 25, 2008 3:30:59 PM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)

Matt,

What is your opinion on Goedel machines?

http://www.idsia.ch/~juergen/goedelmachine.html

--Abram

On Sun, Aug 24, 2008 at 5:46 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Eric Burton [EMAIL PROTECTED] wrote:


These have profound impacts on AGI design. First, AIXI is (provably) not 
computable,
which means there is no easy shortcut to AGI. Second, universal intelligence 
is not
computable because it requires testing in an infinite number of 
environments. Since
there is no other well accepted test of intelligence above human level, it 
casts doubt on
the main premise of the singularity: that if humans can create agents with 
greater than
human intelligence, then so can they.

I don't know for sure that these statements logically follow from one
another.

 They don't. I cannot prove that there is no non-evolutionary model of 
 recursive self improvement (RSI). Nor can I prove that there is. But it is a 
 question we need to answer before an evolutionary model becomes technically 
 feasible, because an evolutionary model is definitely unfriendly.

Higher intelligence bootstrapping itself has already been proven on
Earth. Presumably it can happen in a simulation space as well, right?

 If you mean the evolution of humans, that is not an example of RSI. One 
 requirement of friendly AI is that an AI cannot alter its human-designed 
 goals. (Another is that we get the goals right, which is unsolved). However, 
 in an evolutionary environment, the parents do not get to choose the goals of 
 their children. Evolution chooses goals that maximize reproductive fitness, 
 regardless of what you want.

 I have challenged this list as well as the singularity and SL4 lists to come 
 up with an example of a mathematical, software, biological, or physical 
 example of RSI, or at least a plausible argument that one could 

AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Matt Mahoney
An AGI will not design its goals. It is up to humans to define the goals of an 
AGI, so that it will do what we want it to do.

Unfortunately, this is a problem. We may or may not be successful in 
programming the goals of AGI to satisfy human goals. If we are not successful, 
then AGI will be useless at best and dangerous at worst. If we are successful, 
then we are doomed because human goals evolved in a primitive environment to 
maximize reproductive success and not in an environment where advanced 
technology can give us whatever we want. AGI will allow us to connect our 
brains to simulated worlds with magic genies, or worse, allow us to directly 
reprogram our brains to alter our memories, goals, and thought processes. All 
rational goal-seeking agents must have a mental state of maximum utility where 
any thought or perception would be unpleasant because it would result in a 
different state.

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Valentina Poletti [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, August 26, 2008 11:34:56 AM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)


Thanks very much for the info. I found those articles very interesting. 
Actually though this is not quite what I had in mind with the term 
information-theoretic approach. I wasn't very specific, my bad. What I am 
looking for is a theory behind the actual R itself. These approaches 
(correct me if I'm wrong) take an r-function for granted and work from that. 
In real life that is not the case though. What I'm looking for is how the AGI 
will create that function. Because the AGI is created by humans, some sort of 
direction will be given by the humans creating them. What kind of direction, in 
mathematical terms, is my question. In other words I'm looking for a way to 
mathematically define how the AGI will mathematically define its goals.
 
Valentina

 
On 8/23/08, Matt Mahoney [EMAIL PROTECTED] wrote: 
Valentina Poletti [EMAIL PROTECTED] wrote:
 I was wondering why no-one had brought up the information-theoretic aspect of 
 this yet.

It has been studied. For example, Hutter proved that the optimal strategy of a 
rational goal seeking agent in an unknown computable environment is AIXI: to 
guess that the environment is simulated by the shortest program consistent with 
observation so far [1]. Legg and Hutter also propose as a measure of universal 
intelligence the expected reward over a Solomonoff distribution of environments 
[2].

These have profound impacts on AGI design. First, AIXI is (provably) not 
computable, which means there is no easy shortcut to AGI. Second, universal 
intelligence is not computable because it requires testing in an infinite 
number of environments. Since there is no other well accepted test of 
intelligence above human level, it casts doubt on the main premise of the 
singularity: that if humans can create agents with greater than human 
intelligence, then so can they.

Prediction is central to intelligence, as I argue in [3]. Legg proved in [4] 
that there is no elegant theory of prediction. Predicting all environments up 
to a given level of Kolmogorov complexity requires a predictor with at least 
the same level of complexity. Furthermore, above a small level of complexity, 
such predictors cannot be proven because of Godel incompleteness. Prediction 
must therefore be an experimental science.

There is currently no software or mathematical model of non-evolutionary 
recursive self improvement, even for very restricted or simple definitions of 
intelligence. Without a model you don't have friendly AI; you have accelerated 
evolution with AIs competing for resources.

References

1. Hutter, Marcus (2003), A Gentle Introduction to The Universal Algorithmic 
Agent {AIXI},
in Artificial General Intelligence, B. Goertzel and C. Pennachin eds., 
Springer. http://www.idsia.ch/~marcus/ai/aixigentle.htm

2. Legg, Shane, and Marcus Hutter (2006),
A Formal Measure of Machine Intelligence, Proc. Annual machine
learning conference of Belgium and The Netherlands (Benelearn-2006).
Ghent, 2006.  http://www.vetta.org/documents/ui_benelearn.pdf

3. http://cs.fit.edu/~mmahoney/compression/rationale.html

4. Legg, Shane, (2006), Is There an Elegant Universal Theory of Prediction?,
Technical Report IDSIA-12-06, IDSIA / USI-SUPSI,
Dalle Molle Institute for Artificial Intelligence, Galleria 2, 6928 Manno, 
Switzerland.
http://www.vetta.org/documents/IDSIA-12-06-1.pdf

-- Matt Mahoney, [EMAIL PROTECTED]





-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser
 All rational goal-seeking agents must have a mental state of maximum utility 
 where any thought or perception would be unpleasant because it would result 
 in a different state.

I'd love to see you attempt to prove the above statement.

What if there are several states with utility equal to or very close to the 
maximum?  What if the utility of the state decreases the longer that you are in 
it (something that is *very* true of human beings)?  What if uniqueness raises 
the utility of any new state sufficient that there will always be states that 
are better than the current state (since experiencing uniqueness normally 
improves fitness through learning, etc)?
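
To picture those objections, consider a toy utility function in which value 
decays the longer you stay in a state and rarely-visited states carry a novelty 
bonus (the constants are arbitrary, purely for illustration):

import math

def utility(base_value, time_in_state, visits):
    habituation = math.exp(-0.1 * time_in_state)   # value erodes the longer you stay put
    novelty_bonus = 1.0 / (1 + visits)             # states you haven't seen much are worth more
    return base_value * habituation + novelty_bonus

Under any shape like that, there is no absorbing maximum-utility state to get 
stuck in, which is exactly the point of the questions above.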

  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Wednesday, August 27, 2008 10:52 AM
  Subject: AGI goals (was Re: Information theoretic approaches to AGI (was Re: 
[agi] The Necessity of Embodiment))


  An AGI will not design its goals. It is up to humans to define the goals of 
an AGI, so that it will do what we want it to do.

  Unfortunately, this is a problem. We may or may not be successful in 
programming the goals of AGI to satisfy human goals. If we are not successful, 
then AGI will be useless at best and dangerous at worst. If we are successful, 
then we are doomed because human goals evolved in a primitive environment to 
maximize reproductive success and not in an environment where advanced 
technology can give us whatever we want. AGI will allow us to connect our 
brains to simulated worlds with magic genies, or worse, allow us to directly 
reprogram our brains to alter our memories, goals, and thought processes. All 
rational goal-seeking agents must have a mental state of maximum utility where 
any thought or perception would be unpleasant because it would result in a 
different state.


  -- Matt Mahoney, [EMAIL PROTECTED]




  - Original Message 
  From: Valentina Poletti [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Tuesday, August 26, 2008 11:34:56 AM
  Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)


  Thanks very much for the info. I found those articles very interesting. 
Actually though this is not quite what I had in mind with the term 
information-theoretic approach. I wasn't very specific, my bad. What I am 
looking for is a theory behind the actual R itself. These approaches 
(correct me if I'm wrong) take an r-function for granted and work from that. 
In real life that is not the case though. What I'm looking for is how the AGI 
will create that function. Because the AGI is created by humans, some sort of 
direction will be given by the humans creating them. What kind of direction, in 
mathematical terms, is my question. In other words I'm looking for a way to 
mathematically define how the AGI will mathematically define its goals.

  Valentina

   
  On 8/23/08, Matt Mahoney [EMAIL PROTECTED] wrote: 
Valentina Poletti [EMAIL PROTECTED] wrote:
 I was wondering why no-one had brought up the information-theoretic 
aspect of this yet.

It has been studied. For example, Hutter proved that the optimal strategy 
of a rational goal seeking agent in an unknown computable environment is AIXI: 
to guess that the environment is simulated by the shortest program consistent 
with observation so far [1]. Legg and Hutter also propose as a measure of 
universal intelligence the expected reward over a Solomonoff distribution of 
environments [2].

These have profound impacts on AGI design. First, AIXI is (provably) not 
computable, which means there is no easy shortcut to AGI. Second, universal 
intelligence is not computable because it requires testing in an infinite 
number of environments. Since there is no other well accepted test of 
intelligence above human level, it casts doubt on the main premise of the 
singularity: that if humans can create agents with greater than human 
intelligence, then so can they.

Prediction is central to intelligence, as I argue in [3]. Legg proved in 
[4] that there is no elegant theory of prediction. Predicting all environments 
up to a given level of Kolmogorov complexity requires a predictor with at least 
the same level of complexity. Furthermore, above a small level of complexity, 
such predictors cannot be proven because of Godel incompleteness. Prediction 
must therefore be an experimental science.

There is currently no software or mathematical model of non-evolutionary 
recursive self improvement, even for very restricted or simple definitions of 
intelligence. Without a model you don't have friendly AI; you have accelerated 
evolution with AIs competing for resources.

References

1. Hutter, Marcus (2003), A Gentle Introduction to The Universal 
Algorithmic Agent {AIXI},
in Artificial General Intelligence, B. Goertzel and C. Pennachin eds., 
Springer. http://www.idsia.ch/~marcus/ai/aixigentle.htm

2. Legg, Shane, and Marcus Hutter (2006),

Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-27 Thread Matt Mahoney
John, are any of your peer-reviewed papers online? I can't seem to find them...

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: John LaMuth [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, August 26, 2008 2:35:10 AM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)

 
Matt

Below is a sampling of my peer-reviewed conference presentations on my 
background ethical theory ...

This should elevate me above the common crackpot

#

Talks

* Presentation of a paper at ISSS 2000 (International Society for Systems 
Sciences) Conference in Toronto, Canada on various aspects of the new science 
of Powerplay Politics.
* Toward a Science of Consciousness: TUCSON, April 8-12, 2002, Tucson 
Convention Center, Tucson, Arizona, sponsored by the Center for Consciousness 
Studies, University of Arizona (poster presentation).
* John presented a poster at the 8th International Tsukuba Bioethics 
Conference at Tsukuba, Japan, Feb. 15-17, 2003.
* John presented his paper "The Communicational Factors Underlying the Mental 
Disorders" at the 2006 Annual Conf. of the Western Psychological Association 
at Palm Springs, CA.

Honors

* Honors Diploma for Research in Biological Sciences (June 1977) - Univ. of 
Calif. Irvine.
* John is a member of the APA and the American Philosophical Association.

LaMuth, J. E. (1977). The Development of the Forebrain as an Elementary 
Function of the Parameters of Input Specificity and Phylogenetic Age. J. 
U-grad Rsch: Bio. Sci. U. C. Irvine. (6): 274-294.

LaMuth, J. E. (2000). A Holistic Model of Ethical Behavior Based Upon a 
Metaperspectival Hierarchy of the Traditional Groupings of Virtue, Values, 
Ideals. Proceedings of the 44th Annual World Congress for the Int. Society 
for the Systems Sciences - Toronto.

LaMuth, J. E. (2003). Inductive Inference Affective Language Analyzer 
Simulating AI. - US Patent # 6,587,846.

LaMuth, J. E. (2004). Behavioral Foundations for the Behaviourome / Mind 
Mapping Project. Proceedings for the Eighth International Tsukuba Bioethics 
Roundtable, Tsukuba, Japan.

LaMuth, J. E. (2005). A Diagnostic Classification of the Emotions: A 
Three-Digit Coding System for Affective Language. Lucerne Valley: Fairhaven.

LaMuth, J. E. (2007). Inductive Inference Affective Language Analyzer 
Simulating Transitional AI. - US Patent # 7,236,963.

**

Although I currently have no working model, I am collaborating on a working 
prototype.

I was responding to your challenge for ...an example of a mathematical, 
software, biological, or physical example of RSI, or at least a plausible 
argument that one could be created

I feel I have proposed a plausible argument, and considering the great stakes 
involved concerning ethical safeguards for AI, an avenue worthy of critique 
...

More on this in the last half of 
www.angelfire.com/rnb/fairhaven/specs.html

John LaMuth

www.ethicalvalues.com
 
 
 
 
- Original Message - 
From: Matt  Mahoney 
To: agi@v2.listbox.com 
Sent: Monday, August 25, 2008 7:30  AM
Subject: Re: Information theoretic  approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)

John,  I have looked at your patent and various web pages. You list a lot of 
nice  sounding ethical terms (honor, love, hope, peace, etc) but give no 
details on  how to implement them. You have already admitted that you have no 
experimental  results, haven't actually built anything, and have no other 
results such as  refereed conference or journal papers describing your system. 
If I am wrong  about this, please let me know.

 -- Matt Mahoney,  [EMAIL PROTECTED] 



-  Original Message 
From: John LaMuth [EMAIL PROTECTED]
To:  agi@v2.listbox.com
Sent: Sunday, August 24, 2008 11:21:30 PM
Subject:  Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of  Embodiment)

 
 
- Original Message -  
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, August 24, 2008 2:46 
PM
Subject: Re: Information theoretic approaches to  AGI (was Re: [agi] The 
Necessity of Embodiment)

I have challenged this list as well  as the singularity and SL4 lists to come 
up with an example of a mathematical,  software, biological, or physical 
example of RSI, or at least a plausible  argument that one could be created, 
and nobody has. To qualify, an  agent has to modify itself or create a more 
intelligent copy of itself  according to an intelligence test chosen by the 
original. The following are  not examples of RSI:
 
 1. Evolution of life, including  humans.
 2. Emergence of language, culture, writing, communication  

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser
 It is up to humans to define the goals of an AGI, so that it will do what we 
 want it to do.

Why must we define the goals of an AGI?  What would be wrong with setting it 
off with strong incentives to be helpful, even stronger incentives to not be 
harmful, and let it chart its own course based upon the vagaries of the world? 
Let its only hard-coded goal be to keep its satisfaction above a certain 
level with helpful actions increasing satisfaction, harmful actions heavily 
decreasing satisfaction; learning increasing satisfaction, and satisfaction 
naturally decaying over time so as to promote action . . . .

Seems to me that humans are pretty much coded that way (with evolution's 
additional incentives of self-defense and procreation).  The real trick of the 
matter is defining helpful and harmful clearly but everyone is still mired five 
steps before that.
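
A minimal sketch of that incentive scheme, assuming hypothetical constants and 
a placeholder action model (nothing here is a worked-out design):

class Agent:
    def __init__(self):
        self.satisfaction = 1.0
        self.threshold = 0.5

    def update(self, helped=0.0, harmed=0.0, learned=0.0):
        self.satisfaction += 0.3 * helped    # helpful actions increase satisfaction
        self.satisfaction -= 1.0 * harmed    # harmful actions decrease it heavily
        self.satisfaction += 0.1 * learned   # learning increases it
        self.satisfaction *= 0.95            # natural decay promotes continued action

    def needs_to_act(self):
        return self.satisfaction < self.threshold

Left alone, the decay term eventually drives satisfaction below the threshold, 
so the agent has to keep finding helpful or instructive things to do.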



  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Wednesday, August 27, 2008 10:52 AM
  Subject: AGI goals (was Re: Information theoretic approaches to AGI (was Re: 
[agi] The Necessity of Embodiment))


  An AGI will not design its goals. It is up to humans to define the goals of 
an AGI, so that it will do what we want it to do.

  Unfortunately, this is a problem. We may or may not be successful in 
programming the goals of AGI to satisfy human goals. If we are not successful, 
then AGI will be useless at best and dangerous at worst. If we are successful, 
then we are doomed because human goals evolved in a primitive environment to 
maximize reproductive success and not in an environment where advanced 
technology can give us whatever we want. AGI will allow us to connect our 
brains to simulated worlds with magic genies, or worse, allow us to directly 
reprogram our brains to alter our memories, goals, and thought processes. All 
rational goal-seeking agents must have a mental state of maximum utility where 
any thought or perception would be unpleasant because it would result in a 
different state.


  -- Matt Mahoney, [EMAIL PROTECTED]




  - Original Message 
  From: Valentina Poletti [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Tuesday, August 26, 2008 11:34:56 AM
  Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)


  Thanks very much for the info. I found those articles very interesting. 
Actually though this is not quite what I had in mind with the term 
information-theoretic approach. I wasn't very specific, my bad. What I am 
looking for is a a theory behind the actual R itself. These approaches 
(correnct me if I'm wrong) give an r-function for granted and work from that. 
In real life that is not the case though. What I'm looking for is how the AGI 
will create that function. Because the AGI is created by humans, some sort of 
direction will be given by the humans creating them. What kind of direction, in 
mathematical terms, is my question. In other words I'm looking for a way to 
mathematically define how the AGI will mathematically define its goals.

  Valentina

   
  On 8/23/08, Matt Mahoney [EMAIL PROTECTED] wrote: 
Valentina Poletti [EMAIL PROTECTED] wrote:
 I was wondering why no-one had brought up the information-theoretic 
aspect of this yet.

It has been studied. For example, Hutter proved that the optimal strategy 
of a rational goal seeking agent in an unknown computable environment is AIXI: 
to guess that the environment is simulated by the shortest program consistent 
with observation so far [1]. Legg and Hutter also propose as a measure of 
universal intelligence the expected reward over a Solomonoff distribution of 
environments [2].

These have profound impacts on AGI design. First, AIXI is (provably) not 
computable, which means there is no easy shortcut to AGI. Second, universal 
intelligence is not computable because it requires testing in an infinite 
number of environments. Since there is no other well accepted test of 
intelligence above human level, it casts doubt on the main premise of the 
singularity: that if humans can create agents with greater than human 
intelligence, then so can they.

Prediction is central to intelligence, as I argue in [3]. Legg proved in 
[4] that there is no elegant theory of prediction. Predicting all environments 
up to a given level of Kolmogorov complexity requires a predictor with at least 
the same level of complexity. Furthermore, above a small level of complexity, 
such predictors cannot be proven because of Godel incompleteness. Prediction 
must therefore be an experimental science.

There is currently no software or mathematical model of non-evolutionary 
recursive self improvement, even for very restricted or simple definitions of 
intelligence. Without a model you don't have friendly AI; you have accelerated 
evolution with AIs competing for resources.

References

1. Hutter, Marcus (2003), A Gentle Introduction to The Universal 

Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Abram Demski
Matt,

Thanks for the reply. There are 3 reasons that I can think of for
calling Goedel machines bounded:

1. As you assert, once a solution is found, it stops.
2. It will be on a finite computer, so it will eventually reach the
one best version of itself that it can reach.
3. It can only make provably correct steps, which is very limiting
thanks to Godel's incompleteness theorem.

I'll try to argue that each of these limits can be overcome in
principle, and we'll see if the result satisfies your RSI criteria.

First, I do not think it is terribly difficult to define a Goedel
machine that does not halt. It interacts with its environment, and
there is some utility value attached to this interaction, and it
attempts to rewrite its code to maximize this utility.

The second and third need to be tackled together, because the main
reason that a Goedel machine can't improve its own hardware is that
there is uncertainty involved, so it would never be provably better.
There is always some chance of hardware malfunction. So, I think it is
necessary to grant the possibility of modifications that are merely
very probably correct. Once this is done, 2 and 3 fall fairly easily,
assuming that the machine begins life with a good probabilistic
learning system. That is a big assumption, but we can grant it for the
moment I think?

For the sake of concreteness, let's say that the utility value is some
(probably very complex) attempt to logically describe Eliezer-style
Friendliness, and that the probabilistic learning system is an
approximation of AIXI (which the system will improve over time along
with everything else). (These two choices don't reflect my personal
tastes, they are just examples.)

By tweaking the allowances the system makes, we might either have a
slow self-improver that is, say, 99.999% probable to only improve
itself in the next 100 years, or a faster self-improver that is 50%
guaranteed.

Does this satisfy your criteria?
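
For concreteness, the merely-very-probably-correct acceptance rule might look 
like the sketch below, where p_improvement is assumed to come from the system's 
probabilistic learner (here it is just a supplied number):

def accept_modification(p_improvement, confidence=0.99999):
    # Accept a proposed self-rewrite only if the learner's estimated probability
    # that it improves expected utility clears the confidence bar.
    return p_improvement >= confidence

# Lowering confidence toward 0.5 gives the faster but riskier self-improver;
# keeping it near 0.99999 gives the slow, conservative one.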

On Wed, Aug 27, 2008 at 9:14 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Abram Demski [EMAIL PROTECTED] wrote:
Matt,

What is your opinion on Goedel machines?

 http://www.idsia.ch/~juergen/goedelmachine.html


 Thanks for the link. If I understand correctly, this is a form of bounded 
 RSI, so it could not lead to a singularity. A Goedel machine is functionally 
 equivalent to AIXI^tl in that it finds the optimal reinforcement learning 
 solution given a fixed environment and utility function. The difference is 
 that AIXI^tl does a brute force search of all machines up to length l for 
 time t each, so it run in O(t 2^l) time. A Goedel machine achieves the same 
 result more efficiently through a series of self improvments by proving that 
 each proposed modification (including modifications to its own proof search 
 code) is a actual improvement. It does this by using an instruction set such 
 that it is impossible to construct incorrect proof verification code.

 What I am looking for is unbounded RSI capable of increasing intelligence. A 
 Goedel machine doesn't do this because once it finds a solution, it stops. 
 This is the same problem as a chess playing program that plays randomly 
 modified copies of itself in death matches. At some point, it completely 
 solves the chess problem and stops improving.

 Ideally we should use a scalable test for intelligence such as Legg and 
 Hutter's universal intelligence, which measures expected accumulated reward 
 over a Solomonoff distribution of environments (random programs). We can't 
 compute this exactly because it requires testing an infinite number of 
 environments, but we can approximate it to arbitrary precision by randomly 
 sampling environments.

 RSI would require a series of increasingly complex test environments because 
 otherwise there is an exact solution such that RSI would stop once found. For 
 any environment with Kolmogorov complexity l, and agent can guess all 
 environments up to length l. But this means that RSI cannot be implemented by 
 a Turing machine because a parent with complexity l cannot test its children 
 because it cannot create environments with complexity greater than l.

 RSI would be possible with a true source of randomness. A parent could create 
 arbitrarily complex environments by flipping a coin. In practice, we usually 
 ignore the difference between pseudo-random sources and true random sources. 
 But in the context of Turing machines that can execute exponential complexity 
 algorithms efficiently, we can't do this because the child could easily guess 
 the parent's generator, which has low complexity.

 One could argue that the real universe does have true random sources, such as 
 quantum mechanics. I am not convinced. The universe does have a definite 
 quantum state, but it is not possible to know it because a memory within the 
 universe cannot have more information than the universe. Therefore, any 
 theory of physics must appear random.
  -- Matt Mahoney, [EMAIL PROTECTED]



 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Abram Demski
Mark,

I agree that we are mired 5 steps before that; after all, AGI is not
solved yet, and it is awfully hard to design prefab concepts in a
knowledge representation we know nothing about!

But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only minimally with these?

--Abram

On Wed, Aug 27, 2008 at 11:09 AM, Mark Waser [EMAIL PROTECTED] wrote:
 It is up to humans to define the goals of an AGI, so that it will do what
 we want it to do.

 Why must we define the goals of an AGI?  What would be wrong with setting it
 off with strong incentives to be helpful, even stronger incentives to not be
 harmful, and let it chart its own course based upon the vagaries of the
 world?  Let its only hard-coded goal be to keep its satisfaction above a
 certain level with helpful actions increasing satisfaction, harmful actions
 heavily decreasing satisfaction; learning increasing satisfaction, and
 satisfaction naturally decaying over time so as to promote action . . . .

 Seems to me that humans are pretty much coded that way (with evolution's
 additional incentives of self-defense and procreation).  The real trick of
 the matter is defining helpful and harmful clearly but everyone is still
 mired five steps before that.


 - Original Message -
 From: Matt Mahoney
 To: agi@v2.listbox.com
 Sent: Wednesday, August 27, 2008 10:52 AM
 Subject: AGI goals (was Re: Information theoretic approaches to AGI (was Re:
 [agi] The Necessity of Embodiment))
 An AGI will not design its goals. It is up to humans to define the goals of
 an AGI, so that it will do what we want it to do.

 Unfortunately, this is a problem. We may or may not be successful in
 programming the goals of AGI to satisfy human goals. If we are not
 successful, then AGI will be useless at best and dangerous at worst. If we
 are successful, then we are doomed because human goals evolved in a
 primitive environment to maximize reproductive success and not in an
 environment where advanced technology can give us whatever we want. AGI will
 allow us to connect our brains to simulated worlds with magic genies, or
 worse, allow us to directly reprogram our brains to alter our memories,
 goals, and thought processes. All rational goal-seeking agents must have a
 mental state of maximum utility where any thought or perception would be
 unpleasant because it would result in a different state.

 -- Matt Mahoney, [EMAIL PROTECTED]

 - Original Message 
 From: Valentina Poletti [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Tuesday, August 26, 2008 11:34:56 AM
 Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The
 Necessity of Embodiment)

 Thanks very much for the info. I found those articles very interesting.
 Actually though this is not quite what I had in mind with the term
 information-theoretic approach. I wasn't very specific, my bad. What I am
 looking for is a a theory behind the actual R itself. These approaches
 (correnct me if I'm wrong) give an r-function for granted and work from
 that. In real life that is not the case though. What I'm looking for is how
 the AGI will create that function. Because the AGI is created by humans,
 some sort of direction will be given by the humans creating them. What kind
 of direction, in mathematical terms, is my question. In other words I'm
 looking for a way to mathematically define how the AGI will mathematically
 define its goals.

 Valentina


 On 8/23/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Valentina Poletti [EMAIL PROTECTED] wrote:
  I was wondering why no-one had brought up the information-theoretic
  aspect of this yet.

 It has been studied. For example, Hutter proved that the optimal strategy
 of a rational goal seeking agent in an unknown computable environment is
 AIXI: to guess that the environment is simulated by the shortest program
 consistent with observation so far [1]. Legg and Hutter also propose as a
 measure of universal intelligence the expected reward over a Solomonoff
 distribution of environments [2].

 These have profound impacts on AGI design. First, AIXI is (provably) not
 computable, which means there is no easy shortcut to AGI. Second, universal
 intelligence is not computable because it requires testing in an infinite
 number of environments. Since there is no other well accepted test of
 intelligence above human level, it casts doubt on the main premise of the
 singularity: that if humans can create agents with greater than human
 intelligence, then so can they.

 Prediction is central to intelligence, as I argue in [3]. Legg proved in
 [4] that there is no elegant theory of prediction. Predicting all
 environments up to a given level of Kolmogorov complexity requires a
 predictor with at least the same level of complexity. Furthermore, above a
 small level of complexity, such 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser

But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only minimally with these?


Actually, my description gave the AGI four goals: be helpful, don't be 
harmful, learn, and keep moving.


Learn, all by itself, is going to generate an infinite number of subgoals. 
Learning subgoals will be picked based upon what is most likely to learn the 
most while not being harmful.


(and, by the way, be helpful and learn should both generate a 
self-protection sub-goal  in short order with procreation following 
immediately behind)


Arguably, be helpful would generate all three of the other goals but 
learning and not being harmful without being helpful is a *much* better 
goal-set for a novice AI to prevent accidents when the AI thinks it is 
being helpful.  In fact, I've been tempted at times to entirely drop the be 
helpful since the other two will eventually generate it with a lessened 
probability of trying-to-be-helpful accidents.


Don't be harmful by itself will just turn the AI off.

The trick is that there needs to be a balance between goals.  Any single 
goal intelligence is likely to be lethal even if that goal is to help 
humanity.


Learn, do no harm, help.  Can anyone come up with a better set of goals? 
(and, once again, note that learn does *not* override the other two -- there 
is meant to be a balance between the three).
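
As a toy illustration of that balance (the weights and the candidate actions 
are arbitrary assumptions), actions can be scored against all three goals at 
once, with harm penalized most heavily, rather than letting any single goal 
dominate:

def score(action):
    return (2.0 * action["helpfulness"]
            + 1.0 * action["learning"]
            - 5.0 * action["harm"])          # harm is penalized most heavily

actions = [
    {"name": "do nothing",       "helpfulness": 0.0, "learning": 0.0, "harm": 0.0},
    {"name": "risky experiment", "helpfulness": 0.2, "learning": 0.9, "harm": 0.4},
    {"name": "safe experiment",  "helpfulness": 0.2, "learning": 0.5, "harm": 0.0},
]
print(max(actions, key=score)["name"])       # -> "safe experiment"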


- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 11:52 AM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Mark,

I agree that we are mired 5 steps before that; after all, AGI is not
solved yet, and it is awfully hard to design prefab concepts in a
knowledge representation we know nothing about!

But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only minimally with these?

--Abram

On Wed, Aug 27, 2008 at 11:09 AM, Mark Waser [EMAIL PROTECTED] wrote:
It is up to humans to define the goals of an AGI, so that it will do 
what

we want it to do.


Why must we define the goals of an AGI?  What would be wrong with setting 
it
off with strong incentives to be helpful, even stronger incentives to not 
be

harmful, and let it chart it's own course based upon the vagaries of the
world?  Let it's only hard-coded goal be to keep it's satisfaction above 
a
certain level with helpful actions increasing satisfaction, harmful 
actions

heavily decreasing satisfaction; learning increasing satisfaction, and
satisfaction naturally decaying over time so as to promote action . . . .

Seems to me that humans are pretty much coded that way (with evolution's
additional incentives of self-defense and procreation).  The real trick 
of

the matter is defining helpful and harmful clearly but everyone is still
mired five steps before that.


- Original Message -
From: Matt Mahoney
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 10:52 AM
Subject: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re:

[agi] The Necessity of Embodiment))
An AGI will not design its goals. It is up to humans to define the goals 
of

an AGI, so that it will do what we want it to do.

Unfortunately, this is a problem. We may or may not be successful in
programming the goals of AGI to satisfy human goals. If we are not
successful, then AGI will be useless at best and dangerous at worst. If 
we

are successful, then we are doomed because human goals evolved in a
primitive environment to maximize reproductive success and not in an
environment where advanced technology can give us whatever we want. AGI 
will

allow us to connect our brains to simulated worlds with magic genies, or
worse, allow us to directly reprogram our brains to alter our memories,
goals, and thought processes. All rational goal-seeking agents must have 
a

mental state of maximum utility where any thought or perception would be
unpleasant because it would result in a different state.

-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Valentina Poletti [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, August 26, 2008 11:34:56 AM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The
Necessity of Embodiment)

Thanks very much for the info. I found those articles very interesting.
Actually though this is not quite what I had in mind with the term
information-theoretic approach. I wasn't very specific, my bad. What I am
looking for is a a theory behind the actual R itself. These approaches
(correnct me if I'm wrong) give an r-function for granted and work from
that. In real life that is not the case 

Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Eric Burton
I think if an artificial intelligence of length n was able to fully
grok itself and had a space of at least n in which to try out
modifications, it would be pretty simple for that intelligence to
figure out when the intelligences it's engineering in the allocated
space exhibit shiny new featuresets in places where it falls down. In
terms of suitability-to-task as an ethical replacement for human
decision-making, or somesuch. Surely!

Eric B

On 8/27/08, Abram Demski [EMAIL PROTECTED] wrote:
 Matt,

 Thanks for the reply. There are 3 reasons that I can think of for
 calling Goedel machines bounded:

 1. As you assert, once a solution is found, it stops.
 2. It will be on a finite computer, so it will eventually reach the
 one best version of itself that it can reach.
 3. It can only make provably correct steps, which is very limiting
 thanks to Godel's incompleteness theorem.

 I'll try to argue that each of these limits can be overcome in
 principle, and we'll see if the result satisfies your RSI criteria.

 First, I do not think it is terribly difficult to define a Goedel
 machine that does not halt. It interacts with its environment, and
 there is some utility value attached to this interaction, and it
 attempts to rewrite its code to maximize this utility.

 The second and third need to be tackled together, because the main
 reason that a Goedel machine can't improve its own hardware is because
 there is uncertainty involved, so it would never be provably better.
 There is always some chance of hardware malfunction. So, I think it is
 necessary to grant the possibility of modifications that are merely
 very probably correct. Once this is done, 2 and 3 fall fairly easily,
 assuming that the machine begins life with a good probabilistic
 learning system. That is a big assumption, but we can grant it for the
 moment I think?

 For the sake of concreteness, let's say that the utility value is some
 (probably very complex) attempt to logically describe Eliezer-style
 Friendliness, and that the probabilistic learning system is an
 approximation of AIXI (which the system will improve over time along
 with everything else). (These two choices don't reflect my personal
 tastes, they are just examples.)

 By tweaking the allowances the system makes, we might either have a
 slow self-improver that is, say, 99.999% probable to only improve
 itself in the next 100 years, or a faster self-improver that is 50%
 guaranteed.

 Does this satisfy your criteria?

 On Wed, Aug 27, 2008 at 9:14 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Abram Demski [EMAIL PROTECTED] wrote:
Matt,

What is your opinion on Goedel machines?

 http://www.idsia.ch/~juergen/goedelmachine.html


 Thanks for the link. If I understand correctly, this is a form of bounded
 RSI, so it could not lead to a singularity. A Goedel machine is
 functionally equivalent to AIXI^tl in that it finds the optimal
 reinforcement learning solution given a fixed environment and utility
 function. The difference is that AIXI^tl does a brute force search of all
 machines up to length l for time t each, so it run in O(t 2^l) time. A
 Goedel machine achieves the same result more efficiently through a series
 of self improvments by proving that each proposed modification (including
 modifications to its own proof search code) is a actual improvement. It
 does this by using an instruction set such that it is impossible to
 construct incorrect proof verification code.

 What I am looking for is unbounded RSI capable of increasing intelligence.
 A Goedel machine doesn't do this because once it finds a solution, it
 stops. This is the same problem as a chess playing program that plays
 randomly modified copies of itself in death matches. At some point, it
 completely solves the chess problem and stops improving.

 Ideally we should use a scalable test for intelligence such as Legg and
 Hutter's universal intelligence, which measures expected accumulated
 reward over a Solomonoff distribution of environments (random programs).
 We can't compute this exactly because it requires testing an infinite
 number of environments, but we can approximate it to arbitrary precision
 by randomly sampling environments.

 RSI would require a series of increasingly complex test environments
 because otherwise there is an exact solution such that RSI would stop once
 found. For any environment with Kolmogorov complexity l, and agent can
 guess all environments up to length l. But this means that RSI cannot be
 implemented by a Turing machine because a parent with complexity l cannot
 test its children because it cannot create environments with complexity
 greater than l.

 RSI would be possible with a true source of randomness. A parent could
 create arbitrarily complex environments by flipping a coin. In practice,
 we usually ignore the difference between pseudo-random sources and true
 random sources. But in the context of Turing machines that can execute
 exponential complexity algorithms 

Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Eric Burton
What about raising thousands of generations of these things, whole
civilizations comprised of individual instances, then frozen at a
point of enlightenment to cherry-pick the population? You can have it
educated and bred and raised and everything by a real lineage in a VR
world with Earth-accurate physics so that we can use all their
technology immediately. This kind of short-circuits the grounding
problem with, for instance, automated research, and is, I think, a really
compelling vision.

On 8/27/08, Eric Burton [EMAIL PROTECTED] wrote:
 I think if an artificial intelligence of length n was able to fully
 grok itself and had a space of at least n in which to try out
 modifications, it would be pretty simple for that intelligence to
 figure out when the intelligences it's engineering in the allocated
 space exhibit shiny new featuresets in places where it falls down. In
 terms of suitability-to-task as an ethical replacement for human
 decision-making, or somesuch. Surely!

 Eric B

 On 8/27/08, Abram Demski [EMAIL PROTECTED] wrote:
 Matt,

 Thanks for the reply. There are 3 reasons that I can think of for
 calling Goedel machines bounded:

 1. As you assert, once a solution is found, it stops.
 2. It will be on a finite computer, so it will eventually reach the
 one best version of itself that it can reach.
 3. It can only make provably correct steps, which is very limiting
 thanks to Godel's incompleteness theorem.

 I'll try to argue that each of these limits can be overcome in
 principle, and we'll see if the result satisfies your RSI criteria.

 First, I do not think it is terribly difficult to define a Goedel
 machine that does not halt. It interacts with its environment, and
 there is some utility value attached to this interaction, and it
 attempts to rewrite its code to maximize this utility.

 The second and third need to be tackled together, because the main
 reason that a Goedel machine can't improve its own hardware is because
 there is uncertainty involved, so it would never be provably better.
 There is always some chance of hardware malfunction. So, I think it is
 necessary to grant the possibility of modifications that are merely
 very probably correct. Once this is done, 2 and 3 fall fairly easily,
 assuming that the machine begins life with a good probabilistic
 learning system. That is a big assumption, but we can grant it for the
 moment I think?

 For the sake of concreteness, let's say that the utility value is some
 (probably very complex) attempt to logically describe Eliezer-style
 Friendliness, and that the probabilistic learning system is an
 approximation of AIXI (which the system will improve over time along
 with everything else). (These two choices don't reflect my personal
 tastes, they are just examples.)

 By tweaking the allowances the system makes, we might either have a
 slow self-improver that is, say, 99.999% probable to only improve
 itself in the next 100 years, or a faster self-improver that is 50%
 guaranteed.

 Does this satisfy your criteria?

 On Wed, Aug 27, 2008 at 9:14 AM, Matt Mahoney [EMAIL PROTECTED]
 wrote:
 Abram Demski [EMAIL PROTECTED] wrote:
Matt,

What is your opinion on Goedel machines?

 http://www.idsia.ch/~juergen/goedelmachine.html


 Thanks for the link. If I understand correctly, this is a form of bounded
 RSI, so it could not lead to a singularity. A Goedel machine is
 functionally equivalent to AIXI^tl in that it finds the optimal
 reinforcement learning solution given a fixed environment and utility
 function. The difference is that AIXI^tl does a brute force search of all
 machines up to length l for time t each, so it runs in O(t 2^l) time. A
 Goedel machine achieves the same result more efficiently through a series
 of self improvements by proving that each proposed modification (including
 modifications to its own proof search code) is an actual improvement. It
 does this by using an instruction set such that it is impossible to
 construct incorrect proof verification code.
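
 As a toy illustration of the brute-force side of that comparison, here is a
 Python sketch that enumerates every bit-string "program" up to length l and
 runs each for at most t steps, which is where the O(t 2^l) cost comes from.
 The interpreter and the scoring are invented toys, not AIXI^tl itself.

    from itertools import product

    def run_program(bits, budget):
        # Hypothetical toy interpreter: treat the bit string itself as the
        # policy and "run" it for at most `budget` steps.
        score = 0
        for step, b in zip(range(budget), bits * budget):
            score += 1 if b == 1 else -1   # toy environment reward
        return score

    def brute_force_search(l, t):
        # Try all 2^1 + 2^2 + ... + 2^l programs for t steps each: O(t * 2^l).
        best_bits, best_score = None, float("-inf")
        for length in range(1, l + 1):
            for bits in product([0, 1], repeat=length):
                score = run_program(bits, t)
                if score > best_score:
                    best_bits, best_score = bits, score
        return best_bits, best_score

    print(brute_force_search(l=8, t=16))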

 What I am looking for is unbounded RSI capable of increasing
 intelligence.
 A Goedel machine doesn't do this because once it finds a solution, it
 stops. This is the same problem as a chess playing program that plays
 randomly modified copies of itself in death matches. At some point, it
 completely solves the chess problem and stops improving.

 Ideally we should use a scalable test for intelligence such as Legg and
 Hutter's universal intelligence, which measures expected accumulated
 reward over a Solomonoff distribution of environments (random programs).
 We can't compute this exactly because it requires testing an infinite
 number of environments, but we can approximate it to arbitrary precision
 by randomly sampling environments.
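
 A sketch of that sampling approximation in Python. The random bit-string
 "environments" and the crude shorter-is-likelier prior below are stand-ins
 for a real Solomonoff distribution; only the Monte Carlo shape of the
 estimate is the point.

    import random

    def sample_environment(max_len=12):
        # Hypothetical stand-in for "random program": a random bit string,
        # with shorter strings more probable (crude 2^-length style prior).
        length = min(max_len, 1 + int(random.expovariate(0.7)))
        return [random.randint(0, 1) for _ in range(length)]

    def run_agent(agent, env):
        # Reward the agent for predicting each next bit of the environment.
        reward, history = 0, []
        for bit in env:
            reward += 1 if agent(history) == bit else 0
            history.append(bit)
        return reward / len(env)

    def approximate_universal_intelligence(agent, samples=5000):
        # Monte Carlo estimate: average reward over sampled environments.
        return sum(run_agent(agent, sample_environment())
                   for _ in range(samples)) / samples

    majority_agent = lambda history: int(sum(history) * 2 > len(history))
    print(approximate_universal_intelligence(majority_agent))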

 RSI would require a series of increasingly complex test environments
 because otherwise there is an exact solution such that RSI would stop
 once
 found. For any environment with Kolmogorov complexity l, an agent can
 guess all 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Vladimir Nesov
On Wed, Aug 27, 2008 at 8:32 PM, Mark Waser [EMAIL PROTECTED] wrote:
 But, how does your description not correspond to giving the AGI the
 goals of being helpful and not harmful? In other words, what more does
 it do than simply try for these? Does it pick goals randomly such that
 they conflict only minimally with these?

 Actually, my description gave the AGI four goals: be helpful, don't be
 harmful, learn, and keep moving.

 Learn, all by itself, is going to generate an infinite number of subgoals.
 Learning subgoals will be picked based upon what is most likely to learn the
 most while not being harmful.

 (and, by the way, be helpful and learn should both generate a
 self-protection sub-goal  in short order with procreation following
 immediately behind)

 Arguably, be helpful would generate all three of the other goals but
 learning and not being harmful without being helpful is a *much* better
 goal-set for a novice AI to prevent accidents when the AI thinks it is
 being helpful.  In fact, I've been tempted at times to entirely drop the be
 helpful since the other two will eventually generate it with a lessened
 probability of trying-to-be-helpful accidents.

 Don't be harmful by itself will just turn the AI off.

 The trick is that there needs to be a balance between goals.  Any single
 goal intelligence is likely to be lethal even if that goal is to help
 humanity.

 Learn, do no harm, help.  Can anyone come up with a better set of goals?
 (and, once again, note that learn does *not* override the other two -- there
 is meant to be a balance between the three).
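
 For concreteness, one hypothetical way to encode that balance in Python. The
 scores, weights, and the harm veto below are invented placeholders, not
 anything specified in the thread; the point is only that the agent maximizes
 a combined score in which no single goal (including learning) can swamp the
 other two.

    def balanced_utility(action, weights=(1.0, 2.0, 1.0)):
        # Hypothetical scores in [0, 1] for learn / do-no-harm / help.
        w_learn, w_no_harm, w_help = weights
        score = (w_learn * action["learning"]
                 + w_no_harm * (1.0 - action["harm"])
                 + w_help * action["helpfulness"])
        # Veto clause: an otherwise attractive action with high expected
        # harm is rejected outright rather than traded off.
        if action["harm"] > 0.3:
            return float("-inf")
        return score

    candidates = [
        {"name": "run risky experiment", "learning": 0.9, "harm": 0.6, "helpfulness": 0.2},
        {"name": "answer a question",    "learning": 0.3, "harm": 0.0, "helpfulness": 0.8},
        {"name": "do nothing",           "learning": 0.0, "harm": 0.0, "helpfulness": 0.0},
    ]
    best = max(candidates, key=balanced_utility)
    print(best["name"])   # picks the helpful, harmless option

 The heavier weight on do-no-harm and the veto keep a high-learning but
 harmful action from ever winning, which is the balance being asked for.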


And AGI will just read the command, help, 'h'-'e'-'l'-'p', and will
know exactly what to do, and will be convinced to do it. To implement
this simple goal, you need to somehow communicate its functional
structure in the AGI, this won't just magically happen. Don't talk
about AGI as if it was a human, think about how exactly to implement
what you want. Today's rant on Overcoming Bias applies fully to such
suggestions ( http://www.overcomingbias.com/2008/08/dreams-of-ai-de.html
).


-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Abram Demski
Mark,

OK, I take up the challenge. Here is a different set of goal-axioms:

-Good is a property of some entities.
-Maximize good in the world.
-A more-good entity is usually more likely to cause goodness than a
less-good entity.
-A more-good entity is often more likely to cause pleasure than a
less-good entity.
-Self is the entity that causes my actions.
-An entity with properties similar to self is more likely to be good.

Pleasure, unlike goodness, is directly observable. It comes from many
sources. For example:
-Learning is pleasurable.
-A full battery is pleasurable (if relevant).
-Perhaps the color of human skin is pleasurable in and of itself.
(More specifically, all skin colors of any existing race.)
-Perhaps also the sound of a human voice is pleasurable.
-Other things may be pleasurable depending on what we initially want
the AI to enjoy doing.

So, the definition of good is highly probabilistic, and the system's
inferences about goodness will depend on its experiences; but pleasure
can be directly observed, and the pleasure-mechanisms remain fixed.
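
One hypothetical way to cash out that split in code: pleasure is the observed
signal, goodness is a latent estimate that a pleasure observation can only
nudge by a bounded amount. All numbers and the update rule below are invented
for illustration, not part of the axioms above.

    def update_goodness_estimate(prior_good, observed_pleasure,
                                 correlation=0.2, learning_rate=0.1):
        # Goodness is never observed directly; pleasure is weak evidence for
        # it. `correlation` caps how far one pleasure observation can move
        # the goodness estimate (hypothetical values).
        evidence = correlation * (observed_pleasure - 0.5)
        posterior = prior_good + learning_rate * evidence
        return min(1.0, max(0.0, posterior))

    # Example: repeated pleasant interactions with an entity slowly raise its
    # estimated goodness, but can never pin the estimate to 1.0 outright.
    estimate = 0.5
    for pleasure in [0.9, 0.8, 1.0, 0.7]:
        estimate = update_goodness_estimate(estimate, pleasure)
    print(round(estimate, 3))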

On Wed, Aug 27, 2008 at 12:32 PM, Mark Waser [EMAIL PROTECTED] wrote:
 But, how does your description not correspond to giving the AGI the
 goals of being helpful and not harmful? In other words, what more does
 it do than simply try for these? Does it pick goals randomly such that
 they conflict only minimally with these?

 Actually, my description gave the AGI four goals: be helpful, don't be
 harmful, learn, and keep moving.

 Learn, all by itself, is going to generate an infinite number of subgoals.
 Learning subgoals will be picked based upon what is most likely to learn the
 most while not being harmful.

 (and, by the way, be helpful and learn should both generate a
 self-protection sub-goal  in short order with procreation following
 immediately behind)

 Arguably, be helpful would generate all three of the other goals but
 learning and not being harmful without being helpful is a *much* better
 goal-set for a novice AI to prevent accidents when the AI thinks it is
 being helpful.  In fact, I've been tempted at times to entirely drop the be
 helpful since the other two will eventually generate it with a lessened
 probability of trying-to-be-helpful accidents.

 Don't be harmful by itself will just turn the AI off.

 The trick is that there needs to be a balance between goals.  Any single
 goal intelligence is likely to be lethal even if that goal is to help
 humanity.

 Learn, do no harm, help.  Can anyone come up with a better set of goals?
 (and, once again, note that learn does *not* override the other two -- there
 is meant to be a balance between the three).

 - Original Message - From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, August 27, 2008 11:52 AM
 Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
 AGI (was Re: [agi] The Necessity of Embodiment))


 Mark,

 I agree that we are mired 5 steps before that; after all, AGI is not
 solved yet, and it is awfully hard to design prefab concepts in a
 knowledge representation we know nothing about!

 But, how does your description not correspond to giving the AGI the
 goals of being helpful and not harmful? In other words, what more does
 it do than simply try for these? Does it pick goals randomly such that
 they conflict only minimally with these?

 --Abram

 On Wed, Aug 27, 2008 at 11:09 AM, Mark Waser [EMAIL PROTECTED] wrote:

It is up to humans to define the goals of an AGI, so that it will do
what we want it to do.

Why must we define the goals of an AGI?  What would be wrong with setting
it off with strong incentives to be helpful, even stronger incentives to
not be harmful, and let it chart its own course based upon the vagaries of
the world?  Let its only hard-coded goal be to keep its satisfaction above
a certain level, with helpful actions increasing satisfaction, harmful
actions heavily decreasing satisfaction, learning increasing satisfaction,
and satisfaction naturally decaying over time so as to promote action . . . .

Seems to me that humans are pretty much coded that way (with evolution's
additional incentives of self-defense and procreation).  The real trick of
the matter is defining helpful and harmful clearly, but everyone is still
mired five steps before that.
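
Read literally, that satisfaction mechanism is almost a control loop already.
A minimal Python sketch, with all constants and the action menu invented for
illustration (none of these numbers are specified above):

    def step_satisfaction(satisfaction, action, decay=0.05):
        # Hypothetical constants: helpful and learning actions raise
        # satisfaction, harmful ones cut it sharply, and it decays every
        # tick so that doing nothing is never a stable strategy.
        satisfaction -= decay
        satisfaction += 0.2 * action["helpful"]
        satisfaction += 0.1 * action["learning"]
        satisfaction -= 0.8 * action["harmful"]
        return max(0.0, min(1.0, satisfaction))

    def choose_action(actions, satisfaction):
        # Pick whatever keeps projected satisfaction highest.
        return max(actions, key=lambda a: step_satisfaction(satisfaction, a))

    actions = [
        {"name": "help user", "helpful": 1.0, "learning": 0.2, "harmful": 0.0},
        {"name": "explore",   "helpful": 0.0, "learning": 1.0, "harmful": 0.0},
        {"name": "shortcut",  "helpful": 0.6, "learning": 0.0, "harmful": 0.5},
    ]
    satisfaction = 0.6
    for _ in range(10):
        act = choose_action(actions, satisfaction)
        satisfaction = step_satisfaction(satisfaction, act)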


 - Original Message -
 From: Matt Mahoney
 To: agi@v2.listbox.com
 Sent: Wednesday, August 27, 2008 10:52 AM
 Subject: AGI goals (was Re: Information theoretic approaches to AGI (was
 Re:
 [agi] The Necessity of Embodiment))
 An AGI will not design its goals. It is up to humans to define the goals
 of
 an AGI, so that it will do what we want it to do.

 Unfortunately, this is a problem. We may or may not be successful in
 programming the goals of AGI to satisfy human goals. If we are not
 successful, then AGI will be useless at best and dangerous at worst. If
 we
 are successful, then we are doomed because human goals 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser

Hi,

   A number of problems unfortunately . . . .


-Learning is pleasurable.


. . . . for humans.  We can choose whether to make it so for machines or 
not.  Doing so would be equivalent to setting a goal of learning.



-Other things may be pleasurable depending on what we initially want
the AI to enjoy doing.


   See . . . all you've done here is pushed goal-setting to 
pleasure-setting . . . .


= = = = =

   Further, if you judge goodness by pleasure, you'll probably create an 
AGI whose shortest path-to-goal is to wirehead the universe (which I 
consider to be a seriously suboptimal situation - YMMV).





- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 2:25 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Mark,

OK, I take up the challenge. Here is a different set of goal-axioms:

-Good is a property of some entities.
-Maximize good in the world.
-A more-good entity is usually more likely to cause goodness than a
less-good entity.
-A more-good entity is often more likely to cause pleasure than a
less-good entity.
-Self is the entity that causes my actions.
-An entity with properties similar to self is more likely to be good.

Pleasure, unlike goodness, is directly observable. It comes from many
sources. For example:
-Learning is pleasurable.
-A full battery is pleasurable (if relevant).
-Perhaps the color of human skin is pleasurable in and of itself.
(More specifically, all skin colors of any existing race.)
-Perhaps also the sound of a human voice is pleasurable.
-Other things may be pleasurable depending on what we initially want
the AI to enjoy doing.

So, the definition of good is highly probabilistic, and the system's
inferences about goodness will depend on its experiences; but pleasure
can be directly observed, and the pleasure-mechanisms remain fixed.

On Wed, Aug 27, 2008 at 12:32 PM, Mark Waser [EMAIL PROTECTED] wrote:

But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only minimally with these?


Actually, my description gave the AGI four goals: be helpful, don't be
harmful, learn, and keep moving.

Learn, all by itself, is going to generate an infinite number of subgoals.
Learning subgoals will be picked based upon what is most likely to learn the
most while not being harmful.

(and, by the way, be helpful and learn should both generate a
self-protection sub-goal  in short order with procreation following
immediately behind)

Arguably, be helpful would generate all three of the other goals but
learning and not being harmful without being helpful is a *much* better
goal-set for a novice AI to prevent accidents when the AI thinks it is
being helpful.  In fact, I've been tempted at times to entirely drop the be
helpful since the other two will eventually generate it with a lessened
probability of trying-to-be-helpful accidents.

Don't be harmful by itself will just turn the AI off.

The trick is that there needs to be a balance between goals.  Any single
goal intelligence is likely to be lethal even if that goal is to help
humanity.

Learn, do no harm, help.  Can anyone come up with a better set of goals?
(and, once again, note that learn does *not* override the other two -- there
is meant to be a balance between the three).

- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 11:52 AM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches 
to

AGI (was Re: [agi] The Necessity of Embodiment))



Mark,

I agree that we are mired 5 steps before that; after all, AGI is not
solved yet, and it is awfully hard to design prefab concepts in a
knowledge representation we know nothing about!

But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only minimally with these?

--Abram

On Wed, Aug 27, 2008 at 11:09 AM, Mark Waser [EMAIL PROTECTED] 
wrote:


It is up to humans to define the goals of an AGI, so that it will do
what we want it to do.


Why must we define the goals of an AGI?  What would be wrong with setting
it off with strong incentives to be helpful, even stronger incentives to
not be harmful, and let it chart its own course based upon the vagaries of
the world?  Let its only hard-coded goal be to keep its satisfaction above
a certain level, with helpful actions increasing satisfaction, harmful
actions heavily decreasing satisfaction, learning increasing satisfaction,
and satisfaction naturally decaying over time so as to promote action . . . .


Seems to me that humans are pretty 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Abram Demski
Mark,

The main motivation behind my setup was to avoid the wirehead
scenario. That is why I make the explicit goodness/pleasure
distinction. Whatever good is, it cannot be something directly
observable, or the AI will just wirehead itself (assuming it gets
intelligent enough to do so, of course). But, goodness cannot be
completely unobservable, or the AI will have no idea what it should
do.

So, the AI needs to have a concept of external goodness, with a weak
probabilistic correlation to its directly observable pleasure. That
way, the system will go after pleasant things, but won't be able to
fool itself with things that are maximally pleasant. For example, if
it were to consider rewiring its visual circuits to see only
skin-color, it would not like the idea, because it would know that
such a move would make it less able to maximize goodness in general.
(It would know that seeing only tan does not mean that the entire
world is made of pure goodness.) An AI that was trying to maximize
pleasure would see nothing wrong with self-stimulation of this sort.

So, I think that pushing the problem of goal-setting back to
pleasure-setting is very useful for avoiding certain types of
undesirable behavior.
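
A toy Python illustration of that argument, with invented numbers: a
pleasure-maximizer takes the "rewire own sensors" action, while an agent
scoring actions by its model of external goodness rejects it, because the
model says the world is not actually made better.

    actions = {
        # Hypothetical estimates an agent's world-model might produce.
        "help a person":      {"expected_pleasure": 0.6, "expected_goodness": 0.7},
        "learn something":    {"expected_pleasure": 0.5, "expected_goodness": 0.5},
        "rewire own sensors": {"expected_pleasure": 1.0, "expected_goodness": 0.0},
    }

    def pleasure_maximizer(actions):
        return max(actions, key=lambda a: actions[a]["expected_pleasure"])

    def goodness_maximizer(actions):
        # Pleasure may inform the goodness model, but the choice is made on
        # expected external goodness, so self-stimulation scores poorly.
        return max(actions, key=lambda a: actions[a]["expected_goodness"])

    print(pleasure_maximizer(actions))   # "rewire own sensors" -- wireheading
    print(goodness_maximizer(actions))   # "help a person"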

By the way, where does this term wireheading come from? I assume
from context that it simply means self-stimulation.

-Abram Demski

On Wed, Aug 27, 2008 at 2:58 PM, Mark Waser [EMAIL PROTECTED] wrote:
 Hi,

   A number of problems unfortunately . . . .

 -Learning is pleasurable.

 . . . . for humans.  We can choose whether to make it so for machines or
 not.  Doing so would be equivalent to setting a goal of learning.

 -Other things may be pleasurable depending on what we initially want
 the AI to enjoy doing.

   See . . . all you've done here is pushed goal-setting to pleasure-setting
 . . . .

 = = = = =

   Further, if you judge goodness by pleasure, you'll probably create an AGI
 whose shortest path-to-goal is to wirehead the universe (which I consider to
 be a seriously suboptimal situation - YMMV).




 - Original Message - From: Abram Demski [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, August 27, 2008 2:25 PM
 Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
 AGI (was Re: [agi] The Necessity of Embodiment))


 Mark,

 OK, I take up the challenge. Here is a different set of goal-axioms:

 -Good is a property of some entities.
 -Maximize good in the world.
 -A more-good entity is usually more likely to cause goodness than a
 less-good entity.
 -A more-good entity is often more likely to cause pleasure than a
 less-good entity.
 -Self is the entity that causes my actions.
 -An entity with properties similar to self is more likely to be good.

 Pleasure, unlike goodness, is directly observable. It comes from many
 sources. For example:
 -Learning is pleasurable.
 -A full battery is pleasurable (if relevant).
 -Perhaps the color of human skin is pleasurable in and of itself.
 (More specifically, all skin colors of any existing race.)
 -Perhaps also the sound of a human voice is pleasurable.
 -Other things may be pleasurable depending on what we initially want
 the AI to enjoy doing.

 So, the definition of good is highly probabilistic, and the system's
 inferences about goodness will depend on its experiences; but pleasure
 can be directly observed, and the pleasure-mechanisms remain fixed.

 On Wed, Aug 27, 2008 at 12:32 PM, Mark Waser [EMAIL PROTECTED] wrote:

 But, how does your description not correspond to giving the AGI the
 goals of being helpful and not harmful? In other words, what more does
 it do than simply try for these? Does it pick goals randomly such that
 they conflict only minimally with these?

 Actually, my description gave the AGI four goals: be helpful, don't be
 harmful, learn, and keep moving.

 Learn, all by itself, is going to generate an infinite number of
 subgoals.
 Learning subgoals will be picked based upon what is most likely to learn
 the
 most while not being harmful.

 (and, by the way, be helpful and learn should both generate a
 self-protection sub-goal  in short order with procreation following
 immediately behind)

 Arguably, be helpful would generate all three of the other goals but
 learning and not being harmful without being helpful is a *much* better
 goal-set for a novice AI to prevent accidents when the AI thinks it is
 being helpful.  In fact, I've been tempted at times to entirely drop the
 be
 helpful since the other two will eventually generate it with a lessened
 probability of trying-to-be-helpful accidents.

 Don't be harmful by itself will just turn the AI off.

 The trick is that there needs to be a balance between goals.  Any single
 goal intelligence is likely to be lethal even if that goal is to help
 humanity.

 Learn, do no harm, help.  Can anyone come up with a better set of goals?
 (and, once again, note that learn does *not* override the other two --
  there
 is meant to be a balance between the 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread BillK
On Wed, Aug 27, 2008 at 8:43 PM, Abram Demski  wrote:
snip
 By the way, where does this term wireheading come from? I assume
 from context that it simply means self-stimulation.


Science Fiction novels.

http://en.wikipedia.org/wiki/Wirehead
In Larry Niven's Known Space stories, a wirehead is someone who has
been fitted with an electronic brain implant (called a droud in the
stories) to stimulate the pleasure centers of their brain.

In 2006, The Guardian reported that trials of Deep brain stimulation
with electric current, via wires inserted into the brain, had
successfully lifted the mood of depression sufferers.[1] This is
exactly the method used by wireheads in the earlier Niven stories
(such as the 'Gil the Arm' story Death By Ecstasy).

In the Shaper/Mechanist stories of Bruce Sterling, wirehead is the
Mechanist term for a human who has given up corporeal existence and
become an infomorph.
--


BillK




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Matt Mahoney
See also http://wireheading.com/

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: BillK [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 4:50:56 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))

On Wed, Aug 27, 2008 at 8:43 PM, Abram Demski  wrote:
snip
 By the way, where does this term wireheading come from? I assume
 from context that it simply means self-stimulation.


Science Fiction novels.

http://en.wikipedia.org/wiki/Wirehead
In Larry Niven's Known Space stories, a wirehead is someone who has
been fitted with an electronic brain implant (called a droud in the
stories) to stimulate the pleasure centers of their brain.

In 2006, The Guardian reported that trials of Deep brain stimulation
with electric current, via wires inserted into the brain, had
successfully lifted the mood of depression sufferers.[1] This is
exactly the method used by wireheads in the earlier Niven stories
(such as the 'Gil the Arm' story Death By Ecstasy).

In the Shaper/Mechanist stories of Bruce Sterling, wirehead is the
Mechanist term for a human who has given up corporeal existence and
become an infomorph.
--


BillK




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Matt Mahoney
Mark Waser [EMAIL PROTECTED] wrote:

 All rational goal-seeking agents must have a mental state of maximum
 utility where any thought or perception would be unpleasant because it
 would result in a different state.

I'd love to see you attempt to prove the above statement.

What if there are several states with utility equal to or very close to
the maximum?

Then you will be indifferent as to whether you stay in one state or move
between them.

What if the utility of the state decreases the longer that you are in it
(something that is *very* true of human beings)?

If you are aware of the passage of time, then you are not staying in the
same state.

What if uniqueness raises the utility of any new state sufficiently that
there will always be states that are better than the current state (since
experiencing uniqueness normally improves fitness through learning, etc)?

Then you are not rational, because your utility function does not define a
total order. If you prefer A to B and B to C and C to A, as in the case you
described, then you can be exploited. If you are rational and you have a
finite number of states, then there is at least one state for which there
is no better state. The human brain is certainly finite, and has at most
2^(10^15) states.
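
The exploitation claim is the standard "money pump": a toy Python sketch with
invented items and fees, showing an agent with the cyclic preference A over B,
B over C, C over A paying for every swap and ending up where it started, only
poorer.

    # Cyclic preferences: A > B, B > C, C > A.
    prefers = {("A", "B"), ("B", "C"), ("C", "A")}

    def accepts_trade(offered, held):
        return (offered, held) in prefers

    def money_pump(start_item, start_money, fee=1, rounds=9):
        held, money = start_item, start_money
        cycle = {"A": "C", "B": "A", "C": "B"}  # always offer what beats the held item
        for _ in range(rounds):
            offered = cycle[held]
            if accepts_trade(offered, held):
                held, money = offered, money - fee
        return held, money

    print(money_pump("B", start_money=10))  # after nine trades it holds "B"
                                            # again, but has paid nine fees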

 -- Matt Mahoney, [EMAIL PROTECTED]




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Charles Hixson

Matt Mahoney wrote:
An AGI will not design its goals. It is up to humans to define the 
goals of an AGI, so that it will do what we want it to do.
Are you certain that this is the optimal approach?  To me it seems more 
promising to design the motives, and to allow the AGI to design its own 
goals to satisfy those motives.  This provides less fine grained control 
over the AGI, but I feel that a fine-grained control would be 
counter-productive.


To me the difficulty is designing the motives of the AGI in such a way 
that they will facilitate human life, when they must be implanted in an 
AGI that currently has no concept of an external universe, much less any 
particular classes of inhabitant therein.  The only (partial) solution 
that I've been able to come up with so far (i.e., identify, not design) 
is based around imprinting.  This is fine for the first generation 
(probably, if everything is done properly), but it's not clear that it 
would be fine for the second generation et seq.  For this reason RSI is 
very important.  It allows all succeeding generations to be derived from 
the first by cloning, which would preserve the initial imprints.


Unfortunately, this is a problem. We may or may not be successful in 
programming the goals of AGI to satisfy human goals. If we are not 
successful, ... unpleasant because it would result in a different state.
 
-- Matt Mahoney, [EMAIL PROTECTED]


Failure is an extreme danger, but it's not only failure to design safely 
that's a danger.  Failure to design a successful AGI at all could be 
nearly as great a danger.  Society has become too complex to be safely 
managed by the current approaches...and things aren't getting any simpler.






Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Matt Mahoney
Abram Demski [EMAIL PROTECTED] wrote:
First, I do not think it is terribly difficult to define a Goedel
machine that does not halt. It interacts with its environment, and
there is some utility value attached to this interaction, and it
attempts to rewrite its code to maximize this utility.

It's not that the machine halts, but that it makes no further improvements once 
the best solution is found. This might not be a practical concern if the 
environment is very complex.

However, I doubt that a Goedel machine could even be built. Legg showed [1] 
that Goedel incompleteness is ubiquitous. To paraphrase, beyond some low level 
of complexity, you can't prove anything. Perhaps this is the reason we have not 
(AFAIK) built a software model, even for very simple sets of axioms.

If we resort to probabilistic evidence of improvement rather than proofs, then 
it is no longer a Goedel machine, and I think we would need experimental 
verification of RSI. Random modifications of code are much more likely to be 
harmful than helpful, so we would need to show that improvements could be 
detected with a very low false positive rate.

1. http://www.vetta.org/documents/IDSIA-12-06-1.pdf
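
A sketch of the kind of experimental check that would replace proof. The
benchmark and the agents below are placeholders; the substance is the paired,
one-sided test that only accepts a modification when its measured advantage is
large relative to the noise, keeping the false-positive rate very low.

    import math, random

    def score(agent, env_seed):
        # Hypothetical benchmark: deterministic for a given (agent, environment)
        # pair, noisy across environments; returns a reward in [0, 1].
        rng = random.Random(env_seed * 1009 + agent["id"])
        return max(0.0, min(1.0, rng.gauss(agent["skill"], 0.1)))

    def looks_like_improvement(baseline, candidate, n_envs=400, z_required=4.0):
        # Paired comparison over the same sampled environments. Demanding a
        # large z-score keeps the false-positive rate very low (z >= 4 is
        # roughly a 0.003% one-sided false-positive rate under normality).
        diffs = [score(candidate, s) - score(baseline, s) for s in range(n_envs)]
        mean = sum(diffs) / n_envs
        var = sum((d - mean) ** 2 for d in diffs) / (n_envs - 1)
        if var == 0:
            return mean > 0
        return mean / math.sqrt(var / n_envs) >= z_required

    base = {"id": 1, "skill": 0.50}
    print(looks_like_improvement(base, {"id": 2, "skill": 0.55}))  # likely True
    print(looks_like_improvement(base, {"id": 3, "skill": 0.50}))  # almost surely False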


 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 11:40:24 AM
Subject: Re: Goedel machines (was Re: Information theoretic approaches to AGI 
(was Re: [agi] The Necessity of Embodiment))

Matt,

Thanks for the reply. There are 3 reasons that I can think of for
calling Goedel machines bounded:

1. As you assert, once a solution is found, it stops.
2. It will be on a finite computer, so it will eventually reach the
one best version of itself that it can reach.
3. It can only make provably correct steps, which is very limiting
thanks to Godel's incompleteness theorem.

I'll try to argue that each of these limits can be overcome in
principle, and we'll see if the result satisfies your RSI criteria.

First, I do not think it is terribly difficult to define a Goedel
machine that does not halt. It interacts with its environment, and
there is some utility value attached to this interaction, and it
attempts to rewrite its code to maximize this utility.

The second and third need to be tackled together, because the main
reason that a Goedel machine can't improve its own hardware is because
there is uncertainty involved, so it would never be provably better.
There is always some chance of hardware malfunction. So, I think it is
necessary to grant the possibility of modifications that are merely
very probably correct. Once this is done, 2 and 3 fall fairly easily,
assuming that the machine begins life with a good probabilistic
learning system. That is a big assumption, but we can grant it for the
moment I think?

For the sake of concreteness, let's say that the utility value is some
(probably very complex) attempt to logically describe Eliezer-style
Friendliness, and that the probabilistic learning system is an
approximation of AIXI (which the system will improve over time along
with everything else). (These two choices don't reflect my personal
tastes, they are just examples.)

By tweaking the allowances the system makes, we might either have a
slow self-improver that is, say, 99.999% probable to only improve
itself in the next 100 years, or a faster self-improver that is 50%
guaranteed.

Does this satisfy your criteria?

On Wed, Aug 27, 2008 at 9:14 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Abram Demski [EMAIL PROTECTED] wrote:
Matt,

What is your opinion on Goedel machines?

 http://www.idsia.ch/~juergen/goedelmachine.html


 Thanks for the link. If I understand correctly, this is a form of bounded 
 RSI, so it could not lead to a singularity. A Goedel machine is functionally 
 equivalent to AIXI^tl in that it finds the optimal reinforcement learning 
 solution given a fixed environment and utility function. The difference is 
 that AIXI^tl does a brute force search of all machines up to length l for 
 time t each, so it runs in O(t 2^l) time. A Goedel machine achieves the same 
 result more efficiently through a series of self improvements by proving that 
 each proposed modification (including modifications to its own proof search 
 code) is an actual improvement. It does this by using an instruction set such 
 that it is impossible to construct incorrect proof verification code.

 What I am looking for is unbounded RSI capable of increasing intelligence. A 
 Goedel machine doesn't do this because once it finds a solution, it stops. 
 This is the same problem as a chess playing program that plays randomly 
 modified copies of itself in death matches. At some point, it completely 
 solves the chess problem and stops improving.

 Ideally we should use a scalable test for intelligence such as Legg and 
 Hutter's universal intelligence, which measures expected accumulated reward 
 over a Solomonoff distribution of environments (random 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser
What if the utility of the state decreases the longer that you are in it
(something that is *very* true of human beings)?
If you are aware of the passage of time, then you are not staying in the
same state.


I have to laugh.  So you agree that all your arguments don't apply to 
anything that is aware of the passage of time?  That makes them really 
useful, doesn't it.








Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser

Hi,

   I think that I'm missing some of your points . . . .


Whatever good is, it cannot be something directly
observable, or the AI will just wirehead itself (assuming it gets
intelligent enough to do so, of course).


I don't understand this unless you mean by directly observable that the 
definition is observable and changeable.  If I define good as making all 
humans happy without modifying them, how would the AI wirehead itself?  What 
am I missing here?



So, the AI needs to have a concept of external goodness, with a weak
probabilistic correlation to its directly observable pleasure.


I agree with the concept of external goodness but why does the correlation 
between external goodness and its pleasure have to be low?  Why can't
external goodness directly cause pleasure?  Clearly, it shouldn't believe
that its pleasure causes external goodness (that would be reversing cause
and effect and an obvious logic error).


   Mark

P.S.  I notice that several others answered your wirehead query so I won't 
belabor the point.  :-)



- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 3:43 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Mark,

The main motivation behind my setup was to avoid the wirehead
scenario. That is why I make the explicit goodness/pleasure
distinction. Whatever good is, it cannot be something directly
observable, or the AI will just wirehead itself (assuming it gets
intelligent enough to do so, of course). But, goodness cannot be
completely unobservable, or the AI will have no idea what it should
do.

So, the AI needs to have a concept of external goodness, with a weak
probabilistic correlation to its directly observable pleasure. That
way, the system will go after pleasant things, but won't be able to
fool itself with things that are maximally pleasant. For example, if
it were to consider rewiring its visual circuits to see only
skin-color, it would not like the idea, because it would know that
such a move would make it less able to maximize goodness in general.
(It would know that seeing only tan does not mean that the entire
world is made of pure goodness.) An AI that was trying to maximize
pleasure would see nothing wrong with self-stimulation of this sort.

So, I think that pushing the problem of goal-setting back to
pleasure-setting is very useful for avoiding certain types of
undesirable behavior.

By the way, where does this term wireheading come from? I assume
from context that it simply means self-stimulation.

-Abram Demski

On Wed, Aug 27, 2008 at 2:58 PM, Mark Waser [EMAIL PROTECTED] wrote:

Hi,

  A number of problems unfortunately . . . .


-Learning is pleasurable.


. . . . for humans.  We can choose whether to make it so for machines or
not.  Doing so would be equivalent to setting a goal of learning.


-Other things may be pleasurable depending on what we initially want
the AI to enjoy doing.


  See . . . all you've done here is pushed goal-setting to pleasure-setting
. . . .

= = = = =

  Further, if you judge goodness by pleasure, you'll probably create an AGI
whose shortest path-to-goal is to wirehead the universe (which I consider to
be a seriously suboptimal situation - YMMV).




- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 2:25 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches 
to

AGI (was Re: [agi] The Necessity of Embodiment))



Mark,

OK, I take up the challenge. Here is a different set of goal-axioms:

-Good is a property of some entities.
-Maximize good in the world.
-A more-good entity is usually more likely to cause goodness than a
less-good entity.
-A more-good entity is often more likely to cause pleasure than a
less-good entity.
-Self is the entity that causes my actions.
-An entity with properties similar to self is more likely to be good.

Pleasure, unlike goodness, is directly observable. It comes from many
sources. For example:
-Learning is pleasurable.
-A full battery is pleasurable (if relevant).
-Perhaps the color of human skin is pleasurable in and of itself.
(More specifically, all skin colors of any existing race.)
-Perhaps also the sound of a human voice is pleasurable.
-Other things may be pleasurable depending on what we initially want
the AI to enjoy doing.

So, the definition of good is highly probabilistic, and the system's
inferences about goodness will depend on its experiences; but pleasure
can be directly observed, and the pleasure-mechanisms remain fixed.

On Wed, Aug 27, 2008 at 12:32 PM, Mark Waser [EMAIL PROTECTED] 
wrote:


But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only 

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Matt Mahoney
Mark Waser [EMAIL PROTECTED] wrote:

What if the utility of the state decreases the longer that you are in it
(something that is *very* true of human beings)?
 If you are aware of the passage of time, then you are not staying in the
 same state.

I have to laugh.  So you agree that all your arguments don't apply to 
anything that is aware of the passage of time?  That makes them really 
useful, doesn't it.

No, the state of ultimate bliss that you, I, and all other rational, goal 
seeking agents seek is a mental state in which nothing perceptible happens. 
Without thought or sensation, you would be unaware of the passage of time, or 
of anything else. If you are aware of time then you are either not in this 
state yet, or are leaving it.

You may say that is not what you want, but only because you are unaware of the 
possibilities of reprogramming your brain. It is like being opposed to drugs or 
wireheading. Once you experience it, you can't resist.

 -- Matt Mahoney, [EMAIL PROTECTED]




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Matt Mahoney
Goals and motives are the same thing, in the sense that I mean them.
We want the AGI to want to do what we want it to do.

Failure is an extreme danger, but it's not only failure to design safely 
that's a danger.  Failure to design a successful AGI at all could be 
nearly as great a danger.  Society has become too complex to be safely 
managed by the current approaches...and things aren't getting any simpler.


No, technology is the source of complexity, not the cure for it. But that is 
what we want. Life, health, happiness, freedom from work. AGI will cost $1 
quadrillion to build, but we will build it because it is worth that much. And 
then it will kill us, not against our will, but because we want to live in 
simulated worlds with magic genies.
 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: Charles Hixson [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 7:16:53 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))

Matt Mahoney wrote:
 An AGI will not design its goals. It is up to humans to define the 
 goals of an AGI, so that it will do what we want it to do.
Are you certain that this is the optimal approach?  To me it seems more 
promising to design the motives, and to allow the AGI to design its own 
goals to satisfy those motives.  This provides less fine grained control 
over the AGI, but I feel that a fine-grained control would be 
counter-productive.

To me the difficulty is designing the motives of the AGI in such a way 
that they will facilitate human life, when they must be implanted in an 
AGI that currently has no concept of an external universe, much less any 
particular classes of inhabitant therein.  The only (partial) solution 
that I've been able to come up with so far (i.e., identify, not design) 
is based around imprinting.  This is fine for the first generation 
(probably, if everything is done properly), but it's not clear that it 
would be fine for the second generation et seq.  For this reason RSI is 
very important.  It allows all succeeding generations to be derived from 
the first by cloning, which would preserve the initial imprints.

 Unfortunately, this is a problem. We may or may not be successful in 
 programming the goals of AGI to satisfy human goals. If we are not 
 successful, ... unpleasant because it would result in a different state.
  
 -- Matt Mahoney, [EMAIL PROTECTED]

Failure is an extreme danger, but it's not only failure to design safely 
that's a danger.  Failure to design a successful AGI at all could be 
nearly as great a danger.  Society has become too complex to be safely 
managed by the current approaches...and things aren't getting any simpler.




Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-27 Thread John LaMuth
Matt

You are just goin' to have to take my word for it all ...

Besides, my ideas stand alone apart from any sheepskin rigamarole ...

BTW, please don't throw out any more grand challenges if you are just goin' to 
play the TEASE about addressing the relevant issues.

John LaMuth

http://www.charactervalues.com 
http://www.charactervalues.org 
http://www.charactervalues.net 
http://www.ethicalvalues.com 
http://www.ethicalvalues.info 
http://www.emotionchip.net 
http://www.global-solutions.org 
http://www.world-peace.org 
http://www.angelfire.com/rnb/fairhaven/schematics.html 
http://www.angelfire.com/rnb/fairhaven/behaviorism.html 
http://www.forebrain.org

  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Wednesday, August 27, 2008 7:55 AM
  Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)


  John, are any of your peer-reviewed papers online? I can't seem to find 
them...


  -- Matt Mahoney, [EMAIL PROTECTED]




  - Original Message 
  From: John LaMuth [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Tuesday, August 26, 2008 2:35:10 AM
  Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)


  Matt

  Below is a sampling of my peer reviewed conference presentations on my 
background ethical theory ...

  This should elevate me above the common crackpot

  #

  Talks

  - Presentation of a paper at the ISSS 2000 (International Society for
    Systems Sciences) Conference in Toronto, Canada, on various aspects of
    the new science of Powerplay Politics.
  - Toward a Science of Consciousness: TUCSON, April 8-12, 2002, Tucson
    Convention Center, Tucson, Arizona, sponsored by the Center for
    Consciousness Studies, University of Arizona (poster presentation).
  - Poster at the 8th International Tsukuba Bioethics Conference, Tsukuba,
    Japan, Feb. 15-17, 2003.
  - Paper "The Communicational Factors Underlying the Mental Disorders" at
    the 2006 Annual Conference of the Western Psychological Association,
    Palm Springs, CA.

  Honors

  - Honors Diploma for Research in Biological Sciences (June 1977), Univ.
    of Calif. Irvine.
  - Member of the APA and the American Philosophical Association.

  LaMuth, J. E. (1977). The Development of the Forebrain as an Elementary
  Function of the Parameters of Input Specificity and Phylogenetic Age.
  J. U-grad Rsch: Bio. Sci. U. C. Irvine. (6): 274-294.

  LaMuth, J. E. (2000). A Holistic Model of Ethical Behavior Based Upon a
  Metaperspectival Hierarchy of the Traditional Groupings of Virtue,
  Values, and Ideals. Proceedings of the 44th Annual World Congress for
  the Int. Society for the Systems Sciences, Toronto.

  LaMuth, J. E. (2003). Inductive Inference Affective Language Analyzer
  Simulating AI. US Patent # 6,587,846.

  LaMuth, J. E. (2004). Behavioral Foundations for the Behaviourome / Mind
  Mapping Project. Proceedings for the Eighth International Tsukuba
  Bioethics Roundtable, Tsukuba, Japan.

  LaMuth, J. E. (2005). A Diagnostic Classification of the Emotions: A
  Three-Digit Coding System for Affective Language. Lucerne Valley:
  Fairhaven.

  LaMuth, J. E. (2007). Inductive Inference Affective Language Analyzer
  Simulating Transitional AI. US Patent # 7,236,963.



  **

  Although I currently have no working model, I am collaborating on a working 
prototype.



  I was responding to your challenge for ...an example of a mathematical, 
software, biological, or physical example of RSI, or at least a plausible 
argument that one could be created



  I feel I have proposed a plausible argument, and considering the great stakes 
involved concerning ethical safeguards for AI, an avenue worthy of critique ...



  More on this in the last half of )
  www.angelfire.com/rnb/fairhaven/specs.html 


  John LaMuth

  www.ethicalvalues.com 








- Original Message - 
From: Matt Mahoney 
To: agi@v2.listbox.com 
Sent: Monday, August 25, 2008 7:30 AM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)


John, I have looked at your patent and various web pages. You list a lot of 
nice sounding ethical terms (honor, love, hope, peace, etc) but give no details 
on how to implement them. You have already admitted that you have no 
experimental results, haven't actually built anything, and have no other 
results such as refereed conference or journal papers describing your system. 
If I am wrong about this, please let me know.


-- Matt Mahoney, [EMAIL PROTECTED] 



Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-26 Thread John LaMuth
Matt

Below is a sampling of my peer reviewed conference presentations on my 
background ethical theory ...

This should elevate me above the common crackpot

#

Talks

- Presentation of a paper at the ISSS 2000 (International Society for
  Systems Sciences) Conference in Toronto, Canada, on various aspects of
  the new science of Powerplay Politics.
- Toward a Science of Consciousness: TUCSON, April 8-12, 2002, Tucson
  Convention Center, Tucson, Arizona, sponsored by the Center for
  Consciousness Studies, University of Arizona (poster presentation).
- Poster at the 8th International Tsukuba Bioethics Conference, Tsukuba,
  Japan, Feb. 15-17, 2003.
- Paper "The Communicational Factors Underlying the Mental Disorders" at
  the 2006 Annual Conference of the Western Psychological Association,
  Palm Springs, CA.

Honors

- Honors Diploma for Research in Biological Sciences (June 1977), Univ. of
  Calif. Irvine.
- Member of the APA and the American Philosophical Association.

LaMuth, J. E. (1977). The Development of the Forebrain as an Elementary
Function of the Parameters of Input Specificity and Phylogenetic Age.
J. U-grad Rsch: Bio. Sci. U. C. Irvine. (6): 274-294.

LaMuth, J. E. (2000). A Holistic Model of Ethical Behavior Based Upon a
Metaperspectival Hierarchy of the Traditional Groupings of Virtue, Values,
and Ideals. Proceedings of the 44th Annual World Congress for the Int.
Society for the Systems Sciences, Toronto.

LaMuth, J. E. (2003). Inductive Inference Affective Language Analyzer
Simulating AI. US Patent # 6,587,846.

LaMuth, J. E. (2004). Behavioral Foundations for the Behaviourome / Mind
Mapping Project. Proceedings for the Eighth International Tsukuba
Bioethics Roundtable, Tsukuba, Japan.

LaMuth, J. E. (2005). A Diagnostic Classification of the Emotions: A
Three-Digit Coding System for Affective Language. Lucerne Valley:
Fairhaven.

LaMuth, J. E. (2007). Inductive Inference Affective Language Analyzer
Simulating Transitional AI. US Patent # 7,236,963.



**

Although I currently have no working model, I am collaborating on a working 
prototype.



I was responding to your challenge for ...an example of a mathematical, 
software, biological, or physical example of RSI, or at least a plausible 
argument that one could be created



I feel I have proposed a plausible argument, and considering the great stakes 
involved concerning ethical safeguards for AI, an avenue worthy of critique ...



More on this in the last half of )
www.angelfire.com/rnb/fairhaven/specs.html 


John LaMuth

www.ethicalvalues.com 








  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Monday, August 25, 2008 7:30 AM
  Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)


  John, I have looked at your patent and various web pages. You list a lot of 
nice sounding ethical terms (honor, love, hope, peace, etc) but give no details 
on how to implement them. You have already admitted that you have no 
experimental results, haven't actually built anything, and have no other 
results such as refereed conference or journal papers describing your system. 
If I am wrong about this, please let me know.


  -- Matt Mahoney, [EMAIL PROTECTED]




  - Original Message 
  From: John LaMuth [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Sunday, August 24, 2008 11:21:30 PM
  Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)



  - Original Message - 
  From: Matt Mahoney [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Sunday, August 24, 2008 2:46 PM
  Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)


  I have challenged this list as well as the singularity and SL4 lists to come 
up with an example of a mathematical, software, biological, or physical example 
of RSI, or at least a plausible argument that one could be created, and nobody 
has. To qualify, an agent has to modify itself or create a more intelligent 
copy of itself according to an intelligence test chosen by the original. The 
following are not examples of RSI:
   
   1. Evolution of life, including humans.
   2. Emergence of language, culture, writing, communication technology, and 
computers.

   -- Matt Mahoney, [EMAIL PROTECTED]
   
  ###
  *

  Matt

  Where have you been for the last 2 months ??

  I had been talking then about my 2 US Patents for ethical/friendly AI
  along lines of a recursive simulation targeting 

Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-26 Thread Valentina Poletti
Thanks very much for the info. I found those articles very interesting.
Actually though this is not quite what I had in mind with the term
information-theoretic approach. I wasn't very specific, my bad. What I am
looking for is a theory behind the actual R itself. These approaches
(correct me if I'm wrong) take an r-function for granted and work from
that. In real life that is not the case though. What I'm looking for is how
the AGI will create that function. Because the AGI is created by humans,
some sort of direction will be given by the humans creating them. What kind
of direction, in mathematical terms, is my question. In other words I'm
looking for a way to mathematically define how the AGI will mathematically
define its goals.

Valentina


On 8/23/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Valentina Poletti [EMAIL PROTECTED] wrote:
  I was wondering why no-one had brought up the information-theoretic
 aspect of this yet.

 It has been studied. For example, Hutter proved that the optimal strategy
 of a rational goal seeking agent in an unknown computable environment is
 AIXI: to guess that the environment is simulated by the shortest program
 consistent with observation so far [1]. Legg and Hutter also propose as a
 measure of universal intelligence the expected reward over a Solomonoff
 distribution of environments [2].

 These have profound impacts on AGI design. First, AIXI is (provably) not
 computable, which means there is no easy shortcut to AGI. Second, universal
 intelligence is not computable because it requires testing in an infinite
 number of environments. Since there is no other well accepted test of
 intelligence above human level, it casts doubt on the main premise of the
 singularity: that if humans can create agents with greater than human
 intelligence, then so can they.

 Prediction is central to intelligence, as I argue in [3]. Legg proved in
 [4] that there is no elegant theory of prediction. Predicting all
 environments up to a given level of Kolmogorov complexity requires a
 predictor with at least the same level of complexity. Furthermore, above a
 small level of complexity, such predictors cannot be proven because of Godel
 incompleteness. Prediction must therefore be an experimental science.

 There is currently no software or mathematical model of non-evolutionary
 recursive self improvement, even for very restricted or simple definitions
 of intelligence. Without a model you don't have friendly AI; you have
 accelerated evolution with AIs competing for resources.

 References

 1. Hutter, Marcus (2003), A Gentle Introduction to The Universal
 Algorithmic Agent {AIXI},
 in Artificial General Intelligence, B. Goertzel and C. Pennachin eds.,
 Springer. http://www.idsia.ch/~marcus/ai/aixigentle.htm

 2. Legg, Shane, and Marcus Hutter (2006),
 A Formal Measure of Machine Intelligence, Proc. Annual machine
 learning conference of Belgium and The Netherlands (Benelearn-2006).
 Ghent, 2006.  http://www.vetta.org/documents/ui_benelearn.pdf

 3. http://cs.fit.edu/~mmahoney/compression/rationale.html

 4. Legg, Shane, (2006), Is There an Elegant Universal Theory of
 Prediction?,
 Technical Report IDSIA-12-06, IDSIA / USI-SUPSI,
 Dalle Molle Institute for Artificial Intelligence, Galleria 2, 6928 Manno,
 Switzerland.
 http://www.vetta.org/documents/IDSIA-12-06-1.pdf

 -- Matt Mahoney, [EMAIL PROTECTED]






-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken





Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-26 Thread Mike Tintner
Valentina: In other words I'm looking for a way to mathematically define how the 
AGI will mathematically define its goals.

Holy Non-Existent Grail? Has  any new branch of logic or mathematics ever been 
logically or mathematically (axiomatically) derivable from any old one?  e.g. 
topology,  Riemannian geometry, complexity theory, fractals,  free-form 
deformation  etc etc




Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-26 Thread Abram Demski
Mike,

The answer here is yes. Many new branches of mathematics have arisen
since the formalization of set theory, but most of them can be
interpreted as special branches of set theory. Moreover, mathematicians
often find this interpretation genuinely useful, not merely a curiosity.
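
A concrete instance, sketched with Python frozensets standing in for pure
sets (the standard Kuratowski-pair and von-Neumann-ordinal encodings):
objects that other branches treat as primitive, such as ordered pairs and
natural numbers, can be rebuilt out of sets alone.

# Ordered pairs and natural numbers encoded as nothing but (frozen)sets:
# the usual set-theoretic interpretations of structures that other branches
# of mathematics take as primitive.

def pair(a, b):
    """Kuratowski encoding: (a, b) = {{a}, {a, b}}."""
    return frozenset({frozenset({a}), frozenset({a, b})})

def succ(n):
    """Von Neumann successor: n + 1 = n united with {n}."""
    return n | frozenset({n})

zero = frozenset()
one, two = succ(zero), succ(succ(zero))

# The encodings respect the structure they imitate:
assert pair(one, two) != pair(two, one)   # order is recovered
assert zero in one and one in two         # m < n  iff  m is an element of n
print("set-theoretic encodings behave as expected")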

--Abram Demski

On Tue, Aug 26, 2008 at 12:32 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Valentina:In other words I'm looking for a way to mathematically define how
 the AGI will mathematically define its goals.

 Holy Non-Existent Grail? Has  any new branch of logic or mathematics ever
 been logically or mathematically (axiomatically) derivable from any old
 one?  e.g. topology,  Riemannian geometry, complexity theory, fractals,
 free-form deformation  etc etc
 




Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-26 Thread Mike Tintner

Abram,

Thanks for the reply. This is presumably after the fact - can set theory
predict new branches? Which branch of maths was set theory derivable from?
I suspect that's rather like trying to derive any numeral system from a
previous one, or like trying to derive any programming language from a
previous one - or any system of logical notation from a previous one.



Mike,

The answer here is a yes. Many new branches of mathematics have arisen
since the formalization of set theory, but most of them can be
interpreted as special branches of set theory. Moreover,
mathematicians often find this to be actually useful, not merely a
curiosity.

--Abram Demski

On Tue, Aug 26, 2008 at 12:32 PM, Mike Tintner [EMAIL PROTECTED] 
wrote:
Valentina:In other words I'm looking for a way to mathematically define 
how

the AGI will mathematically define its goals.

Holy Non-Existent Grail? Has  any new branch of logic or mathematics ever
been logically or mathematically (axiomatically) derivable from any old
one?  e.g. topology,  Riemannian geometry, complexity theory, fractals,
free-form deformation  etc etc









Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-26 Thread Abram Demski
Mike,

That may be the case, but I do not think it is relevant to Valentina's
point. How can we mathematically define how an AGI might
mathematically define its own goals? Well, that question assumes 3
things:

-An AGI defines its own goals
-In doing so, it phrases them in mathematical language
-It is possible to mathematically define the way in which it does this

I think you are questioning assumptions 2 and 3? If so, I do not think
that the theory needs to be able to do what you are saying it cannot:
it does not need to be able to generate new branches of mathematics
from itself before-the-fact. Rather, its ability to generate new
branches (or, in our case, goals) can and should depend on the
information coming in from the environment.

Whether such a logic really exists, though, is a different question.
Before we can choose which goals we should pick, we need some criterion by
which to judge them; but it seems like such a criterion is already a goal.
So, I could cook up any method of choosing goals that sounded OK and claim
that it was the solution to Valentina's problem, because Valentina's
problem is not yet well-defined.

The closest thing to a solution would be to purposefully give an AGI a
complex, probabilistically-defined, and often-conflicting goal system
with many diverse types of pleasure, like humans have.
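
That last suggestion can at least be sketched. A minimal toy version in
Python (the sub-goals, weights, and state fields below are illustrative
assumptions, not a proposal): several conflicting drives are combined with
weights that are themselves sampled rather than fixed, so no single
objective dominates deterministically.

import random

# Illustrative, conflicting sub-goals over a toy "state" (all names made up
# for the example): curiosity rewards novelty, comfort penalizes effort,
# social rewards approval.
SUB_GOALS = {
    "curiosity": lambda state: state["novelty"],
    "comfort":   lambda state: -state["effort"],
    "social":    lambda state: state["approval"],
}

def sample_goal_weights(rng):
    """Dirichlet-style weights: priorities are probabilistic, not fixed."""
    raw = {name: rng.gammavariate(1.0, 1.0) for name in SUB_GOALS}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

def evaluate(state, weights):
    """Combined, often self-conflicting utility of a candidate state."""
    return sum(weights[name] * goal(state) for name, goal in SUB_GOALS.items())

if __name__ == "__main__":
    rng = random.Random(0)
    candidates = [
        {"novelty": 0.9, "effort": 0.8, "approval": 0.1},   # explore
        {"novelty": 0.1, "effort": 0.1, "approval": 0.2},   # rest
        {"novelty": 0.3, "effort": 0.4, "approval": 0.9},   # socialize
    ]
    weights = sample_goal_weights(rng)
    best = max(candidates, key=lambda s: evaluate(s, weights))
    print(weights, best)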

On Tue, Aug 26, 2008 at 2:36 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Abram,

 Thanks for reply. This is presumably after the fact -  can set theory
 predict new branches? Which branch of maths was set theory derivable from? I
 suspect that's rather like trying to derive any numeral system from a
 previous one. Or like trying to derive any programming language from a
 previous one- or any system of logical notation from a previous one.

 Mike,

 The answer here is a yes. Many new branches of mathematics have arisen
 since the formalization of set theory, but most of them can be
 interpreted as special branches of set theory. Moreover,
 mathematicians often find this to be actually useful, not merely a
 curiosity.

 --Abram Demski

 On Tue, Aug 26, 2008 at 12:32 PM, Mike Tintner [EMAIL PROTECTED]
 wrote:

 Valentina:In other words I'm looking for a way to mathematically define
 how
 the AGI will mathematically define its goals.

 Holy Non-Existent Grail? Has  any new branch of logic or mathematics ever
 been logically or mathematically (axiomatically) derivable from any old
 one?  e.g. topology,  Riemannian geometry, complexity theory, fractals,
 free-form deformation  etc etc
 












Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-25 Thread Matt Mahoney
John, I have looked at your patent and various web pages. You list a lot
of nice-sounding ethical terms (honor, love, hope, peace, etc.) but give
no details on how to implement them. You have already admitted that you
have no experimental results, haven't actually built anything, and have no
other results such as refereed conference or journal papers describing
your system. If I am wrong about this, please let me know.

 -- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: John LaMuth [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, August 24, 2008 11:21:30 PM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)

 
 
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, August 24, 2008 2:46 PM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The
Necessity of Embodiment)

 I have challenged this list as well as the singularity and SL4 lists to
 come up with an example of a mathematical, software, biological, or
 physical example of RSI, or at least a plausible argument that one could
 be created, and nobody has. To qualify, an agent has to modify itself or
 create a more intelligent copy of itself according to an intelligence
 test chosen by the original. The following are not examples of RSI:

 1. Evolution of life, including humans.
 2. Emergence of language, culture, writing, communication technology, and
 computers.

 -- Matt Mahoney, [EMAIL PROTECTED]

###*

Matt

Where have you been for the last 2 months ??

I had been talking then about my 2 US Patents for ethical/friendly AI
along lines of a recursive simulation targeting language (topic 2) above.

This language agent employs feedback loops and LTM to increase
comprehension and accuracy
(and BTW - resolves the ethical safeguard problems for AI) ...

No-one yet has proven me wrong ?? Howsabout YOU ???

More at
www.angelfire.com/rnb/fairhaven/specs.html


John LaMuth

www.ethicalvalues.com
 
 
 


 


Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-25 Thread Abram Demski
Matt,

What is your opinion on Goedel machines?

http://www.idsia.ch/~juergen/goedelmachine.html

--Abram

On Sun, Aug 24, 2008 at 5:46 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Eric Burton [EMAIL PROTECTED] wrote:


These have profound impacts on AGI design. First, AIXI is (provably) not 
computable,
which means there is no easy shortcut to AGI. Second, universal intelligence 
is not
computable because it requires testing in an infinite number of 
environments. Since
there is no other well accepted test of intelligence above human level, it 
casts doubt on
the main premise of the singularity: that if humans can create agents with 
greater than
human intelligence, then so can they.

I don't know for sure that these statements logically follow from one
another.

 They don't. I cannot prove that there is no non-evolutionary model of 
 recursive self improvement (RSI). Nor can I prove that there is. But it is a 
 question we need to answer before an evolutionary model becomes technically 
 feasible, because an evolutionary model is definitely unfriendly.

Higher intelligence bootstrapping itself has already been proven on
Earth. Presumably it can happen in a simulation space as well, right?

 If you mean the evolution of humans, that is not an example of RSI. One 
 requirement of friendly AI is that an AI cannot alter its human-designed 
 goals. (Another is that we get the goals right, which is unsolved). However, 
 in an evolutionary environment, the parents do not get to choose the goals of 
 their children. Evolution chooses goals that maximize reproductive fitness, 
 regardless of what you want.

 I have challenged this list as well as the singularity and SL4 lists to come 
 up with an example of a mathematical, software, biological, or physical 
 example of RSI, or at least a plausible argument that one could be created, 
 and nobody has. To qualify, an agent has to modify itself or create a more 
 intelligent copy of itself according to an intelligence test chosen by the 
 original. The following are not examples of RSI:

 1. Evolution of life, including humans.
 2. Emergence of language, culture, writing, communication technology, and 
 computers.
 3. A chess playing (or tic-tac-toe, or factoring, or SAT solving) program 
 that makes modified copies of itself by
 randomly flipping bits in a compressed representation of its source
 code, and playing its copies in death matches.
 4. Selective breeding of children for those that get higher grades in school.
 5. Genetic engineering of humans for larger brains.

 1 fails because evolution is smarter than all of human civilization if you 
 measure intelligence in bits of memory. A model of evolution uses 10^37 bits 
 (10^10 bits of DNA per cell x 10^14 cells in the human body x 10^10 humans x 
 10^3 ratio of biomass to human mass). Human civilization has at most 10^25 
 bits (10^15 synapses in the human brain x 10^10 humans).

 2 fails because individual humans are not getting smarter with each 
 generation, at least not nearly as fast as civilization is advancing. Rather, 
 there are more humans, and we are getting better organized through 
 specialization of tasks. Human brains are not much different than they were 
 10,000 years ago.

 3 fails because there are no known classes of problems that are provably hard 
 to solve but easy to verify. Tic-tac-toe and chess have bounded complexity. 
 It has not been proven that factoring is harder than multiplication. We don't 
 know that P != NP, and even if we did, many NP-complete problems have special 
 cases that are easy to solve, and we don't know how to program the parent to 
 avoid these cases through successive generations.

 4 fails because there is no evidence that above a certain level (about IQ 
 200) that childhood intelligence correlates with adult success. The problem 
 is that adults of average intelligence can't agree on how success should be 
 measured*.

 5 fails for the same reason.

 *For example, the average person recognizes Einstein as a genius not because 
 they are
 awed by his theories of general relativity, but because other people
 have said so. If you just read his papers (without understanding their great 
 insights) and knew that he never learned to drive a car, you might conclude 
 differently.

  -- Matt Mahoney, [EMAIL PROTECTED]








Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-24 Thread Matt Mahoney
Eric Burton [EMAIL PROTECTED] wrote:


These have profound impacts on AGI design. First, AIXI is (provably) not 
computable,
which means there is no easy shortcut to AGI. Second, universal intelligence 
is not
computable because it requires testing in an infinite number of environments. 
Since
there is no other well accepted test of intelligence above human level, it 
casts doubt on
the main premise of the singularity: that if humans can create agents with 
greater than
human intelligence, then so can they.

I don't know for sure that these statements logically follow from one
another.

They don't. I cannot prove that there is no non-evolutionary model of recursive 
self improvement (RSI). Nor can I prove that there is. But it is a question we 
need to answer before an evolutionary model becomes technically feasible, 
because an evolutionary model is definitely unfriendly.

Higher intelligence bootstrapping itself has already been proven on
Earth. Presumably it can happen in a simulation space as well, right?

If you mean the evolution of humans, that is not an example of RSI. One 
requirement of friendly AI is that an AI cannot alter its human-designed goals. 
(Another is that we get the goals right, which is unsolved). However, in an 
evolutionary environment, the parents do not get to choose the goals of their 
children. Evolution chooses goals that maximize reproductive fitness, 
regardless of what you want.

I have challenged this list as well as the singularity and SL4 lists to come up 
with an example of a mathematical, software, biological, or physical example of 
RSI, or at least a plausible argument that one could be created, and nobody 
has. To qualify, an agent has to modify itself or create a more intelligent 
copy of itself according to an intelligence test chosen by the original. The 
following are not examples of RSI:

1. Evolution of life, including humans.
2. Emergence of language, culture, writing, communication technology, and 
computers.
3. A chess playing (or tic-tac-toe, or factoring, or SAT solving) program that 
makes modified copies of itself by
randomly flipping bits in a compressed representation of its source
code, and playing its copies in death matches.
4. Selective breeding of children for those that get higher grades in school.
5. Genetic engineering of humans for larger brains.

1 fails because evolution is smarter than all of human civilization if you 
measure intelligence in bits of memory. A model of evolution uses 10^37 bits 
(10^10 bits of DNA per cell x 10^14 cells in the human body x 10^10 humans x 
10^3 ratio of biomass to human mass). Human civilization has at most 10^25 bits 
(10^15 synapses in the human brain x 10^10 humans).

2 fails because individual humans are not getting smarter with each generation, 
at least not nearly as fast as civilization is advancing. Rather, there are 
more humans, and we are getting better organized through specialization of 
tasks. Human brains are not much different than they were 10,000 years ago.

3 fails because there are no known classes of problems that are provably hard 
to solve but easy to verify. Tic-tac-toe and chess have bounded complexity. It 
has not been proven that factoring is harder than multiplication. We don't know 
that P != NP, and even if we did, many NP-complete problems have special cases 
that are easy to solve, and we don't know how to program the parent to avoid 
these cases through successive generations.

4 fails because there is no evidence that, above a certain level (about IQ
200), childhood intelligence correlates with adult success. The problem is
that adults of average intelligence can't agree on how success should be
measured*.

5 fails for the same reason.

*For example, the average person recognizes Einstein as a genius not because 
they are
awed by his theories of general relativity, but because other people
have said so. If you just read his papers (without understanding their great 
insights) and knew that he never learned to drive a car, you might conclude 
differently.
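
To make the qualification above concrete, the acceptance test it implies
looks roughly like the following sketch (the agent interface and the
StaticAgent example are illustrative assumptions; nothing here supplies
the missing ingredient, an agent whose self-modification actually raises
its own score):

# Sketch of the acceptance test implied by the RSI challenge: the original
# agent chooses the intelligence test and produces a successor, and the
# claim qualifies only if each successor strictly outscores its parent on
# that test.  The interface (choose_test, create_successor) is assumed for
# illustration, not an existing API.

def qualifies_as_rsi(agent, generations=3):
    current = agent
    for _ in range(generations):
        test = current.choose_test()            # test chosen by the original
        successor = current.create_successor()  # self-modified copy
        if test(successor) <= test(current):    # must strictly improve
            return False
        current = successor
    return True

class StaticAgent:
    """Degenerate example: its 'successor' is an unchanged copy, so it fails."""
    def choose_test(self):
        return lambda a: a.score()
    def create_successor(self):
        return StaticAgent()
    def score(self):
        return 1.0

if __name__ == "__main__":
    print(qualifies_as_rsi(StaticAgent()))      # False: no improvement, no RSI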

 -- Matt Mahoney, [EMAIL PROTECTED]





Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-24 Thread John LaMuth

- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, August 24, 2008 2:46 PM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)


 I have challenged this list as well as the singularity and SL4 lists to come 
up with an example of a mathematical, software, biological, or physical example 
of RSI, or at least a plausible argument that one could be created, and nobody 
has. To qualify, an agent has to modify itself or create a more intelligent 
copy of itself according to an intelligence test chosen by the original. The 
following are not examples of RSI:
 
 1. Evolution of life, including humans.
 2. Emergence of language, culture, writing, communication technology, and 
 computers.

 -- Matt Mahoney, [EMAIL PROTECTED]
 
###
*

Matt

Where have you been for the last 2 months ??

I had been talking then about my 2 US Patents for ethical/friendly AI
along lines of a recursive simulation targeting language (topic 2) above.

This language agent employs feedback loops and LTM to increase comprehension 
and accuracy
(and BTW - resolves the ethical safeguard problems for AI) ...

No-one yet has proven me wrong ?? Howsabout YOU ???

More at
www.angelfire.com/rnb/fairhaven/specs.html


John LaMuth

www.ethicalvalues.com 







Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-23 Thread William Pearson
2008/8/23 Matt Mahoney [EMAIL PROTECTED]:
 Valentina Poletti [EMAIL PROTECTED] wrote:
 I was wondering why no-one had brought up the information-theoretic aspect 
 of this yet.

 It has been studied. For example, Hutter proved that the optimal strategy of 
 a rational goal seeking agent in an unknown computable environment is AIXI: 
 to guess that the environment is simulated by the shortest program consistent 
 with observation so far [1].

By my understanding, I would qualify this as: Hutter proved that *one of
the* optimal strategies of a rational, error-free, goal-seeking agent,
which has no impact on the environment beyond its explicit output, in an
unknown computable environment is AIXI: to guess that the environment is
simulated by the shortest program consistent with observation so far.
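
Schematically (notation only loosely following the AIXI reference [1]
quoted above, which has the precise statement), the strategy being
qualified here is

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
         [ r_k + \cdots + r_m ]
         \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where the a_i, o_i, r_i are actions, observations and rewards, U is a
universal monotone Turing machine, \ell(q) is the length of program q, and
m is the horizon. The 2^{-\ell(q)} weighting is the formal version of
"prefer the shortest programs consistent with the history so far", and the
sum over all programs is what makes the strategy incomputable.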


  Will Pearson




Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-23 Thread Jim Bromer
On Sat, Aug 23, 2008 at 7:00 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/8/23 Matt Mahoney [EMAIL PROTECTED]:
 Valentina Poletti [EMAIL PROTECTED] wrote:
 I was wondering why no-one had brought up the information-theoretic aspect 
 of this yet.

 It has been studied. For example, Hutter proved that the optimal strategy of 
 a rational goal seeking agent in an unknown computable environment is AIXI: 
 to guess that the environment is simulated by the shortest program 
 consistent with observation so far [1].

 By my understanding, I would qualify this as Hutter proved that the
 *one of the* optimal strategies of a rational error-free goal seeking
 agent, which has no impact on the environment beyond its explicit
 output, in an unknown computable environment is AIXI: to guess that
 the environment is simulated by the shortest program consistent with
 observation so far
  Will Pearson

I think the question of the mathematics or quasi-mathematics of
algorithmic theory would be better studied using a more general machine
intelligence kind of approach. The Hutter-Solomonoff approach of
Algorithmic Information Theory looks to me like it is too narrow and
lacking a fundamental ground against which theories can be tested, but I
don't know for sure because I could never find a sound basis to use to
study the theory.

I just found a Ray Solomonoff web site and he has a couple of links to
lectures on it.
http://www.idsia.ch/~juergen/ray.html

Jim Bromer




Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-23 Thread Eric Burton
These have profound impacts on AGI design. First, AIXI is (provably) not 
computable,
which means there is no easy shortcut to AGI. Second, universal intelligence 
is not
computable because it requires testing in an infinite number of environments. 
Since
there is no other well accepted test of intelligence above human level, it 
casts doubt on
the main premise of the singularity: that if humans can create agents with 
greater than
human intelligence, then so can they.

I don't know for sure that these statements logically follow from one
another. The brain probably contains a collection of kludges for
intractably hard tasks, much like wine 1.0 is probably still mostly
stubs.

Higher intelligence bootstrapping itself has already been proven on
Earth. Presumably it can happen in a simulation space as well, right?

Eric B

On 8/23/08, Jim Bromer [EMAIL PROTECTED] wrote:
 On Sat, Aug 23, 2008 at 7:00 AM, William Pearson [EMAIL PROTECTED]
 wrote:
 2008/8/23 Matt Mahoney [EMAIL PROTECTED]:
 Valentina Poletti [EMAIL PROTECTED] wrote:
 I was wondering why no-one had brought up the information-theoretic
 aspect of this yet.

 It has been studied. For example, Hutter proved that the optimal strategy
 of a rational goal seeking agent in an unknown computable environment is
 AIXI: to guess that the environment is simulated by the shortest program
 consistent with observation so far [1].

 By my understanding, I would qualify this as Hutter proved that the
 *one of the* optimal strategies of a rational error-free goal seeking
 agent, which has no impact on the environment beyond its explicit
 output, in an unknown computable environment is AIXI: to guess that
 the environment is simulated by the shortest program consistent with
 observation so far
  Will Pearson

 I think the question of the mathematics or quasi mathematics of
 algorithmic theory would be better studied using a more general
 machine intelligence kind of approach.  The Hutter Solomonoff approach
 of Algorithmic Information Theory looks to me like it is too narrow
 and lacking a fundamental ground against which theories can be tested
 but I don't know for sure because I could never find a sound basis to
 use to study the theory.

 I just found a Ray Solomonoff's web site and he has a couple of links
 to lectures on it.
 http://www.idsia.ch/~juergen/ray.html

 Jim Bromer




