Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Mike Tintner

  Dave Hart: MT: Sorry, I forgot to ask for what I most wanted to know - what 
form of RSI in any specific areas has been considered? 

  To quote Charles Babbage, "I am not able rightly to apprehend the kind of 
confusion of ideas that could provoke such a question."

  The best we can hope for is that we participate in the construction and 
guidance of future AGIs such that they are able to, eventually, invent, perform 
and carefully guide RSI (and, of course, do so safely every single step of the 
way without exception).

  Dave,

  On the contrary, it's an important question. If an agent is to self-improve 
and keep self-improving, it has to start somewhere - in some domain of 
knowledge, or some technique/technology of problem-solving, or something else. 
Maths, perhaps, or maths theorems? Have you or anyone else ever thought about 
where, and how? (It sounds like the answer is no.) RSI is a very important 
concept for AGI - I'm just asking whether the concept has ever been examined 
with the slightest grounding in reality, or merely pursued as a logical 
conceit.

  The question is extremely important because as soon as you actually examine 
it, something very important emerges - the systemic interconnectedness of the 
whole of culture, the whole of technology, and the whole of an individual's 
various bodies of knowledge - and you start to see why evolution of any kind in 
any area of biology, society, technology or culture is such a difficult and 
complicated business. RSI strikes me as a last-century, local-minded concept, 
not one of this century, in which we are becoming aware of the global 
interconnectedness and interdependence of all systems.




Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Ben Goertzel
About recursive self-improvement ... yes, I have thought a lot about it, but
don't have time to write a huge discourse on it here

One point is that if you have a system with N interconnected modules, you
can approach RSI by having the system separately think about how to improve
each module.  I.e. if there are modules A1, A2,..., AN ... then you can for
instance hold A1,...,A(N-1) constant while you think about how to improve
AN.  One can then iterate through all the modules and improve them in
sequence.   (Note that the modules are then doing the improving of each
other.)
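
As a minimal sketch of that loop, in Python, with hypothetical evaluate and
propose_variant hooks (placeholders, not actual OpenCog interfaces):

def improve_system(modules, evaluate, propose_variant, rounds=10):
    """Improve each module in turn while holding the others constant.

    evaluate scores the whole system against its goals; propose_variant
    suggests a modified version of module i. Both are placeholders here.
    """
    best_score = evaluate(modules)
    for _ in range(rounds):
        for i in range(len(modules)):              # iterate over A1..AN
            candidate = propose_variant(modules, i)
            trial = list(modules)
            trial[i] = candidate                   # only module i changes
            score = evaluate(trial)
            if score > best_score:                 # keep the change only if it helps
                modules, best_score = trial, score
    return modules, best_score

Note that evaluate and propose_variant would themselves be implemented by the
system's own modules, which is what makes this self-improvement rather than
improvement from outside.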

What algorithms are used for the improving itself?

There is the evolutionary approach: to improve module AN, just make an
ensemble of M systems ... all of which have the same code for A1,...,A(N-1)
but different code for AN.   Then evolve this ensemble of varying artificial
minds using GP or MOSES or some such.
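
A toy version of that ensemble search in Python (placeholder fitness and
mutate functions; nothing like MOSES's actual representation):

import random

def evolve_module(base_modules, idx, seed_variants, fitness, mutate,
                  population=20, generations=50):
    """Evolve versions of module idx while all other modules stay fixed."""
    def score(candidate):
        system = list(base_modules)
        system[idx] = candidate        # ensemble members differ only in AN
        return fitness(system)

    pool = list(seed_variants)
    for _ in range(generations):
        pool.sort(key=score, reverse=True)
        survivors = pool[:max(2, population // 2)]
        # Refill the population with mutated copies of the better candidates.
        pool = survivors + [mutate(random.choice(survivors))
                            for _ in range(population - len(survivors))]
    return max(pool, key=score)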

And then there is the probabilistic logic approach: seek rigorous
probability bounds on the odds that system goals will be better fulfilled if
AN is replaced by some candidate replacement AN'.
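
A crude way to cash that out is Monte Carlo estimation with a Hoeffding bound
(assuming goal-fulfillment scores in [0, 1]; this is only an illustration, not
PLN's actual machinery):

import math

def improvement_lower_bound(score_with_AN, score_with_ANprime, tasks, delta=0.05):
    """Lower-bound, with confidence 1 - delta, the mean gain in goal
    fulfillment from replacing AN with AN'. Per-task scores lie in [0, 1],
    so per-task differences lie in [-1, 1]."""
    diffs = [score_with_ANprime(t) - score_with_AN(t) for t in tasks]
    mean_gain = sum(diffs) / len(diffs)
    # Hoeffding: P(|empirical mean - true mean| >= eps) <= 2*exp(-2*n*eps^2 / range^2),
    # with range = 2 here, so eps = 2 * sqrt(ln(2/delta) / (2*n)).
    eps = 2.0 * math.sqrt(math.log(2.0 / delta) / (2.0 * len(tasks)))
    return mean_gain - eps

# Accept the replacement only if the lower bound comes out positive.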

All this requires that the system's modules be represented in some language
that is easily comprehensible to (hence tractably modifiable by) the system
itself.  OpenCog doesn't take this approach explicitly right now, but we
know how to make it do so.  Simply make MindAgents in LISP or Combo rather
than C++.  There's no strong reason not to do this ... except that Combo is
slow right now (recently benchmarked at 1/3 the speed of Lua), and we
haven't dealt with the foreign-function interface stuff needed to plug in
LISP MindAgents (but that's probably not extremely hard).   We have done
some experiments before expressing, for instance, a simplistic PLN deduction
MindAgent in Combo.

In short the OpenCogPrime architecture explicitly supports a tractable path
to recursive self-modification.

But, notably, one would have to specifically switch this feature on --
it's not going to start doing RSI unbeknownst to us programmers.

And the problem of predicting where the trajectory of RSI will end up is a
different one ... I've been working on some theory in that regard (and will
post something on the topic w/ in the next couple weeks) but it's still
fairly speculative...

-- Ben G

On Fri, Aug 29, 2008 at 6:59 AM, Mike Tintner [EMAIL PROTECTED] wrote:



 Dave Hart: MT:Sorry, I forgot to ask for what I most wanted to know - what
 form of RSI in any specific areas has been considered?

 To quote Charles Babbage, I am not able rightly to apprehend the kind of
 confusion of ideas that could provoke such a question.

 The best we can hope for is that we participate in the construction and
 guidance of future AGIs such they they are able to, eventually, invent,
 perform and carefully guide RSI (and, of course, do so safely every single
 step of the way without exception).
 Dave,

 On the contrary, it's an important question. If an agent is to self-improve
 and keep self-improving, it has to start somewhere - in some domain of
 knowledge, or some technique/technology of problem-solving...or something.
 Maths perhaps or maths theorems.?Have you or anyone else ever thought about
 where, and how? (It sounds like the answer is, no).  RSI is for AGI a
 v.important concept - I'm just asking whether the concept has ever been
 examined with the slightest grounding in reality, or merely pursued as a
 logical conceit..

 The question is extremely important because as soon as you actually examine
 it, something v. important emerges - the systemic interconectedness of the
 whole of culture, and the whole of technology, and the whole of an
 individual's various bodies of knowledge, and you start to see why evolution
 of any kind in any area of biology or society, technology or culture is such
 a difficult and complicated business. RSI strikes me as a last-century,
 local-minded concept, not one of this century where we are becoming aware of
 the global interconnectedness and interdependence of all systems.





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Mark Waser

Hi Terren,

Obviously you need to complicate your original statement "I believe that 
ethics is *entirely* driven by what is best evolutionarily..." in such a 
way that we don't derive ethics from parasites.


Saying that ethics is entirely driven by evolution is NOT the same as saying 
that evolution always results in ethics.  Ethics is 
computationally/cognitively expensive to successfully implement (because a 
stupid implementation gets exploited to death).  There are many evolutionary 
niches that won't support that expense, and the successful entities in those 
niches won't be ethical.  Parasites are a prototypical/archetypal example of 
such a niche, since they tend to be degeneratively streamlined to the point of 
being stripped down to virtually nothing except that which is necessary for 
their parasitism.  Effectively, they are single-goal entities -- the single 
most dangerous type of entity possible.



You did that by invoking social behavior - parasites are not social beings


I claim that ethics is nothing *but* social behavior.

So from there you need to identify how evolution operates in social groups 
in such a way that you can derive ethics.


OK.  How about this . . . . Ethics is that behavior that, when shown by you, 
makes me believe that I should facilitate your survival.  Obviously, it is 
then to your (evolutionary) benefit to behave ethically.


As Matt alluded to before, would you agree that ethics is the result of 
group selection? In other words, that human collectives with certain 
taboos make the group as a whole more likely to persist?


Matt is decades out of date and needs to catch up on his reading.

Ethics is *NOT* the result of group selection.  The *ethical evaluation of a 
given action* is a meme and is driven by the same social/group forces as any 
other meme.  Rational memes, when adopted by a group, can enhance group 
survival but . . . . there are also mechanisms by which seemingly irrational 
memes can also enhance survival indirectly, in *exactly* the same fashion as 
the seemingly irrational tail displays of peacocks facilitate their group's 
survival by identifying the fittest individuals.  Note that it all depends 
upon circumstances . . . .


Ethics is first and foremost what society wants you to do.  But society 
can't be too pushy in its demands or individuals will defect and society 
will break down.  So, ethics turns into a matter of determining what is the 
behavior that is best for society (and thus the individual) without unduly 
burdening the individual (which would promote defection, cheating, etc.). 
This behavior clearly differs based upon circumstances but, equally clearly, 
should be able to be derived from a reasonably small set of rules that 
*will* be context dependent.  Marc Hauser has done a lot of research, and 
human morality seems to be designed exactly that way (in terms of how it 
varies across societies, as if it is based upon fairly simple rules with a 
small number of variables/variable settings).  I highly recommend his 
writings (and being familiar with them is pretty much a necessity if you 
want to have a decent advanced/current scientific discussion of ethics and 
morals).


   Mark

- Original Message - 
From: Terren Suydam [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 10:54 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))





Hi Mark,

Obviously you need to complicated your original statement I believe that 
ethics is *entirely* driven by what is best evolutionarily... in such a 
way that we don't derive ethics from parasites. You did that by invoking 
social behavior - parasites are not social beings.


So from there you need to identify how evolution operates in social groups 
in such a way that you can derive ethics. As Matt alluded to before, would 
you agree that ethics is the result of group selection? In other words, 
that human collectives with certain taboos make the group as a whole more 
likely to persist?


Terren


--- On Thu, 8/28/08, Mark Waser [EMAIL PROTECTED] wrote:


From: Mark Waser [EMAIL PROTECTED]
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI 
(was Re: [agi] The Necessity of Embodiment))

To: agi@v2.listbox.com
Date: Thursday, August 28, 2008, 9:21 PM
Parasites are very successful at surviving but they
don't have other
goals.  Try being parasitic *and* succeeding at goals other
than survival.
I think you'll find that your parasitic ways will
rapidly get in the way of
your other goals the second that you need help (or even
non-interference)
from others.

- Original Message - 
From: Terren Suydam [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 5:03 PM
Subject: Re: AGI goals (was Re: Information theoretic
approaches to AGI (was
Re: [agi] The Necessity of Embodiment))



 --- On Thu, 8/28/08, Mark Waser
[EMAIL PROTECTED] wrote:
 Actually, I *do* 

Re: [agi] How Would You Design a Play Machine?

2008-08-29 Thread Terren Suydam

--- On Fri, 8/29/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
 I don't see why an un-embodied system couldn't
 successfully use the
 concept of self in its models. It's just another
 concept, except that
 it's linked to real features of the system.

To an unembodied agent, the "concept" of self is indistinguishable from any other 
concept it works with. I use "concept" in quotes because to the unembodied 
agent, it is not a concept at all, but merely a symbol with no semantic context 
attached. All such an agent can do is perform operations on ungrounded symbols 
- at best, the result of which can appear to be intelligent within some domain 
(e.g., a chess program).

 Even though this particular
 AGI never
 heard about any of those other tools being used for cutting
 bread (and
 is not self-aware in any sense), it still can (when asked
 for advice)
 make a reasonable suggestion to try the T2
 (because of the
 similarity) = coming up with a novel idea 
 demonstrating general
 intelligence.

Sounds like magic to me. You're taking something that we humans can do and 
sticking it in as a black box into a hugely simplified agent in a way that 
imparts no understanding about how we do it.  Maybe you left that part out for 
brevity - care to elaborate?

Terren


  




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Terren Suydam

--- On Fri, 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
 Saying that ethics is entirely driven by evolution is NOT
 the same as saying 
 that evolution always results in ethics.  Ethics is 
 computationally/cognitively expensive to successfully
 implement (because a 
 stupid implementation gets exploited to death).  There are
 many evolutionary 
 niches that won't support that expense and the
 successful entities in those 
 niches won't be ethical.  Parasites are a
 prototypical/archetypal example of 
 such a niche since they tend to degeneratively streamlined
 to the point of 
 being stripped down to virtually nothing except that which
 is necessary for 
 their parasitism.  Effectively, they are single goal
 entities -- the single 
 most dangerous type of entity possible.

Works for me. Just wanted to point out that saying ethics is entirely driven 
by evolution is not enough to communicate with precision what you mean by that.
 
 OK.  How about this . . . . Ethics is that behavior that,
 when shown by you, 
 makes me believe that I should facilitate your survival. 
 Obviously, it is 
 then to your (evolutionary) benefit to behave ethically.

Ethics can't be explained simply by examining interactions between individuals. 
It's an emergent dynamic that requires explanation at the group level. It's a 
set of culture-wide rules and taboos - how did they get there?
 
 Matt is decades out of date and needs to catch up on his
 reading.

Really? I must be out of date too then, since I agree with his explanation of 
ethics. I haven't read Hauser yet though, so maybe you're right.
 
 Ethics is *NOT* the result of group selection.  The
 *ethical evaluation of a 
 given action* is a meme and driven by the same social/group
 forces as any 
 other meme.  Rational memes when adopted by a group can
 enhance group 
 survival but . . . . there are also mechanisms by which
 seemingly irrational 
 memes can also enhance survival indirectly in *exactly* the
 same fashion as 
 the seemingly irrational tail displays of
 peacocks facilitates their group 
 survival by identifying the fittest individuals.  Note that
 it all depends 
 upon circumstances . . . .
 
 Ethics is first and foremost what society wants you to do. 
 But, society 
 can't be too pushy in it's demands or individuals
 will defect and society 
 will break down.  So, ethics turns into a matter of
 determining what is the 
 behavior that is best for society (and thus the individual)
 without unduly 
 burdening the individual (which would promote defection,
 cheating, etc.). 
 This behavior clearly differs based upon circumstances but,
 equally clearly, 
 should be able to be derived from a reasonably small set of
 rules that 
 *will* be context dependent.  Marc Hauser has done a lot of
 research and 
 human morality seems to be designed exactly that way (in
 terms of how it 
 varies across societies as if it is based upon fairly
 simple rules with a 
 small number of variables/variable settings.  I highly
 recommend his 
 writings (and being familiar with them is pretty much a
 necessity if you 
 want to have a decent advanced/current scientific
 discussion of ethics and 
 morals).
 
 Mark

I fail to see how your above explanation is anything but an elaboration of the 
idea that ethics is due to group selection. The following statements all 
support it: 
 - "memes [rational or otherwise] when adopted by a group can enhance group 
survival"
 - "Ethics is first and foremost what society wants you to do."
 - "ethics turns into a matter of determining what is the behavior that is best 
for society"

Terren


  




Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread Eric Burton
A successful AGI should have n methods of data-mining its experience
for knowledge, I think. Whether it should have n ways of generating those
methods, or n sets of ways to generate ways of generating those methods,
etc., I don't know.

On 8/28/08, j.k. [EMAIL PROTECTED] wrote:
 On 08/28/2008 04:47 PM, Matt Mahoney wrote:
 The premise is that if humans can create agents with above human
 intelligence, then so can they. What I am questioning is whether agents at
 any intelligence level can do this. I don't believe that agents at any
 level can recognize higher intelligence, and therefore cannot test their
 creations.

 The premise is not necessary to arrive at greater than human
 intelligence. If a human can create an agent of equal intelligence, it
 will rapidly become more intelligent (in practical terms) if advances in
 computing technologies continue to occur.

 An AGI with an intelligence the equivalent of a 99.-percentile human
 might be creatable, recognizable and testable by a human (or group of
 humans) of comparable intelligence. That same AGI at some later point in
 time, doing nothing differently except running 31 million times faster,
 will accomplish one genius-year of work every second. I would argue that
 by any sensible definition of intelligence, we would have a
 greater-than-human intelligence that was not created by a being of
 lesser intelligence.






Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Mark Waser

OK.  How about this . . . . Ethics is that behavior that,
when shown by you,
makes me believe that I should facilitate your survival.
Obviously, it is
then to your (evolutionary) benefit to behave ethically.


Ethics can't be explained simply by examining interactions between 
individuals. It's an emergent dynamic that requires explanation at the 
group level. It's a set of culture-wide rules and taboos - how did they 
get there?


I wasn't explaining ethics with that statement.  I was identifying how 
evolution operates in social groups in such a way that I can derive ethics 
(in direct response to your question).


Ethics is a system.  The *definition of ethical behavior* for a given group 
is an emergent dynamic that requires explanation at the group level, 
because it includes what the group believes and values -- but ethics (the 
system) does not require belief history (except insofar as it affects 
current belief).  History, circumstances, and understanding why a culture 
has the rules and taboos that it has are certainly useful for deriving 
more effective rules and taboos -- but they don't alter the underlying 
system, which is quite simple . . . . being perceived as helpful generally 
improves your survival chances, being perceived as harmful generally 
decreases your survival chances (unless you are able to overpower the 
effect).


Really? I must be out of date too then, since I agree with his explanation 
of ethics. I haven't read Hauser yet though, so maybe you're right.


The specific phrase you cited was "human collectives with certain taboos 
make the group as a whole more likely to persist."  The correct term of art 
for this is "group selection," and it has pretty much *NOT* been supported by 
scientific evidence and has fallen out of favor.


Matt also tends to conflate a number of ideas which should be separate which 
you seem to be doing as well.  There need to be distinctions between ethical 
systems, ethical rules, cultural variables, and evaluations of ethical 
behavior within a specific cultural context (i.e. the results of the system 
given certain rules -- which at the first-level seem to be reasonably 
standard -- with certain cultural variables as input).  Hauser's work 
identifies some of the common first-level rules and how cultural variables 
affect the results of those rules (and the derivation of secondary rules). 
It's good detailed, experiment-based stuff rather than the vague hand-waving 
that you're getting from armchair philosophers.


I fail to see how your above explanation is anything but an elaboration of 
the idea that ethics is due to group selection. The following statements 
all support it:
- memes [rational or otherwise] when adopted by a group can enhance group 
survival

- Ethics is first and foremost what society wants you to do.
- ethics turns into a matter of determining what is the behavior that is 
best for society


I think we're stumbling over your use of the term "group selection" and 
what you mean by "ethics is due to group selection."  Yes, the group 
selects the cultural variables that affect the results of the common 
ethical rules.  But "group selection" as a term of art in evolution 
generally means that the group itself is being selected or co-evolved --  
in this case, presumably by ethics -- which is *NOT* correct by current 
scientific understanding.  The first phrase that you quoted was intended to 
point out that both good and bad memes can positively affect survival -- so 
co-evolution doesn't work.  The second phrase that you quoted deals with the 
results of the system applying common ethical rules with cultural variables. 
The third phrase that you quoted talks about determining what the best 
cultural variables (and maybe secondary rules) are for a given set of 
circumstances -- and should have been better phrased as "Improving ethical 
evaluations turns into a matter of determining . . ."







Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Eric Burton
I remember Richard Dawkins saying that group selection is a lie. Maybe
we should look past it now? It seems like a problem.

On 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
 OK.  How about this . . . . Ethics is that behavior that,
 when shown by you,
 makes me believe that I should facilitate your survival.
 Obviously, it is
 then to your (evolutionary) benefit to behave ethically.

 Ethics can't be explained simply by examining interactions between
 individuals. It's an emergent dynamic that requires explanation at the
 group level. It's a set of culture-wide rules and taboos - how did they
 get there?

 I wasn't explaining ethics with that statement.  I was identifying how
 evolution operates in social groups in such a way that I can derive ethics
 (in direct response to your question).

 Ethics is a system.  The *definition of ethical behavior* for a given group
 is an emergent dynamic that requires explanation at the group level
 because it includes what the group believes and values -- but ethics (the
 system) does not require belief history (except insofar as it affects
 current belief).  History, circumstances, and understanding what a culture
 has the rules and taboos that they have is certainly useful for deriving
 more effective rules and taboos -- but it doesn't alter the underlying
 system which is quite simple . . . . being perceived as helpful generally
 improves your survival chances, being perceived as harmful generally
 decreases your survival chances (unless you are able to overpower the
 effect).

 Really? I must be out of date too then, since I agree with his explanation

 of ethics. I haven't read Hauser yet though, so maybe you're right.

 The specific phrase you cited was human collectives with certain taboos
 make the group as a whole more likely to persist.  The correct term of art
 for this is group selection and it has pretty much *NOT* been supported by
 scientific evidence and has fallen out of favor.

 Matt also tends to conflate a number of ideas which should be separate which
 you seem to be doing as well.  There need to be distinctions between ethical
 systems, ethical rules, cultural variables, and evaluations of ethical
 behavior within a specific cultural context (i.e. the results of the system
 given certain rules -- which at the first-level seem to be reasonably
 standard -- with certain cultural variables as input).  Hauser's work
 identifies some of the common first-level rules and how cultural variables
 affect the results of those rules (and the derivation of secondary rules).
 It's good detailed, experiment-based stuff rather than the vague hand-waving
 that you're getting from armchair philosophers.

 I fail to see how your above explanation is anything but an elaboration of

 the idea that ethics is due to group selection. The following statements
 all support it:
 - memes [rational or otherwise] when adopted by a group can enhance group

 survival
 - Ethics is first and foremost what society wants you to do.
 - ethics turns into a matter of determining what is the behavior that is
 best for society

 I think we're stumbling over your use of the term group selection  and
 what you mean by ethics is due to group selection.  Yes, the group
 selects the cultural variables that affect the results of the common
 ethical rules.  But group selection as a term of art in evolution
 generally meaning that the group itself is being selected or co-evolved --
 in this case, presumably by ethics -- which is *NOT* correct by current
 scientific understanding.  The first phrase that you quoted was intended to
 point out that both good and bad memes can positively affect survival -- so
 co-evolution doesn't work.  The second phrase that you quoted deals with the
 results of the system applying common ethical rules with cultural variables.
 The third phrase that you quoted talks about determining what the best
 cultural variables (and maybe secondary rules) are for a given set of
 circumstances -- and should have been better phrased as Improving ethical
 evaluations turns into a matter of determining . . . 






Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread Abram Demski
I like that argument.

Also, it is clear that humans can invent better algorithms to do
specialized things. Even if an AGI couldn't think up better versions
of itself, it would be able to do the equivalent of equipping itself
with fancy calculators.

--Abram

On Thu, Aug 28, 2008 at 9:04 PM, j.k. [EMAIL PROTECTED] wrote:
 On 08/28/2008 04:47 PM, Matt Mahoney wrote:

 The premise is that if humans can create agents with above human
 intelligence, then so can they. What I am questioning is whether agents at
 any intelligence level can do this. I don't believe that agents at any level
 can recognize higher intelligence, and therefore cannot test their
 creations.

 The premise is not necessary to arrive at greater than human intelligence.
 If a human can create an agent of equal intelligence, it will rapidly become
 more intelligent (in practical terms) if advances in computing technologies
 continue to occur.

 An AGI with an intelligence the equivalent of a 99.-percentile human
 might be creatable, recognizable and testable by a human (or group of
 humans) of comparable intelligence. That same AGI at some later point in
 time, doing nothing differently except running 31 million times faster, will
 accomplish one genius-year of work every second. I would argue that by any
 sensible definition of intelligence, we would have a greater-than-human
 intelligence that was not created by a being of lesser intelligence.






Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Mark Waser
Group selection (as used as the term of art in evolutionary biology) does 
not seem to be experimentally supported (and there have been a lot of recent 
experiments looking for such an effect).


It would be nice if people could let the idea drop unless there is actually 
some proof for it other than "it seems to make sense that . . . ." 


- Original Message - 
From: Eric Burton [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, August 29, 2008 12:56 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




I remember Richard Dawkins saying that group selection is a lie. Maybe
we shoud look past it now? It seems like a problem.



Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Charles Hixson
Dawkins tends to see a truth, and then overstate it.  What he says 
isn't usually exactly wrong, so much as one-sided.  This may be an 
exception.


Some meanings of group selection don't appear to map onto reality.  
Others map very weakly.  Some have reasonable explanatory power.  If you 
don't define with precision which meaning you are using, then you invite 
confusion.  As such, it's a term that it's better not to use.


But I wouldn't usually call it a lie.  Merely a mistake.  The exact 
nature of the mistake depends on precisely what you mean, and the context 
within which you are using it.  Often it's merely a signal that you are 
confused and don't KNOW precisely what you are talking about, but merely 
the general ball park within which you believe it lies.  Only rarely is 
it intentionally used to confuse things with malice intended.  In that 
final case the term "lie" is appropriate.  Otherwise it's merely 
inadvisable usage.


Eric Burton wrote:

I remember Richard Dawkins saying that group selection is a lie. Maybe
we shoud look past it now? It seems like a problem.

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Matt Mahoney
Group selection is not dead, just weaker than individual selection. Altruism in 
many species is evidence for its existence. 
http://en.wikipedia.org/wiki/Group_selection

In any case, evolution of culture and ethics in humans is primarily memetic, 
not genetic. Taboos against nudity are nearly universal among cultures with 
language, yet unique to Homo sapiens.

You might believe that certain practices are intrinsically good or bad, not the 
result of group selection. Fine. That is how your beliefs are supposed to work.

 -- Matt Mahoney, [EMAIL PROTECTED]




Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.

On 08/29/2008 10:09 AM, Abram Demski wrote:

I like that argument.

Also, it is clear that humans can invent better algorithms to do
specialized things. Even if an AGI couldn't think up better versions
of itself, it would be able to do the equivalent of equipping itself
with fancy calculators.

--Abram

   


Exactly. A better transistor or a lower complexity algorithm for a 
computational bottleneck in an AGI (and implementing such) is a 
self-improvement that improves the AGI's ability to make further 
improvements -- i.e., RSI.


Likewise, it is not inconceivable that we will soon be able to improve 
human intelligence by means such as increasing neural signaling speed 
(assuming the increase doesn't have too many negative effects, which it 
might) and improving other *individual* aspects of brain biology. This 
would be RSI, too.






[agi] Frame Semantics

2008-08-29 Thread Mike Tintner
Advances in Frame Semantics:
Corpus and Computational Approaches and Insights

Theme Session to be held at ICLC 11, Berkeley, CA
Date: July 28 - August 3, 2009
Organizer: Miriam R. L. Petruck

Theme Session Description:

Fillmore (1975) introduced the notion of a frame into linguistics over 
thirty years ago.  As a cognitive structuring device used in the service 
of understanding, the semantic frame, parts of which are indexed by words 
(Fillmore 1985), is at the heart of Frame Semantics.  While researchers 
have appealed to Frame Semantics to provide accounts for various lexical, 
syntactic, and semantic phenomena in a range of languages (e.g. Ostman 
2000, Petruck 1995, Lambrecht 1984), its most highly developed 
instantiation is found in FrameNet (http://framenet.icsi.berkeley.edu). An 
ongoing research project in computational lexicography, the FrameNet 
database provides for a substantial portion of the vocabulary of 
contemporary English, a body of semantically and syntactically annotated 
sentences from which reliable information can be reported on the valences 
or combinatorial possibilities of each lexical item.

FrameNet has generated great interest in the Natural Language Processing 
community, resulting in new efforts for lexicon building and computational 
semantics. Advances in technology and the availability of large corpora 
have facilitated developing FrameNet lexical resources for languages other 
than English (with Spanish, Japanese, and German the most advanced, and 
Hebrew, Italian, Slovenian and Swedish at early stages). These projects 
(necessarily) also test FrameNet's implicit claim about representing 
conceptual structure, rather than building an application driven 
structured organization of the lexicon of contemporary English. At the 
same time, FrameNet has inspired research on automatically induced 
semantic lexicons (Green and Dorr 2004, Pado and Lapata 2005) and 
automatic semantic role labeling (ASRL), or "semantic parsing" (Gildea 
and Jurafsky 2002, Thompson et al. 2003, Fleischman and Hovy 2003, 
Litkowski 2004, Baldewein et al. 2004).  Frame Semantics has proven to be 
among the most useful techniques for deep semantic analysis of texts, thus 
contributing to research on information extraction (Mohit and Narayanan 
2003), question answering (Narayanan and Harabagiu 2004, Narayanan and 
Sinha 2005), and automatic reasoning (Scheffczyk et al. 2006, Scheffczyk 
et al., 2007).

In 1999 (at ICLC 6 in Stockholm), researchers began to address cognitive 
aspects of Frame Semantics explicitly in a public forum during a theme 
session on Construction Grammar, the sister theory of Frame Semantics. The 
goal of the 2009 theme session is to bring together researchers in 
cognitive, corpus and computational linguistics to (1) present their work 
using corpus approaches for the development of FrameNet-style lexical 
resources and FrameNet-derived representations for computational 
approaches to semantic processing and (2) share their insights about 
advances in Frame Semantics.  We are particularly interested in work that 
attends to the cognitive linguistic dimension in Frame Semantics.

Submission Procedure

Abstracts must be:
* a maximum of 500 words
* submitted in .pdf format
* received no later than the Sept 30, 2008 deadline
* sent with the title of the paper, name(s) of author(s), affiliation and
   a contact e-mail address
* sent to [EMAIL PROTECTED]

IMPORTANT: Both the theme session proposal itself and the individual 
contributions will undergo independent reviewing by the ICLC program committee.

--





Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread William Pearson
2008/8/29 j.k. [EMAIL PROTECTED]:
 On 08/28/2008 04:47 PM, Matt Mahoney wrote:

 The premise is that if humans can create agents with above human
 intelligence, then so can they. What I am questioning is whether agents at
 any intelligence level can do this. I don't believe that agents at any level
 can recognize higher intelligence, and therefore cannot test their
 creations.

 The premise is not necessary to arrive at greater than human intelligence.
 If a human can create an agent of equal intelligence, it will rapidly become
 more intelligent (in practical terms) if advances in computing technologies
 continue to occur.

 An AGI with an intelligence the equivalent of a 99.-percentile human
 might be creatable, recognizable and testable by a human (or group of
 humans) of comparable intelligence. That same AGI at some later point in
 time, doing nothing differently except running 31 million times faster, will
 accomplish one genius-year of work every second.

Will it? It might be starved for lack of interaction with the world
and other intelligences, and so be a lot less productive than
something working at normal speeds.

Most learning systems aren't constrained in how long it takes them to learn
things by a lack of processing power (AIXI excepted), but by the speed of
running an experiment.

  Will Pearson




Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread Matt Mahoney
It seems that the debate over recursive self improvement depends on what you 
mean by improvement. If you define improvement as intelligence as defined by 
the Turing test, then RSI is not possible because the Turing test does not test 
for superhuman intelligence. If you mean improvement as more memory, faster 
clock speed, more network bandwidth, etc., then yes, I think it is reasonable 
to expect Moore's law to continue after we are all uploaded. If you mean 
improvement in the sense of competitive fitness, then yes, I expect evolution 
to continue, perhaps very rapidly if it is based on a computing substrate other 
than DNA. Whether you can call it self-improvement or whether the result is 
desirable is debatable. We are, after all, pondering the extinction of Homo 
sapiens and replacing it with some unknown species, perhaps gray goo. Will the 
nanobots look back at this as an improvement, the way we view the extinction of 
Homo erectus?

My question is whether RSI is mathematically possible in the context of 
universal intelligence, i.e. expected reward or prediction accuracy over a 
Solomonoff distribution of computable environments. I believe it is possible 
for Turing machines if and only if they have access to true random sources so 
that each generation can create successively more complex test environments to 
evaluate their offspring. But this is troubling because in practice we can 
construct pseudo-random sources that are nearly indistinguishable from truly 
random in polynomial time (but none that are *provably* so).

 -- Matt Mahoney, [EMAIL PROTECTED]




Re: [agi] AGI-09 - Preliminary Call for Papers

2008-08-29 Thread Bill Hibbard

The special rate at the Crowne Plaza does not
apply to the night of Monday, 9 March. If the
post-conference workshops on Monday extend
into the afternoon, it would be useful if the
special rate was available on Monday night.

Thanks,
Bill




Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.

On 08/29/2008 01:29 PM, William Pearson wrote:

2008/8/29 j.k.[EMAIL PROTECTED]:
   

An AGI with an intelligence the equivalent of a 99.-percentile human
might be creatable, recognizable and testable by a human (or group of
humans) of comparable intelligence. That same AGI at some later point in
time, doing nothing differently except running 31 million times faster, will
accomplish one genius-year of work every second.
 


Will it? It might be starved for lack of interaction with the world
and other intelligences, and so be a lot less productive than
something working at normal speeds.

   


Yes, you're right. It doesn't follow that its productivity will 
necessarily scale linearly, but the larger point I was trying to make 
was that it would be much faster and that being much faster would 
represent an improvement that improves its ability to make future 
improvements.


The numbers are unimportant, but I'd argue that even if there were just 
one such human-level AGI running 1 million times normal speed, and even 
if it did require regular interaction just as most humans do, it would 
still be hugely productive and would represent a phase shift in 
intelligence in terms of what it accomplishes. Solving one difficult 
problem is probably not highly parallelizable in general (many problems 
are not parallelizable at all), but solving tens of thousands of such 
problems across many domains over the course of a year or so probably 
is. A human-level AGI running a million times faster could 
simultaneously interact with tens of thousands of scientists at their 
pace, so there is no reason to believe it need be starved for 
interaction to the point that its productivity would be limited to 
near-human levels.
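
As a rough sanity check of the figure quoted above (a sketch of the arithmetic 
only, assuming nothing beyond the ~31-million-fold speedup in the example):

    # Rough arithmetic behind "one genius-year of work every second".
    seconds_per_year = 365.25 * 24 * 60 * 60      # about 31.6 million seconds
    speedup = 31_000_000                          # the factor assumed in the example
    subjective_years_per_second = speedup / seconds_per_year
    print(round(subjective_years_per_second, 2))  # ~0.98, i.e. roughly one year per second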








Re: [agi] AGI-09 - Preliminary Call for Papers

2008-08-29 Thread Ben Goertzel
Hi Bill,

Bruce Klein is the one dealing with this aspect of AGI-09, so I've cc'd this
message to him.

To get a special rate we need to reserve a block of rooms in advance.  So
we'd need to estimate in advance the number of rooms needed for Monday
night, which I'd imagine will be many fewer than needed for the weekend
nights.  But I suppose that, AI wizards that we are, we can handle that task
of statistical estimation ;-) ... last year there were about 100 folks at
AGI-08, and an impressive 60 or so stayed for the post-conference workshop on
futurist-related issues.  But my strong impression is that nearly all
conference participants will leave fairly soon after the final session of
the conference...

ben

On Fri, Aug 29, 2008 at 5:17 PM, Bill Hibbard [EMAIL PROTECTED] wrote:

 The special rate at the Crowne Plaza does not
 apply to the night of Monday, 9 March. If the
 post-conference workshops on Monday extend
 into the afternoon, it would be useful if the
 special rate was available on Monday night.

 Thanks,
 Bill






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Re: [agi] How Would You Design a Play Machine?

2008-08-29 Thread Jiri Jelinek
Terren,

to the unembodied agent, it is not a concept at all, but merely a symbol with 
no semantic context attached

It's an issue when trying to learn from NL only, but you can inject
semantics (critical for grounding) when teaching through a
formal_language[-based interface], get the thinking algorithms working,
and possibly focus on NL-to-formal_language conversions later.

To an unembodied agent, the concept of self is indistinguishable from any 
other concept it works with.

An AGI should be able to use tools (external/internal applications), and
it can learn to view itself (or just some of its modules) as one of its
tools. You can design an interface [possibly just for advanced users] for
mapping learned concepts/actions to the interfaces of the available tools.
Just as it can learn how to use a command-line calculator, it can learn
how to use itself as a tool, and then learn that an alias for that tool is
I/Me. By design, it can also clearly distinguish between using a particular
tool in theory and in practice.
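
A minimal sketch of what such a concept-to-tool mapping could look like
(illustrative only; the names and structure below are invented for this
example, not taken from any existing system):

    from typing import Callable, Dict

    class ToolRegistry:
        """Maps learned concept/action names to concrete tool interfaces."""

        def __init__(self) -> None:
            self.tools: Dict[str, Callable[..., object]] = {}

        def register(self, concept: str, handler: Callable[..., object]) -> None:
            # Bind a learned concept (e.g. "add") to a callable tool interface.
            self.tools[concept] = handler

        def invoke(self, concept: str, *args: object) -> object:
            return self.tools[concept](*args)

    registry = ToolRegistry()
    registry.register("add", lambda a, b: a + b)                   # a calculator-style external tool
    registry.register("I", lambda goal: "planning: " + str(goal))  # the system itself, aliased as a tool
    print(registry.invoke("add", 2, 3))                            # 5
    print(registry.invoke("I", "organize the knowledge base"))     # planning: organize the knowledge base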

 All such an agent can do is perform operations on ungrounded symbols - at 
 best, the result of which can appear to be intelligent within some domain 
 (e.g., a chess program).

You can ground when using semantics-supporting input formats. I don't
see why it would have to be specific to a single domain. You can use
very general data-representation structures and fill them with data from
many domains. You just have to get the KR right (unlike CYC). Easy
to say, I know, but I don't see a good reason why it couldn't (in
principle) work, and I'm working on figuring that out.

 Even though this particular AGI never heard about any of those other tools
 being used for cutting bread (and is not self-aware in any sense), it still
 can (when asked for advice) make a reasonable suggestion to try the T2
 (because of the similarity) = coming up with a novel idea and demonstrating
 general intelligence.

 Sounds like magic to me. You're taking something that we humans can do and 
 sticking it in as a black box into a hugely simplified agent in a way that 
 imparts no understanding about how we do it.  Maybe you left that part out 
 for brevity - care to elaborate?

It must sound like magic if you assume there is no semantic context
attached, but that doesn't have to be the case. With the right teaching
methods, the system gets semantics, can build models, and can apply
knowledge learned from scenario1 to scenario2 in unique ways. What do
the right teaching methods mean? For example, when learning an action
concept (e.g. buy), the system needs to grasp [at least some of] the
roles involved (e.g. seller, buyer, goods, price, ..) and how the
relationships between the role-players change across the relevant stages.
You can design a user-friendly interface for teaching the system in
meaningful ways, so it can later think using queryable models and
understand relationships [and changes] between concepts etc... Sorry
about the brevity (busy schedule).
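
To make the buy example a bit more concrete, here is a rough sketch of the
kind of role/stage frame such a teaching interface might capture (an invented
illustration, not an actual system's knowledge representation):

    # Illustrative action-concept frame for "buy": roles plus how the
    # ownership relations change between stages.
    buy_frame = {
        "action": "buy",
        "roles": ["buyer", "seller", "goods", "price"],
        "stages": {
            "before": {"owner_of_goods": "seller", "holder_of_payment": "buyer"},
            "after":  {"owner_of_goods": "buyer",  "holder_of_payment": "seller"},
        },
    }

    def instantiate(frame, bindings, stage):
        # Resolve role placeholders against concrete role-players for one stage.
        return {relation: bindings[role] for relation, role in frame["stages"][stage].items()}

    bindings = {"buyer": "Alice", "seller": "Bob", "goods": "a book", "price": "$10"}
    print(instantiate(buy_frame, bindings, "before"))  # Bob owns the goods, Alice holds the payment
    print(instantiate(buy_frame, bindings, "after"))   # the relations are reversed after the purchase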

Regards,
Jiri Jelinek

PS: we might be slightly off-topic in this thread.. (?)




Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread William Pearson
2008/8/29 j.k. [EMAIL PROTECTED]:
 On 08/29/2008 01:29 PM, William Pearson wrote:

 2008/8/29 j.k.[EMAIL PROTECTED]:


 An AGI with an intelligence the equivalent of a 99.-percentile human
 might be creatable, recognizable and testable by a human (or group of
 humans) of comparable intelligence. That same AGI at some later point in
 time, doing nothing differently except running 31 million times faster,
 will
 accomplish one genius-year of work every second.


 Will it? It might be starved for lack of interaction with the world
 and other intelligences, and so be a lot less productive than
 something working at normal speeds.



 Yes, you're right. It doesn't follow that its productivity will necessarily
 scale linearly, but the larger point I was trying to make was that it would
 be much faster and that being much faster would represent an improvement
 that improves its ability to make future improvements.

 The numbers are unimportant, but I'd argue that even if there were just one
 such human-level AGI running 1 million times normal speed and even if it did
 require regular interaction just like most humans do, that it would still be
 hugely productive and would represent a phase-shift in intelligence in terms
 of what it accomplishes. Solving one difficult problem is probably not
 highly parallelizable in general (many are not at all parallelizable), but
 solving tens of thousands of such problems across many domains over the
 course of a year or so probably is. The human-level AGI running a million
 times faster could simultaneously interact with tens of thousands of
 scientists at their pace, so there is no reason to believe it need be
 starved for interaction to the point that its productivity would be limited
 to near human levels of productivity.

Only if it had millions of times normal human storage capacity and
memory bandwidth (otherwise it couldn't keep track of all the
conversations), plus sufficient network bandwidth for ten thousand VOIP
calls at once.
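
For a rough sense of the network load involved (my own back-of-the-envelope
figure, assuming uncompressed G.711 voice at about 64 kbit/s per call):

    # Approximate aggregate bandwidth for 10,000 simultaneous VOIP calls,
    # assuming ~64 kbit/s per uncompressed G.711 stream (payload only).
    kbps_per_call = 64
    calls = 10_000
    total_mbps = kbps_per_call * calls / 1000
    print(total_mbps)  # 640.0 Mbit/s, before packet overhead or compression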

We should perhaps clarify what you mean by speed here. The speed of
the transistor is not all of what makes a system useful. It is worth
noting that processor clock speed hasn't gone up appreciably since the
heady days of the 3.8 GHz Pentium 4s in 2005.

Improvements have come from other directions (better memory bandwidth,
better pipelines, and multiple cores). The hard disk is probably what is
holding back current computers at the moment.


  Will Pearson










Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Mike Tintner
Ben,

It looks like what you've thought about is aspects of the information-processing 
side of RSI but not the knowledge side. IOW you have thought about the technical 
side but not about how you progress from one domain of knowledge about the world 
to another, or from one subdomain to another. That's the problem of general 
intelligence which, remember, is all about crossing domains.

The world, and knowledge about the world, are not homoarchic but heterarchic. The 
fact that you know about physics doesn't mean you can automatically learn about 
chemistry, and then about biology. Each substantive domain and knowledge domain 
has its own rules and character. This is what emergence and evolution refer to. 
Even each branch/subdomain of maths and logic (and of most domains) has its own 
rules and character.

And all these different domains have not only to be learned to some extent 
separately and distinctively, but also integrated with each other. Hence it is 
that science is shot through with paradigms, as we try to integrate new, 
unfamiliar domains with old, familiar ones. And those paradigms, like the 
solar-system model for atomic physics, involve analogy and metaphor. This, to 
repeat, is the central problem of GI, which can be defined as creative 
generalization - a problem no one in AGI has yet offered (or, let's be honest, 
has) an idea of how to solve.

Clearly, integrating new domains is a complicated and creative business, not 
simply a mathematical or recursive one. Hence, in part, people are so resistant 
to learning new domains. You may have noticed that AGI-ers are staggeringly 
resistant to learning new domains. They only want to learn certain kinds of 
representation and not others - principally maths/logic/language and 
programming - despite the fact that human culture offers scores of other kinds. 
They only deal with certain kinds of problems (related to the previous domains), 
despite the fact that culture and human life include a vast diversity of other 
problems. In this, they are fairly typical of the human race - everyone has 
resistance to learning new domains, just as organizations have strong resistance 
to joining up with other kinds of organizations. (But AGI-ers, who are supposed 
to believe in *General* Intelligence, should be at least aware and ashamed of 
their narrowness.)

Before you can talk about RSI, you really have to understand these problems of 
crossing and integrating domains (and why people are so resistant - they're not 
just being stupid or prejudiced). And you have to have a global picture of both 
the world of knowledge and the world-to-be-known.  Nobody in AGI does.

If RSI were possible, then you should see some signs of it within human 
society, of humans recursively self-improving - at however small a scale. You 
don't, because of this problem of crossing and integrating domains. It can all 
be done, but laboriously and stumblingly, not in some simple, formulaic way. 
That is culturally a very naive idea.

Even within your own sphere of information technology, I am confident that RSI, 
even if it were for argument's sake possible, would present massive problems of 
having to develop new kinds of software and machine organization to cope with 
the informational and hierarchical explosion - and still interface with other 
existing and continuously changing technologies.




Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Ben Goertzel

On Fri, Aug 29, 2008 at 6:53 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Ben,

It looks like what you've thought about is aspects of the information
processing side of RSI but not the knowledge side. IOW you have thought
about the technical side but not abouthow you progress from one domain of
knowledge about the world to another, or from one subdomain to
another. That's  the problem of general intelligence which, remember, is all
about crossing domains.


Hmmm... it is odd that you make judgments regarding what I have or have not
*thought* about, based on what I choose to write in a brief email.  My goal
in writing emails on this list is not to completely unburden myself of all
my relevant thoughts ... if I did that, I would not have time to do anything
all day but write emails to this list ;-) ... and of course I still would
fail ... these are complex matters and there's a lot to say...

Yes, in that email I described the formal process of RSI and not the general
world-knowledge that an AGI will need in order to effectively perform RSI.

Before rewriting its own code substantially, an AGI will need to get a lot
of practice writing simpler code carrying out a variety of tasks in a
variety of contexts related to the system's own behavior...

But this should naturally happen.  For instance if an AGI needs to learn new
inference control heuristics and inference formulas, that is a sort of
preliminary step to learning new inference algorithms ... which is a
preliminary step to learning new kinds of cognition ... etc.   One can
articulate a series of steps toward progressively greater and greater
self-modification ...

But yes, each of these steps will require diverse knowledge ... but the
gaining of this knowledge is mostly not about RSI particularly; rather, it is
just part of one's overall AGI architecture ... intelligence, as you say,
being all about knowledge gathering, maintenance, creation and enaction.



 Before you can talk about RSI, you really have to understand these problems
 of crossing and integrating domains (and why people are so resistant -
 they're not just being stupid or prejudiced). And you have to have a global
 picture of both the world of knowledge and the world-to-be-known.  Nobody in
 AGI does.


I think I do.  You think I don't.  Oh well.



 If RSI were possible, then you should see some signs of it within human
 society, of humans recursively self-improving - at however small a scale.
 You don't because of this problem of crossing and integrating domains. It
 can all be done, but laboriously and stumblingly not in some simple,
 formulaic way. That is culturally a very naive idea.


Similarly, if space travel were possible, humans would be flying around
unaided by technology from planet to planet, and star to star ;-p

ben





Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Matt Mahoney
Mike Tintner wrote:
You may have noticed that AGI-ers are staggeringly resistant to learning new 
domains. 

Remember, you are dealing with human brains. You can only write into long-term 
memory at a rate of 2 bits per second. :-)

AGI spans just about every field of science, from ethics to quantum mechanics, 
child development to algorithmic information theory, genetics to economics.

 -- Matt Mahoney, [EMAIL PROTECTED]




Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.

On 08/29/2008 03:14 PM, William Pearson wrote:

2008/8/29 j.k.[EMAIL PROTECTED]:

... The human-level AGI running a million
times faster could simultaneously interact with tens of thousands of
scientists at their pace, so there is no reason to believe it need be
starved for interaction to the point that its productivity would be limited
to near human levels of productivity.

 

Only if it had millions of times normal human storage capacity and
memory bandwidth, else it couldn't keep track of all the
conversations, and sufficient bandwidth for ten thousand VOIP calls at
once.
   
And sufficient electricity, etc. There are many other details that would 
have to be spelled out if we were trying to give an exhaustive list of 
every possible requirement. But the point remains that *if* the 
technological advances that we expect to occur actually do occur, then 
there will be greater-than-human intelligence that was created by 
human-level intelligence -- unless one thinks that memory capacity, chip 
design and throughput, disk, system, and network bandwidth, etc., are 
close to as good as they'll ever get. On the contrary, there are more 
promising new technologies on the horizon than one can keep track of 
(not to mention current technologies that can still be improved), which 
makes it extremely unlikely that any of these or the other relevant 
factors are close to practical maximums.

We should perhaps clarify what you mean by speed here? The speed of
the transistor is not all of what makes a system useful. It is worth
noting that processor speed hasn't gone up appreciably from the heady
days of Pentium 4s with 3.8 GHZ in 2005.

Improvements have come from other directions (better memory bandwidth,
better pipelines and multi cores).
I didn't mean that we could drop a 3 THz chip (if that were 
physically possible) onto an existing motherboard and everything would scale 
linearly, or that a better transistor would be the *only* improvement 
that occurs. When I said 31 million times faster, I meant the system 
as a whole would be 31 million times faster at achieving its 
computational goals. This will obviously require many improvements in 
processor design, system architecture, memory, bandwidth, physics and 
materials science, and more, but the scenario I was trying to discuss 
was one in which these sorts of things have occurred.


This is getting quite far off topic from the point I was trying to make 
originally, so I'll bow out of this discussion now.


j.k.




Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Mike Tintner


Matt: AGI spans just about every field of science, from ethics to quantum 
mechanics, child development to algorithmic information theory, genetics to 
economics.


Just so. And every field of the arts. And history. And philosophy. And 
technology. Including social technology. And organizational technology. And 
personal technology. And the physical technologies of sport, dance, sex 
etc. The whole of culture and the world.


No, nobody can be a super Da Vinci, knowing everything and solving every 
problem. But actually every AGI-er will have personal experience of solving 
problems in many different domains as well as their professional ones. And 
they should, I suggest, be able to use and integrate that experience into 
AGI. They should be able to metacognitively relate, say, the problem of 
tidying and organizing a room to the problem of organizing an argument in 
an essay, to the problem of creating an AGI organization, to the problem of 
organizing an investment portfolio, to the problem of organizing a soccer 
team - because that is the business and problem of AGI: crossing and 
integrating domains. Any and all domains. There should be a truly general 
culture. What I see is actually a narrow culture (even if AGI-ers are much 
more broadly educated than most) that only discusses a very limited set of 
problems, which are, in the final analysis, hard to distinguish from those 
of narrow AI - and a culture which refuses to consider any problems outside 
its intellectual/professional comfort zone.




