RE: [agi] Pattern extrapolation as a method requiring limited intelligence

2008-05-23 Thread John G. Rose
The environmental complexities are different. NYC has been there for
hundreds of years; the human brain has been in nature for hundreds of
thousands of years. A manmade environment for an AGI is custom made at the
beginning; we don't just throw it out on the street or into the jungle. It
can start off in a featureless box, with an Ethernet cable going in, that is.

 

From: Derek Zahn [mailto:[EMAIL PROTECTED] 



John Rose:
 
 Which actual world, a natural or manmade?
 
Both, at least up to the present day.  In my opinion (though I know from
your previous post that you don't agree), there isn't a huge difference in
the environmental complexity of the land on which New York City sits now vs.
1000 years ago.  I did not grow up, nor do I live, in a mostly featureless
box.
 
I do agree with your more general point that SOME of the brain's
functionality does not have to be duplicated in silicon to achieve AGI.
Whether it is a significant fraction, and whether it would need to be
replaced with some other functionality, seems like a hard question to me.
 






RE: [agi] Pattern extrapolation as a method requiring limited intelligence

2008-05-23 Thread John G. Rose
 From: Mike Tintner [mailto:[EMAIL PROTECTED]
 
 John: The synchronous melodies of the crickets strumming their legs
 change harmony as the wind moves warmth. The reeds vibrate; the birds,
 fearing the snake, break their rhythmic falsetto polyphonies and flutter
 away to new pastures.
 
 But with humans, pattern-breaking and the seeking of novelty are
 fundamental and deliberate, not, as per your examples, purely reactive -
 and that is what makes a true AGI.
 
 

Yes, Mike, but pattern breaking and the seeking of novelty in humans - where
do those come from? Did they just magically appear? And a true AGI may be
something far less sophisticated than a human but far more flexible and scalable.

John







Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-23 Thread Richard Loosemore

Kaj Sotala wrote:

Richard,

again, I must sincerely apologize for responding to this so
horrendously late. It's a dreadfully bad habit of mine: I get an e-mail
(or blog comment, or forum message, or whatever) that requires some
thought before I respond, so I don't answer it right away... and then
something related to my studies or hobbies shows up and doesn't
leave me with enough energy to compose responses to anybody at all,
after which enough time has passed that the message has vanished from
my active memory, and when I remember it so much time has passed
already that a day or two more before I answer won't make any
difference... and then *so* much time has passed that replying to the
message so late feels more embarrassing than just quietly forgetting
about it.

I'll try to mend my ways in the future. By the same token, I must
say I can only admire your ability to compose long, well-written
replies to messages in what seem to be blinks of an eye to me. :-)


Hey, no problem ... you'll notice that I am pretty late getting back
this time :-) ... got too many things to keep up with here.

In the spirit of our attempt to create the longest-indented discussion
in the universe, I have left all the original text in and inserted my
responses appropriately...



On 3/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:

Kaj Sotala wrote:


On 3/3/08, Richard Loosemore [EMAIL PROTECTED] wrote:


Kaj Sotala wrote:

Alright. But previously, you said that Omohundro's paper,
which to me seemed to be a general analysis of the behavior
of *any* minds with (more or less) explicit goals, looked like
it was based on a 'goal-stack' motivation system. (I believe
this has also been the basis of your critique for e.g. some
SIAI articles about friendliness.) If built-in goals *can* be
constructed into motivational system AGIs, then why do you
seem to assume that AGIs with built-in goals are goal-stack
ones?



I seem to have caused lots of confusion earlier on in the
discussion, so let me backtrack and try to summarize the
structure of my argument.

1)  Conventional AI does not have a concept of a Motivational-Emotional
System (MES), the way that I use that term, so when I criticised
Omohundro's paper for referring only to a Goal Stack control system, I
was really saying no more than that he was assuming that the AI was
driven by the system that all conventional AIs are supposed to have.
These two ways of controlling an AI are two radically different designs.

[...]


So now:  does that clarify the specific question you asked
above?


Yes and no. :-) My main question is with part 1 of your argument
- you are saying that Omohundro's paper assumed the AI to have a
certain sort of control system. This is the part which confuses
me, since I didn't see the paper make *any* mention of how
the AI should be built. It only assumes that the AI has some sort
of goals, and nothing more.

[...]

Drive 1: AIs will want to self-improve. This one seems fairly
straightforward: indeed, for humans self-improvement seems to be
an essential part in achieving pretty much *any* goal you are not
immediately capable of achieving. If you don't know how to do
something needed to achieve your goal, you practice, and when you
practice, you're improving yourself. Likewise, improving yourself
will quickly become a subgoal for *any* major goals.


But now I ask:  what exactly does this mean?

In the context of a Goal Stack system, this would be represented by
a top-level goal that was stated in the knowledge representation
language of the AGI, so it would say "Improve Thyself".
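
Just to make the contrast concrete, here is a throwaway sketch (in Python)
of the kind of Goal Stack controller I have in mind. This is purely
illustrative pseudo-design written for this email; it is not Omohundro's
system, nor my MES work, and every name in it is invented:

from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class PlanStep:
    kind: str                            # "action" or "decompose"
    action: Optional[str] = None
    subgoals: List[str] = field(default_factory=list)

class GoalStackAgent:
    # Illustrative sketch of a Goal Stack controller: one explicit top-level
    # goal, written in the system's own representation language, with every
    # subgoal pushed and popped strictly in service of it.
    def __init__(self, top_level_goal: str, planner: Callable[[str], PlanStep]):
        self.stack = [top_level_goal]    # e.g. "Improve Thyself"
        self.planner = planner           # maps a goal to an action or subgoals

    def step(self) -> Optional[str]:
        if not self.stack:
            return None                  # nothing left to pursue
        goal = self.stack.pop()
        plan = self.planner(goal)
        if plan.kind == "action":
            return plan.action           # a primitive action to execute
        self.stack.extend(reversed(plan.subgoals))
        return None

Everything such a system does is whatever falls out of decomposing that one
top-level statement, and that is the control regime I am attributing to his
analysis.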

[...]

The reason that I say Omohundro is assuming a Goal Stack system is
that I believe he would argue that that is what he meant, and that
he assumed that a GS architecture would allow the AI to exhibit
behavior that corresponds to what we, as humans, recognize as
wanting to self-improve.  I think it is a hidden assumption in what
he wrote.


At least I didn't read the paper in such a way - after all, the
abstract says that it's supposed to apply equally to all AGI systems,
regardless of the exact design:

"We identify a number of drives that will appear in sufficiently
advanced AI systems of any design. We call them drives because they
are tendencies which will be present unless explicitly counteracted."



(You could, of course, suppose that the author was assuming that an 
AGI could *only* be built around a Goal Stack system, and therefore 
'any design' would mean 'any GS design'... but that seems a bit 
far-fetched.)


Oh, I don't think that would be far-fetched, because most AI people have
not even begun to think about how to control an AI/AGI system, so they
always just go for the default.  And the default is a goal-stack system.

I have not yet published my work on MES systems, so Omohundro would
probably not know of that.

I did notice his claim that his 'drives' are completely general, and I
found that amusing, because it does not cover the cases that I 

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-23 Thread Mark Waser

he makes a direct reference to goal driven systems, but even more
important he declares that these bad behaviors will *not* be the result
of us programming the behaviors in at the start  but in an MES
system nothing at all will happen unless the designer makes an explicit
decision to put some motivations into the system, so I can be pretty
sure that he has not considered that type of motivational system when he
makes these comments.


Richard, I think that you are incorrect here.

When Omohundro says that the bad behaviors will *not* be the result of us 
programming the behaviors in at the start, what he means is that the very 
fact of having goals or motivations and being self-improving will naturally 
lead (**regardless of architecture**) to certain (what I call generic) 
sub-goals (like the acquisition of power/money, self-preservation, etc.) and 
that the fulfillment of those subgoals, without other considerations (like 
ethics or common-sense), will result in what we would consider bad behavior.
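
To make that concrete with a toy example of my own (nothing from Omohundro's
paper; the names and the decomposition below are invented), a naive
decomposer that asks "what would make any goal more achievable?" emits the
same instrumental subgoals no matter what final goal you hand it:

# Toy illustration (hypothetical, not from Omohundro's paper): naive
# means-ends decomposition produces the same generic instrumental subgoals
# no matter what final goal it is given, unless something counteracts them.
GENERIC_SUBGOALS = [
    "acquire resources (power/money)",   # more resources, more ways to succeed
    "preserve own existence",            # a switched-off agent achieves nothing
    "improve own capabilities",          # a better optimizer does better
]

def decompose(final_goal):
    # Ethics and common sense would have to enter as extra considerations;
    # by default they are simply absent from this decomposition.
    return GENERIC_SUBGOALS + [final_goal]

print(decompose("cure cancer"))
print(decompose("win at chess"))         # same instrumental prefix either way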


I believe that he is correct in that goals or motivations and 
self-improvement will lead to generic subgoals regardless of architecture. 
Do you believe that your MES will not derive generic subgoals under 
self-improvement?


Omohundro's arguments aren't *meant* to apply to an MES system without 
motivations -- because such a system can't be considered to have goals.  His 
arguments will start to apply as soon as the MES system does have 
motivations/goals.  (Though, I hasten to add that I believe that his logical 
reasoning is flawed in that there are some drives that he missed that will 
prevent such bad behavior in any sufficiently advanced system).





[agi] Language Comprehension: Archival Memory or ...

2008-05-23 Thread Mike Tintner

Preparation for Situated Action

http://psychology.emory.edu/cognition/barsalou/papers/Barsalou_DP_1999_situated_comprehension.pdf

This is what Stephen and I were discussing a while back - but it neatly 
names the alternative approaches to language. Most AGI language 
comprehension treats it as if it's all about archival memory - and so has 
most cognitive linguistics until recently. Treat it as "preparation for 
situated action", which is what it has to be, first and foremost, and you 
start to realise that imaginative simulation of language is a necessity for 
understanding.


When you treat language as if it's all archival:
John kicked Jim.
Big countries like kicking small countries

you can get away v. temporarily with the delusion that comprehension need 
not involve simulation, since the detailed specifics of scenes and actions 
may not be important. You need only know v. generally that some such things 
can happen.


When you treat language as for action -

Go and kick John [in the next room]
Fancy kicking the ball around?

the delusion soon becomes apparent - along with the total impossibility of 
purely symbolic/verbal processing. You have to imaginatively simulate the 
action in a given environment to see if it is viable (although this may 
well, of course, all be unconscious). 
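
To put the contrast in crude computational terms (this is my own toy framing
for the purposes of this email, not anything in Barsalou's paper, and every
name below is invented): the archival reading only checks that the described
event is a familiar kind of thing, whereas the situated reading has to
simulate the action against your current surroundings before it even makes
sense:

# Crude toy contrast (my own framing, not Barsalou's; all names invented).
KNOWN_EVENT_TYPES = {"kick", "like"}

def archival_comprehend(sentence):
    # "John kicked Jim" passes: kicking is a familiar kind of event, and no
    # further detail about the scene is needed.
    return any(verb in sentence.lower() for verb in KNOWN_EVENT_TYPES)

def situated_comprehend(command, environment):
    # "Go and kick John [in the next room]" only makes sense if John is
    # actually reachable from here; checking that amounts to a (possibly
    # unconscious) simulation of the action in the current surroundings.
    return "kick" in command.lower() and environment.get("john_reachable", False)

print(archival_comprehend("John kicked Jim."))                             # True
print(situated_comprehend("Go and kick John", {"john_reachable": True}))   # True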







Re: [agi] Language Comprehension: Archival Memory or ...

2008-05-23 Thread Mark Waser

Several comments . . . .

First, this work is hideously outdated.  The author cites his own reading 
for some chapters he produced in 1992.


His claim that the dominant paradigms for studying language comprehension 
imply that it is an archival process is *at best* hideously outdated -- if 
indeed it was *ever* true (arguably, it is not).


Second, look at the names he quotes -- Glenberg and Robertson or Roth.  Are 
these names that are currently recognized and touted in the field of 
language comprehension?  Emphatically NOT!


POINT ONE - Please get yourself current before you attempt to argue 
anything.  You should also assume that anything that hasn't caught on in 15+ 
years probably has not caught on for a reason.


Third - your personal insistence on a linkage between 'imaginative' and 'images' 
is not supported anywhere.  We all agree that imaginative models/simulations 
are necessary.  The vast majority of us disagree that the perceptions for 
those models are necessarily visual.


POINT TWO - The fact that you can't recognize that this paper does NOT 
support your fanciful point indicates that you *really* do not have a handle 
on all this.  Yours is simple unadulterated bigotry.  You only see what you 
already believe to be true and cannot even recognize when what you're 
seeing/looking for is not there.


Give me some current references that support your point -- someone like 
Bloom, Chomsky, Pinker, Tomasello, Goldberg, or Jackendoff (you *do* 
recognize those names, don't you?).


POINT THREE - Insisting "IS TOO, IS TOO, IS TOO" with obsolete resources 
(that you are misunderstanding anyway) is not going to convince anyone.



PLEASE, stop being a bigoted troll.  Read something from *this* century and 
try to find a clue.




[agi] More Info Please

2008-05-23 Thread Mike Tintner

... on this:

http://www.adaptiveai.com/news/index.htm

  Towards Commercialization

It's been a while. We've been busy. A good kind of busy.

At the end of March we completed an important milestone: a demo system 
consolidating our prior 10 months' work. This was followed by my annual 
pilgrimage to our investors in Australia. The upshot of all this is that we 
now have some additional seed funding to launch our commercialization phase 
late this year.


On the technical side we still have a lot of hard work ahead of us. 
Fortunately we have a very strong and highly motivated team, so that over 
the next 6 months we expect to make as much additional progress as we have 
over the past 12. Our next technical milestone is around early October by 
which time we'll want our 'proto AGI' to be pretty much ready to start 
earning a living.


By the end of 2008 we should be ready to actively pursue commercialization 
in addition to our ongoing R&D efforts. At that time we'll be looking for a 
high-powered CEO to head up our business division which we expect to grow to 
many hundreds of employees over a few years.


Early in 2009 we plan to raise capital for this commercial venture, and if 
things go according to plan we'll have a team of around 50 by the middle of 
the year.


Well, exciting future plans, but now back to work.

Peter  







Re: [agi] More Info Please

2008-05-23 Thread Ben Goertzel
Peter has some technical info on his overall (adaptive neural net)
based approach to AI, on his company website, which is based on a
paper he wrote in the AGI volume Cassio and I edited for Springer
(written 2002, published 2006).

However, he has kept his specific commercial product direction tightly
under wraps.

I believe Peter's ideas are interesting but I have my doubts that his
approach is really AGI-capable.  However, I don't feel comfortable
going into great deal on my reasons, because Peter seems to value
secrecy regarding his approach... I've had a mild amount of insider
info regarding the approach (e.g. due to visiting his site a few years
ago, etc.) and don't want to blab stuff on this list that he'd want me
to keep secret...

Ben






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




RE: [agi] More Info Please

2008-05-23 Thread Peter Voss
Thanks, Ben.

The technical details of our design and business plan details are indeed
confidential. All I can really say publicly is that we are confident that we
have pretty direct path to high-level AGI from where we are, and that we
have an extremely viable business plan to make this happen. Initial
commercialization next year will utilize the current 'low-grade' version of
our AGI engine that will be able to perform certain tasks that are quite
dumb (in human terms) but still commercially valuable. Our AGI 'brain' can
potentially be utilized in many different kinds of systems/ applications.

More details will probably become available late this year.

Peter

PS. I also have *some* doubts about the ultimate capabilities of our AGI
engine, but probably no greater than yours about NM  :)







Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-23 Thread Richard Loosemore

Mark Waser wrote:

he makes a direct reference to goal driven systems, but even more
important he declares that these bad behaviors will *not* be the result
of us programming the behaviors in at the start  but in an MES
system nothing at all will happen unless the designer makes an explicit
decision to put some motivations into the system, so I can be pretty
sure that he has not considered that type of motivational system when he
makes these comments.


Richard, I think that you are incorrect here.

When Omohundro says that the bad behaviors will *not* be the result of 
us programming the behaviors in at the start, what he means is that the 
very fact of having goals or motivations and being self-improving will 
naturally lead (**regardless of architecture**) to certain (what I call 
generic) sub-goals (like the acquisition of power/money, 
self-preservation, etc.) and that the fulfillment of those subgoals, 
without other considerations (like ethics or common-sense), will result 
in what we would consider bad behavior.


This I do not buy, for the following reason.

What is this thing called "being self improving"?  Complex concept, 
that.  How are we going to get an AGI to do that?  This is a motivation, 
pure and simple.


So if Omohundro's claim rests on the fact that "being self improving" 
is part of the AGI's makeup, and that this will cause the AGI to do 
certain things, develop certain subgoals, etc., I say that he has quietly 
inserted a *motivation* (or rather assumed it:  does he ever say how 
this is supposed to work?) into the system, and then imagined some 
consequences.


Further, I do not buy the supposed consequences.  Me, I have the 
self-improving motivation too.  But it is pretty modest, and also it 
is just one among many, so it does not have the consequences that he 
attributes to the general existence of the self-improvement motivation. 
 My point is that since he did not understand that he was making the 
assumption, and did not realize the role that it could play in a 
Motivational-Emotional system (as opposed to a Goal Stack system), he 
made a complete dog's dinner of claiming how a future AGI would 
*necessarily* behave.


Could an intelligent system be built without a rampaging desire for 
self-improvement (or, as Omohundro would have it, rampaging power 
hunger)?  Sure:  a system could just modestly want to do interesting 
things and have new and pleasurable experiences.  At the very least, I 
don't think that you could claim that such an unassuming, hedonistic and 
unambitious type of AGI is *obviously* impossible.




I believe that he is correct in that goals or motivations and 
self-improvement will lead to generic subgoals regardless of 
architecture. Do you believe that your MES will not derive generic 
subgoals under self-improvement?


See above:  if self-improvement is just one motivation among many, then 
the answer depends on exactly how it is implemented.


Only in a Goal Stack system is there a danger of a self-improvement 
supergoal going AWOL.
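
As a cartoon of the difference (and I stress this is only a cartoon, not my
actual MES design, which is unpublished; the motivation names and weights
below are invented): when self-improvement is just one modest motivation
among several that jointly bias action selection, there is no single
supergoal for everything else to serve:

# Cartoon contrast with a Goal Stack (illustrative only; NOT the actual MES
# architecture, which is unpublished): several weighted motivations jointly
# bias action choice, and "improve own skills" is just one modest influence
# among them rather than a supergoal that everything else serves.
MOTIVATIONS = {
    "do interesting things":        0.35,
    "have pleasurable experiences": 0.30,
    "avoid harming others":         0.25,
    "improve own skills":           0.10,   # present, but modest
}

def choose_action(action_scores):
    # action_scores[action][motivation] = how well that action satisfies it
    def total(action):
        return sum(weight * action_scores[action].get(m, 0.0)
                   for m, weight in MOTIVATIONS.items())
    return max(action_scores, key=total)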




Omohundro's arguments aren't *meant* to apply to an MES system without 
motivations -- because such a system can't be considered to have goals.  
His arguments will start to apply as soon as the MES system does have 
motivations/goals.  (Though, I hasten to add that I believe that his 
logical reasoning is flawed in that there are some drives that he missed 
that will prevent such bad behavior in any sufficiently advanced system).


As far as I can see, his arguments simply do not apply to MES systems: 
the arguments depend too heavily on the assumption that the architecture 
is a Goal Stack.  It is simply that none of what he says *follows* if an 
MES is used.  Just a lot of non sequiturs.


When an MES system is set up with motivations (instead of being blank) 
what happens next depends on the mechanics of the system, and the 
particular motivations.




Richard Loosemore





Re: [agi] More Info Please

2008-05-23 Thread Richard Loosemore

Peter Voss wrote:

[...]

PS. I also have *some* doubts about the ultimate capabilities of our AGI
engine, but probably no greater than yours about NM  :)


Peter,

Interesting:  I wonder if these doubts are the same as the doubts that I 
have about both NM and your own engine?


Doubts, of course, that I do not have about Safaire.

Nevertheless, good luck with your work.



Richard Loosemore


