Re: [agi] Breaking AIXI-tl

2003-02-14 Thread Michael Roy Ames
Eliezer S. Yudkowsky asked Ben Goertzel:

  Do you have a non-intuitive mental simulation mode?


LOL  --#:^D

It *is* a valid question, Eliezer, but it makes me laugh.

Michael Roy Ames
[Who currently estimates his *non-intuitive mental simulation mode* to
contain about 3 iterations of 5 variables each - 8 variables each on a
good day.  Each variable can link to a concept (either complex or
simple)... and if that sounds to you like something that a trashed-out
Commodore 64 could emulate, then you have some idea how he feels being
stuck at his current level of non-intuitive intelligence.]





Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Arthur T. Murray wrote:

 [snippage]
 why should we creators of Strong AI have to take any
 more precautions with our Moravecian Mind Children
 than human parents do with their human babies?


Here are three reasons I can think of, Arthur:

1) Because we know in advance that 'Strong AI', as you put it, will be
very much smarter and very much more capable than we are - that is not
true in the human scenario.

2) If we don't get AI morality right the first time (or very close to
it), it's game over for the human race.

3) Attempting to develop 'Strong AI' without spending time getting the
morality bit correct may cause a governmental agency to squash you like
a bug.

And I didn't even have to think very hard to come up with those... I'm
sure there are other reasons.  Could you articulate the reasons why you
think the 'quest' is hopeless?

Michael Roy Ames





Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Brad Wyble wrote:
 I don't think any human alive has the moral and ethical underpinnings
 to allow them to resist the corruption of absolute power in the long
 run.


I am exceedingly glad that I do not share your opinion on this.  Human
altruism *is* possible, and indeed I observe myself possessing a
significant measure of it.  Anyone doubting their ability to 'resist
corruption' should not IMO be working in AGI, but should be doing some
serious introspection/study of their goals and motivations. (No offence
intended, Brad)

Michael Roy Ames





Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Brad Wyble wrote:

 Under the ethical code you describe, the AGI would swat
 them like a bug with no more concern than you swatting a mosquito.


I did not describe an ethical code; I described two scenarios about a
human (myself), then suggested that the non-bug-swatting scenario was
possible, analogically, for an AGI.



 All I'm trying to do is shift the focus for a few moments to our own
 ethical standards as people.  If we were put into the shoes of an
 AGI, would we behave well towards the inferior species?


I presume from the phrase "if we were put into the shoes of an AGI"
that human morality and ethics would come along for the ride.  If that
is what you meant, then what happens depends on which human you pick.  I
have observed altruism and cruelty, obsession and indifference in human
behaviour toward other species.  It bears some thinking about just
exactly what one would do in such a situation... I know I have often
thought about it.



 Philip brings up the point that a community of AGIs could possibly
 self-police.  I agree.


I don't.  Policing is only useful/meaningful within a community of
almost-equal actors that each have very little real power.  If the
actors are not almost equally powerful, then you have the 'human and a
bug' scenario.  If the actors have a very large amount of power, then a
single 'transgression' could wipe us all out before any 'policing
action' could be initiated.



 Nor, would one presume, on an AGI's.  They might end up with it
 anyway.


I would not presume that so readily.  Taking it as a given that we are
discussing a Friendly AGI, I would say that there would be significant
utility in obtaining a great deal of power.  Not to 'Lord it over the
petty humans', but to protect them from both internal and external
threats.


Michael Roy Ames





[agi] Re: Games for AIs

2002-12-12 Thread Michael Roy Ames
Tony,

Thanks for sharing your ideas (sorry for the erroneous naming of
shape-world).  We seem to agree that the lessons (for an AI) would need
to start off very simply, and gradually build up mental tools and
techniques by using many different games to build slightly different
aspects of cognition.

The idea of putting a baby AI in a simulated world where it might learn
cognitive skills is appealing.  But I suspect that it will take a huge
number of iterations for the baby AI to learn the needed lessons in that
situation.  I think it will be faster to give more constrained and
structured learning first; then, when the AI is capable of understanding
the 'game world', the 'game rules' and the 'game interface', it could
play video games with the intention of *discovering* how it all works
together.  This is often what humans find interesting about video games:
the discovery aspect.  And this would be a valuable new skill to
develop: given this World, these Rules and this Interface, discover A,
B, C, where A, B and C could be many different things.  E.g.: reach the
highest level possible; stay alive the longest; rack up the most points;
rack up the fewest points without dying...  But this kind of exercise is
only going to be useful for mental development once the AI has very
significant capabilities.  I might almost say that, by the time
video-game-playing becomes beneficial to mental development, the AI
would be largely self-directed.

"Hey! Alan!  Wanna play Duke Nukem?"
"Okay Michael.  But I'm going to win this time."
"Oh, really?  How do you know?"
"Well, it's a bit complicated to explain.  Why don't I just show you?"
(gulp)

Michael Roy Ames


Tony Lofthouse wrote:
 Michael,

 You wrote:

 Tony Lofthouse: I've heard you are working on the shape-world
 interface.  Have you considered what games we might play in it?
 Ideas?

 To clarify this point. I am currently developing a 2D input capability
 for Novamente. It is a very crude form of vision that allows the
 presentation of (x, y) time series to the system. This should not be
 confused with the shape-world interface mentioned above. Whilst one
 may lead to the other, shape-world is not the current focus.

 Having said this I do have a couple of comments relating to AI games.
 Those of you who have had the opportunity to raise children will no
 doubt be well aware of the fact that children don't play TLoZ (or
 contemporary equivalent) until well into their childhood.

 There are many stages of learning before a child is capable of this
 level of sophistication. One of the first games that young children
 play is the categorisation game, i.e. "What shape is this?", "What
 colour is this?", "How many sides?", etc. I would expect to use the 2D
 world and Shape-world subsequently for the same purpose. This is
 followed by the comparison game, i.e. "Is this big?", "Is this small?",
 "Which is bigger?", etc. Then you have the counting game (sort of
 obvious). The relationship game, i.e. above, below, inside, outside.
 There are lots of these types of games!

 Then you move on to the reasoning game, i.e. "What comes next?", "What
 is missing?", "What is the odd one out?", etc.

 Now the child is ready to combine learning from these different games
 and moves on to story telling: both listening to stories and then
 telling them.

 Then there are several more years of honing these key skills whilst
 increasing the level of world knowledge and social understanding.

 Finally the child is ready to play TLoZ!

 So as you can see I think there is a lot to do before you get to play
 TLoZ with your baby AGI. That is the purpose of 2D World and then
 Shape-World.

 T
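
For what it's worth, here is a rough sketch of how one trial of the
categorisation game Tony describes might be posed over a 2D (x, y)
time-series channel.  This is not Novamente's actual interface, and
every name below is made up for illustration: a shape is presented as a
series of (x, y) points traced around its outline, and the question
"How many sides?" has a checkable answer.

    # Illustrative only: not Novamente's 2D input capability.
    import math

    def polygon_trace(sides, samples_per_side=8):
        """Present a regular polygon as an (x, y) time series traced
        around its outline."""
        verts = [(math.cos(2 * math.pi * k / sides),
                  math.sin(2 * math.pi * k / sides)) for k in range(sides)]
        trace = []
        for k in range(sides):
            (x0, y0), (x1, y1) = verts[k], verts[(k + 1) % sides]
            for t in range(samples_per_side):
                f = t / samples_per_side
                trace.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
        return trace

    def count_corners(trace, tolerance=0.1):
        """Crude answer to 'How many sides?': count sharp heading
        changes along the traced outline."""
        corners, n = 0, len(trace)
        for i in range(n):
            (ax, ay), (bx, by) = trace[i - 1], trace[i]
            (cx, cy) = trace[(i + 1) % n]
            h1 = math.atan2(by - ay, bx - ax)
            h2 = math.atan2(cy - by, cx - bx)
            turn = abs((h2 - h1 + math.pi) % (2 * math.pi) - math.pi)
            if turn > tolerance:
                corners += 1
        return corners

    # One categorisation 'trial': present the series, ask "How many sides?"
    for sides in (3, 4, 5, 6):
        print(sides, "->", count_corners(polygon_trace(sides)))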






Re: [agi] introduction

2002-12-10 Thread Michael Roy Ames
Damien Sullivan wrote:
 Hi!  I joined this list recently, figured I'd say who I am.  Well,
 some of you may know already, from extropians, where I used to post a
 fair bit :) or from my Vernor Vinge page.  But now I'm a first year
 comp sci/cog sci PhD student at Indiana University, hoping to work on
 extending Jim Marshall's Metacat in Hofstadter's lab.  Nothing much
 has really happened beyond hope and a few meetings and taking his
 group theory class.  I've been reading Eliezer's _Levels_ pages, and
 having Andy Clark's _Being There_ around, but mostly my life has been
 classes.  Mostly the OS class, actually.  Sigh.

 -xx- Damien X-)



Damien,

Hi.  I am quite interested in Jim's Metacat also.  It's on my To-Do list
to get it running under Linux... but the way my workload is going, I
think Jim will get his planned re-write done first. :)  It would be
interesting to hear about what new directions Metacat is going in.
Welcome to the list.

Michael Roy Ames
