Arthur,

I disagree with your 'strong' dismissal of a viable AGI Morality and Ethics
System, hereafter an 'AGI MES'.

My reasoning is that, by definition, AGI systems must be developed in a
complex environment to solve complex problems, or risk complete
irrelevance.  So, if we assume that the AGI has a certain level of
'intelligence' - or, more accurately, an 'advanced' approach to complex
problem solving that involves an ability to identify, abstract, and
cognitively manipulate patterns of behavior, both its own and those of
others (note: a sense of self-awareness) - in the complex environment, then
the AGI is forced to develop and/or optimize a set of internal rules or
guidelines to efficiently effectuate its interactions in that environment.
Game-theoretic considerations suggest that, in such a case, the AGI will
seek to optimize the effectuation of its behavior patterns, which are
ultimately characterized by the allocation of its limited resources such as
time, materiel, and energy (effort).

The point is that with a reasonably 'intelligent' AGI, some kind of 'AGI
MES' is assured, and it can be guided through 1. a complex environment, 2.
behavioral pattern identification, abstraction, and manipulation, and 3.
game-theoretic consideration of the allocation of its limited resources
such as time, materiel, and energy (effort).  Furthermore, it seems to me
that the degree to which we can ultimately influence the 'AGI MES'
vis-a-vis the 'HUMAN MES' is closely correlated with the degree to which we
can liken these three ingredients for developing the MES, starting with 1.
likening the two complex environments to one another, i.e. the complex
human environment and the complex AGI environment.
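To make the game-theoretic point concrete: the idea that behavior patterns are ultimately characterized by the allocation of limited resources can be sketched as a toy optimization.  The following is only an illustrative sketch - the pattern names, values, and costs are invented for the example, and a real AGI's internal rules would of course be far richer than a greedy value-per-effort heuristic:

```python
# Toy sketch: an agent spending a limited "effort" budget across
# competing behavior patterns, favoring whichever pattern yields the
# most value per unit of effort (a fractional-knapsack heuristic).
# All names and numbers below are illustrative assumptions.

def allocate_effort(patterns, budget):
    """Greedily allocate a limited effort budget to the patterns with
    the highest value-per-effort ratio.  Each pattern is a tuple
    (name, value, effort_cost); effort may be split fractionally."""
    allocation = {}
    remaining = budget
    # Consider patterns in order of value density, best first.
    for name, value, cost in sorted(
            patterns, key=lambda p: p[1] / p[2], reverse=True):
        if remaining <= 0:
            break
        spend = min(cost, remaining)
        allocation[name] = spend
        remaining -= spend
    return allocation

patterns = [
    ("cooperate", 9.0, 3.0),   # high value per unit of effort
    ("explore",   4.0, 4.0),
    ("hoard",     2.0, 5.0),   # low value per unit of effort
]
print(allocate_effort(patterns, budget=5.0))
# -> {'cooperate': 3.0, 'explore': 2.0}
```

Under this toy model, the internal "rules" the agent settles on fall out of the resource constraint itself, which is the sense in which some MES-like structure seems assured.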

In any case, Arthur, I laud your approach to AI incubation and development.
I've followed your work since 1999 (check your personal email) and believe
that you are a real pioneer in your 'open kimono' approach to AI, which,
together with open venues for the exchange of ideas like this mailing list,
will ultimately shed light on how to effectively deal with such an AGI MES.

Just my $0.02 worth before you run off and enjoy your mountain day-trip.

EGHeflin

PS Good luck with your "Zuno no Gakusetsu" link to "AI4U" on
www.amazon.co.jp.  Did you write the Japanese?  Kore-wa honto-ni sugoi
desu!  ("This is really amazing!")

----- Original Message -----
From: "Arthur T. Murray" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, February 12, 2003 12:23 PM
Subject: Re: [agi] AI Morality -- a hopeless quest


>
> On Wed, 12 Feb 2003, Michael Roy Ames wrote:
>
> > Arthur T. Murray wrote:
> > >
> > > [snippage]
> > > why should we creators of Strong AI have to take any
> > > more precautions with our Moravecian "Mind Children"
> > > than human parents do with their human babies?
> > >
> >
> > Here are three reasons I can think of, Arthur:
> >
> > 1) Because we know in advance that 'Strong AI', as you put it, will be
> > very much smarter and very much more capable than we are - that is not
> > true in the human scenario.
> >
> > 2) If we don't get AI morality right the first time (or very close to
> > it), it's "game over" for the human race.
> >
> > 3) Attempting to develop 'Strong AI' without spending time getting the
> > morality-bit correct, may cause a governmental agency to squash you like
> > a bug.
> >
> > And I didn't even have to think very hard to come up with those... I'm
> > sure there are other reasons.  Could you articulate the reasons why you
> > think the 'quest' is hopeless?
> >
> > Michael Roy Ames
> >
> ATM:
> The quest is as hopeless as it is with human children.
> Although Bill Hibbard singles out "the power of super-intelligence"
> as the reason why we ought to try to instill morality and friendliness
> in our AI offspring, such offspring are made in our own image and
> likeness:  receptive to parental ideas, but ultimately on their own.
>
> DISCLAIMERS
> - In less than one hour I will go on a mountain day-trip
>   and not be on-line to answer even the most personal queries.
> - Your extremely high-powered lurkership and your AGI regulars
>   are a lot sharper than I am (it takes a "village"-idiot to raise
>   http://mind.sourceforge.net/jsaimind.html an AGI child :-)
>   so I am not well equipped to debate the philosophic fine points.
> - My AI is lo! these several years already out there (unlike the
>   annoyingly SECRET projects of Ben Goertzel, Voss, Lenat et al.),
>   and so I am propagandizing the Mentifex AI to the hilt, viz.,
>   a few minutes ago in Usenet news:rec.arts.ascii et al-ng I posted
>   Subject: [PIC] Zuno no Gakusetsu with a link to "AI4U" at
>   http://www.amazon.co.jp/exec/obidos/ASIN/0595259227 in Japan.
>
> The race is on.  Let us try to rear decent, friendly AI children,
> but ultimately we have no control over them.
>
> Arthur T. Murray
> --
> http://www.scn.org/~mentifex/theory5.html -- AI4U Theory of Mind;
> http://www.scn.org/~mentifex/jsaimind.html -- Tutorial "Mind-1.1"
> http://www.scn.org/~mentifex/mind4th.html -- Mind.Forth Robot AI;
> http://www.amazon.com/exec/obidos/ASIN/0595259227/ -- book "AI4U"
>
> -------
> To unsubscribe, change your address, or temporarily deactivate your
> subscription, please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]
>
