The basics of survival are to survive and... 
reproduce the next generation, 

Protect and teach your young, giving them as much knowledge as bestows on 
them the highest probability to survive and reproduce the next generation. 

This includes knowing who and what will harm you and your family, and 
setting up defensive and/or offensive methods to drive off or destroy the 
threat. 
 
The functions of a brain with the ability to store information... 
Knowing where the fruit is in the forest, 
Knowing when it is ripe. 
 
Evolving to a higher life form.

Evolution is the adaptation of the next generation to its environment.  

The better the next generation can adapt to its society, the more likely 
that generation is to produce more offspring. 

Other axioms exist... 
If your parents don’t have children neither will you.... 

Insanity is hereditary, you can get it from your children... 

There are no facts, only interpretations.  
     
Dan Goe


----------------------------------------------------
From : Charles D Hixson <[EMAIL PROTECTED]>
To : [email protected]
Subject : Re: [agi] Friendly AI in an unfriendly world... AI to the 
future socieities.... Four axioms (WAS Two draft papers  . . . .) 
Date : Sat, 10 Jun 2006 14:33:43 -0700
> [EMAIL PROTECTED] wrote:
> > If your AI was operating on the web it might find itself at a severe 
> > disadvantage with all of those con artists... 
> >
> > Your AI might lose badly... 
> >   
> Friendly does not equal trusting.  It does not equal stupid.  It does
> not equal "not being willing to learn from the experiences of others".
> > While being friendly might be nice,
> > I think that is a vulnerable position, open to being taken advantage of...
> >   
> I do not see this vulnerable position that you are talking about.  Could
> you be more explicit?  I suspect that you are attributing
> characteristics to a FAI that are not inherent to it.
> > If you are in a war game, simulation or real time, there might not be any 
> > friendly position toward those hostile to your wellbeing. 
> >   
> Yes.  Many job positions occupied by humans are not suitable jobs to be
> done by a FAI.  This generally means that the situations which create
> such jobs need to be redesigned.
> > Knowing your friends should be top priority. 
> >
> > Destroy or be destroyed might be the only way to survive.
> >   
> It is definitely possible for there to be environments within which an
> FAI is not adapted to survive.  But given the consequences this does not
> suffice to mean that constructing some other variety of AI is a better
> choice.
> > There may be moles or Trojans even within your domain. 
> >
> > Spies may be anyone or everyone. 
> >
> > I do not believe any AI would want to be friendly to anyone building 
> > armaments that are pointed at your cities, ships or bases... or those that 
> > threaten to cut off the flow of your oil. 
> >   
> AIs only have the instincts and goals that they are built with.  They
> are not, e.g., inherently territorial.  They do not even inherently want
> to survive.  If you want them to have the goal of surviving, then you
> must include that among their goals.  You appear to be presuming that an
> FAI would have the same goals and purposes that you do.  This is as
> unreasonable as attributing such a set of goals to BlueGene.  (And
> people often did this before computers became common.)  It's not quite
> as unreasonable as attributing such a set of goals to your car, as any
> AI *will* have goals.  But just exactly *what* the goals are is
> significantly important.
> (Others on the list believe that my evaluation of goals is overly
> simplified, and they may well be right.  The goals may be much more
> complex in nature...but they won't be the same as those of a human, or
> even as closely similar as those of a goldfish.  Goldfish are
> territorial sexually reproducing animals.  And we derive from common
> ancestry.   We diverged a long time ago, but there are still a multitude
> of similarities that would not exist in an independent creation such as
> an AI.  Aphids or ants *might* be divergent enough from us to be
> comparably alien...)
> 
> OTOH, we are going to want any AI that we create to understand us.  This
> means that the AI will need some way of modeling our goals, purposes,
> etc.  This implies that the AI will be able to EMULATE a goal structure
> similar to ours.  There is, however, a tremendous difference between
> emulating a goal structure and operating off of it.
> 
> 
> -------
> To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
> please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
