Wouldn't that be called adaptation with the goal of survival and reproduction? 

------------------------------------------------------------------------------------------------------------------------------------------------
> From: [email protected]
> Date: Fri, 12 Oct 2012 00:12:04 +0200
> Subject: [agi] RiskAI
> To: [email protected]
> 
> I would like to add one more entry to the long list of biologically or
> reality inspired, ALife-like architectures (or at least architectural
> directions and decisions): a cyberagent that primarily and constantly
> risk-manages its own existence and tries to survive. I will remind you
> that, according to contemporary biology, most (if not all) of the
> diversity and adaptations among species, including cognitive
> adaptations, are a result of reproductive competition - if the
> brighter gene/specimen has even a minuscule reproductive advantage
> over its peers, in the long run it becomes dominant with mathematical
> certainty. It is hard to argue with this reasoning, but it is worth
> pointing out that survival still outweighs reproduction as an
> imperative: if you cannot make it through infancy and childhood, or if
> you die too soon as a "parent", then your reproductive advantages are
> as good as nada.
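[The "mathematical certainty" claimed above can be seen in a toy sketch; all numbers here are hypothetical, chosen only to illustrate the compounding of a small relative advantage:]

```python
# A toy sketch (all numbers hypothetical) of how a minuscule reproductive
# advantage becomes dominant over many generations with near certainty.
ordinary, brighter = 1.0, 1.0          # equal starting populations
for _ in range(2000):                  # 2000 generations
    ordinary *= 1.100                  # baseline reproduction rate
    brighter *= 1.111                  # a mere 1% relative advantage
    total = ordinary + brighter
    ordinary, brighter = ordinary / total, brighter / total  # track shares

print(f"share of the brighter variant after 2000 generations: {brighter:.6f}")
```

[After 2000 generations the brighter variant holds essentially the entire population share, even though its per-generation edge is only 1%.]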
> 
> Psychosomatic considerations aside, we could divide an organism's risk
> management into conscious and unconscious, the latter corresponding
> perhaps to the immune system and all the kinds of resilience built into
> plants and animals so that they do not die at the first hurdle. Still,
> consciousness and culture are ever expanding into the unconscious
> through a variety of feedback loops: we try to boost our immune
> systems, move to safe neighborhoods, filter our water, and so on. In
> the same fashion, it would be desirable for a cyberagent not only to
> model risks at its "own" level of abstraction, for example by measuring
> the performance of its submodules under certain circumstances, but also
> to incorporate external risks such as power being unplugged, virus
> infection, thermonuclear war, and so on.
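[A minimal sketch of the idea above, with all names and numbers hypothetical: the agent keeps a registry of risks at both its "own" level of abstraction (submodule performance) and the external level (power loss, infection), and picks the most urgent one to mitigate:]

```python
# A minimal sketch (all names and probabilities hypothetical) of an agent
# that ranks risks across abstraction levels -- its own submodules as well
# as external threats -- and selects the most urgent one to mitigate.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    level: str          # "internal" (submodule performance) or "external"
    probability: float  # chance of the event per time window
    severity: float     # expected damage to continued existence, 0..1

    def expected_loss(self) -> float:
        return self.probability * self.severity

def most_urgent(risks: list) -> Risk:
    """Return the risk with the highest expected loss."""
    return max(risks, key=Risk.expected_loss)

risks = [
    Risk("submodule degradation", "internal", probability=0.30, severity=0.20),
    Risk("power unplugged",       "external", probability=0.05, severity=0.90),
    Risk("virus infection",       "external", probability=0.10, severity=0.70),
]
print(most_urgent(risks).name)  # the agent would mitigate this risk first
```

[The point of the sketch is only that internal and external risks live in one registry and are compared on a common expected-loss scale, rather than the agent reasoning solely about its own submodules.]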
> 
> I would not go so far as to assert that human cognition developed to
> model, with increasing accuracy, the rather convoluted "attack surface"
> of the human organism. But I find it infinitely more likely that the
> questioning, myth-making human mind initially, and for the longest
> time, managed risks in its environment, and only recently and briefly
> tackled gravity and gravitons. Obviously a very active imagination will
> be key to risk management, and as an added bonus we may get a
> cyberagent that will not "die" when we format the hard disk and unplug
> the computer - uh oh!
> 
> AT
> 
> 
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/19999924-5cfde295
> Modify Your Subscription: https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
                                          

