>  > An attractor is a set of states that are repeated given enough time.  If
>  >  agents are killed and not replaced, you can't return to the current state.
>
>  False. There are certainly attractors that disappear, first
>  seen by Ruelle and Takens (1971); it's called a "blue sky catastrophe":
>
>  http://www.scholarpedia.org/article/Blue-sky_catastrophe
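
For a concrete toy of an attractor ceasing to exist as a parameter
drifts: the blue-sky catastrophe proper destroys a periodic orbit, but
the qualitative point already shows up in the one-dimensional
saddle-node normal form dx/dt = r + x^2. A minimal Python sketch (my
own toy example, not taken from the article above):

    # dx/dt = r + x**2: for r < 0 there is a stable fixed point at
    # -sqrt(-r) (the attractor) and an unstable one at +sqrt(-r);
    # at r = 0 they collide and annihilate, and for r > 0 there are
    # no fixed points at all, so every trajectory escapes.

    def simulate(r, x0=-2.0, dt=1e-3, t_max=20.0):
        # forward Euler; returns the final state, or None on escape
        x, t = x0, 0.0
        while t < t_max:
            x += dt * (r + x * x)
            t += dt
            if x > 1e6:
                return None  # the attractor is gone; x runs away
        return x

    for r in (-1.0, -0.25, 0.25):
        print("r = %+.2f ->" % r, simulate(r))
    # r = -1.00 -> about -1.0   (settled on the attractor)
    # r = -0.25 -> about -0.5
    # r = +0.25 -> None         (no attractor left to return to)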

Relatedly, you should look at Mikhail Zak's work on "terminal attractors",
which he developed in the context of neural nets, as I recall.

These are attractors which a system zooms into, stays in for a while, and
then zooms out of....  They occur when the differential equation generating
the dynamical system has a right-hand side with points of
nondifferentiability at the attractor: the dynamics there are non-Lipschitz,
so trajectories can reach (and later leave) the attractor in finite time,
rather than only approaching it asymptotically. A sketch follows.
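
Here is a minimal sketch of the mechanism (my own illustration, using
the usual non-Lipschitz example dx/dt = -x^(1/3); plain Euler in
Python, nothing fancy):

    # Compare an ordinary attractor, dx/dt = -x, with a terminal
    # attractor, dx/dt = -x**(1/3).  The cube root is
    # nondifferentiable at x = 0, so the dynamics there are
    # non-Lipschitz and the trajectory reaches 0 exactly, at the
    # finite time (3/2) * x0**(2/3); the linear system only decays
    # toward 0 asymptotically and never arrives.

    def cbrt(x):
        # real, sign-preserving cube root
        return abs(x) ** (1.0 / 3.0) * (1.0 if x >= 0 else -1.0)

    def time_to_settle(f, x0=1.0, dt=1e-3, t_max=5.0, tol=1e-4):
        # crude forward Euler; first time |x| < tol, or t_max
        x, t = x0, 0.0
        while t < t_max and abs(x) >= tol:
            x += dt * f(x)
            t += dt
        return t, x

    print(time_to_settle(lambda x: -x))        # (5.0, ~6.7e-3): still decaying
    print(time_to_settle(lambda x: -cbrt(x)))  # (~1.5, <1e-4): settled on schedule

The zoom-out half of the story comes from the mirror-image "terminal
repeller" dx/dt = +x^(1/3), which ejects trajectories from x = 0 in
finite time by the same non-Lipschitz mechanism.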

Of course, you may be specifically NOT looking for this kind of attractor
in your Friendly AI theory ;-)

-- Ben
