The concept of the "dispassionate observer" got me wondering: can
intelligence be decomposed? I think much of the confusion about the
definition of intelligence comes down to conflating intelligent
thought/understanding with intelligent action/behavior.

The understanding-based definition of intelligence: Intelligence is
the accurate modeling of the environment through information obtained
by the senses, irrespective of any behavior taken based on that
understanding.

The behavior-based definition of intelligence: Intelligence is
effective goal-seeking behavior within the environment, irrespective
of any model of the environment used to determine that behavior.

It seems to me that if we were to build a system capable of
constructing an accurate model of the environment, goal-seeking
behavior would be relatively trivial to implement on top of this. This
suggests a possible solution to the "friendly AI" problem: Keep the
modeling system physically separate from the goal-seeking system. In
the event the goal-seeking system goes awry, throw a kill switch that
prevents it from accessing the modeling system. Without the capability
for understanding, it ceases to behave intelligently, and is
effectively contained.
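To make the proposed separation concrete, here is a minimal sketch in
Python. All names (WorldModel, GoalSeeker, kill_switch) are hypothetical
illustrations of the idea, not an actual design: the behavior component
holds only a reference to the modeling component, and the kill switch
simply severs that reference.

```python
class WorldModel:
    """Stands in for the understanding component: it only builds and
    answers queries about a model of the environment (hypothetical)."""
    def __init__(self):
        self.state = {}

    def observe(self, sense_data):
        # Incorporate new sensory information into the model.
        self.state.update(sense_data)

    def predict(self, query):
        # Answer a query from the current model of the environment.
        return self.state.get(query)


class GoalSeeker:
    """Stands in for the behavior component. It can act intelligently
    only while it holds a reference to a WorldModel (hypothetical)."""
    def __init__(self, model):
        self._model = model

    def kill_switch(self):
        # Sever access to the modeling system.
        self._model = None

    def act(self, goal):
        if self._model is None:
            # Without understanding, no intelligent behavior remains.
            return "inert"
        prediction = self._model.predict(goal)
        return f"pursue {goal} given {prediction}"


model = WorldModel()
model.observe({"door": "open"})
agent = GoalSeeker(model)
print(agent.act("door"))   # acts using the model: "pursue door given open"
agent.kill_switch()
print(agent.act("door"))   # model access severed: "inert"
```

The point of the sketch is that containment here is structural rather
than behavioral: nothing about the goal-seeker's internals changes when
the switch is thrown; it is simply cut off from the source of its
understanding.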




Aaron Hosford


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now