DeepMind recently moved toward a line of research that amounts to a
challenge for the utopian side of AGI: the 'autopoiesis' of learning.
They call it 'meta-reinforcement learning' (and have also introduced the
'agent' concept), but it will probably lead to no results either. There
can be no agency without noogenesis. In decision making we usually call
it 'free will', but technically it is only the agency of emergent
intelligence.
This makes almost self-evident the need to define learning as a system
(designed with the Viable System Model approach) rather than as a
process. The product of that system should be noogenesis, and it could
be organized in a manner similar to the ontologies of the system of
knowledge.
On 03.03.2019 01:49, [email protected] wrote:
AlphaZero: DeepMind’s AI Works Smarter, not Harder
https://www.youtube.com/watch?v=1gWpFuQlBsg
------------------------------------------
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T9a89bcc2ec4674d6-M932c3926a1b84e9158d76d14
Delivery options: https://agi.topicbox.com/groups/agi/subscription