That is not quite true. Each game could be reduced to a conveniently finite number of reactions and principles. So if someone wanted to waste his time, he could create a simple physics-like modelling program that could learn to play the games. The complexities could be refined or reduced to a relatively simple set of actions, and the computer programs could react instantaneously compared to a person. Human beings have to monitor everything going on around them - with some degree of preparedness - while the computer program does not. The fact that the program had some generality is interesting, but I think its success is due to the relative simplicity of the different games and their underlying similarities within their shared realm of artificial physics.

Jim Bromer
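P.S. A minimal sketch of the kind of feedback-driven learner discussed in this thread - tabular Q-learning on a toy one-dimensional "game" - assuming an invented 5-cell track where only reaching the last cell pays a reward (all names and parameters here are hypothetical, not DeepMind's actual method):

```python
import random

# Hypothetical toy "game": a 1-D track of 5 cells; the agent starts at
# cell 0 and earns +1 reward only on reaching cell 4. Actions: move
# left (-1) or right (+1). A stand-in for the "finite reactions and
# principles" a video game might reduce to.
N_CELLS = 5
ACTIONS = (-1, +1)

def step(state, action):
    """Apply one action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_CELLS - 1)
    if nxt == N_CELLS - 1:
        return nxt, 1.0, True
    return nxt, 0.0, False

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < epsilon:      # explore occasionally
                action = rng.choice(ACTIONS)
            else:                           # otherwise act greedily
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Standard Q-learning update toward reward + discounted value.
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = nxt
    return q

q = train()
# The learned greedy policy moves right from every non-terminal cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_CELLS - 1)}
print(policy)
```

The point is only that "learning from feedback" in such a reduced game needs no model of the game's physics at all - a lookup table suffices.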
On Thu, Sep 13, 2018 at 6:07 PM EdFromNH via AGI <[email protected]> wrote:
>
> If Demis Hassabis, the current leader of Google's DeepMind AI subsidiary, was able several years ago to create an artificially intelligent program that could learn to play each of many different video games much better than human players -- just from feedback from playing each such game -- then his program obviously had to be able to model the causal inference inherent in whatever video game it was learning. So there has obviously already been a lot of success in AIs learning causal inference automatically.
>
> On Thu, Sep 13, 2018 at 3:45 PM Nanograte Knowledge Technologies via AGI <[email protected]> wrote:
>>
>> Most interesting. Thanks for sharing. From the little I understand about this large body of work, this makes sense to me. However, I would contend that adopting what is called by some a network structure (closing loops in a 3-entity structure) would lead to confusing results.
>>
>> For example, one cannot reliably infer a vertex from that, which may then skew the rest of the structural results. I think it's a classic "cop-out" in systems design: when in doubt, close the loop to open the associative option, i.e., A => B and A => C, and B => C. Result: A indirectly causes C, but it was already inferred that A directly caused C. Did it, or didn't it?
>>
>> This would present as a self-made paradox, not so?
>>
>>
>> ________________________________
>> From: Robert Levy via AGI <[email protected]>
>> Sent: Thursday, 13 September 2018 10:08 PM
>> To: AGI
>> Subject: [agi] Judea Pearl on AGI
>>
>> I don't think I've seen a discussion on this mailing list yet about Pearl's hypothesis that causal inference is the key to AGI. His breakthroughs on causation have been in use for almost two decades.
>> The new Book of Why, beyond being the most accessible presentation of these ideas to a broader audience, is interesting in that it expressly goes into applying causal calculus to AGI.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T0f9fecad94e3ce7e-M8e75599773bc8eee6da3ebda
Delivery options: https://agi.topicbox.com/groups/agi/subscription
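A note on the A/B/C question raised earlier in the thread: in Pearl's framework a variable can have both a direct and an indirect effect on another without paradox, and interventions (do-calculus) separate the two. A toy sketch, assuming an invented linear structural causal model with made-up coefficients for the A => B, A => C, B => C structure:

```python
# Hypothetical linear SCM for the A => B, A => C, B => C structure.
# All coefficients are invented for illustration.
a_to_b = 2.0   # A -> B
a_to_c = 1.0   # A -> C (direct edge)
b_to_c = 3.0   # B -> C

def model(a):
    """Generate (b, c) from a under the assumed linear SCM (no noise)."""
    b = a_to_b * a
    c = a_to_c * a + b_to_c * b
    return b, c

def model_do_b(a, b):
    """Intervene: do(B = b), which cuts the A -> B edge."""
    return a_to_c * a + b_to_c * b

# Total effect of A on C: change in C per unit change in A,
# flowing through both the direct edge and the path via B.
_, c0 = model(0.0)
_, c1 = model(1.0)
total_effect = c1 - c0            # a_to_c + a_to_b * b_to_c

# Controlled direct effect: vary A while holding B fixed by intervention.
direct_effect = model_do_b(1.0, 0.0) - model_do_b(0.0, 0.0)

print(total_effect, direct_effect)
```

Here the total effect (7.0) and the direct effect (1.0) coexist and are both well defined, so "A directly caused C" and "A indirectly caused C via B" are compatible answers rather than a self-made paradox.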
