I think the problem with the whole neural network approach is that it
doesn't teach us anything. Given effectively unlimited computing resources,
certain problems can be solved (although NNs are never even error-free!),
but what comes next? It still seems so fake.

On Sun, 27 Oct 2019 at 20:51, <immortal.discover...@gmail.com> wrote:

> "Many recent works focus on using expensive reinforcement learning (RL)
> methods to solve this problem (Sermanet et al., 2018; Liu et al., 2017;
> Peng et al., 2018; Aytar et al., 2018). In contrast, high-fidelity
> imitation in humans is often cheap: in one-shot we can closely mimic a
> demonstration. Inspired by this, we introduce a meta-learning approach
> (MetaMimic — Figure 1) to learn high-fidelity one-shot imitation policies
> by off-policy RL. These policies, when deployed, require a single
> demonstration as input in order to mimic the new skill being demonstrated."
>
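To make the quoted idea concrete: the deployed policy is conditioned on a single demonstration and the current observation, and outputs an action that mimics the demonstrated skill. Below is a minimal sketch of that interface only, not the paper's actual architecture or training loop; the PyTorch setup, GRU encoder, dimensions, and bounded actions are all my own illustrative assumptions.

import torch
import torch.nn as nn

class DemoConditionedPolicy(nn.Module):
    """Sketch of a one-shot imitation policy: action = f(observation, demonstration)."""

    def __init__(self, obs_dim=32, act_dim=8, demo_embed_dim=64, hidden=128):
        super().__init__()
        # Encode the demonstration (a sequence of observations) into a single vector.
        self.demo_encoder = nn.GRU(obs_dim, demo_embed_dim, batch_first=True)
        # Policy head: current observation + demonstration embedding -> action.
        self.policy = nn.Sequential(
            nn.Linear(obs_dim + demo_embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, act_dim),
            nn.Tanh(),  # assumption: actions bounded in [-1, 1]
        )

    def forward(self, obs, demo):
        # demo: (batch, time, obs_dim); use the GRU's final hidden state as the embedding.
        _, h = self.demo_encoder(demo)
        demo_embedding = h[-1]  # (batch, demo_embed_dim)
        return self.policy(torch.cat([obs, demo_embedding], dim=-1))

# Usage with dummy data: one new 50-step demonstration drives the policy step by step.
policy = DemoConditionedPolicy()
demo = torch.randn(1, 50, 32)   # a single demonstration, given only at deployment
obs = torch.randn(1, 32)        # the agent's current observation
action = policy(obs, demo)      # (1, 8) action intended to mimic the demo

The point of the sketch is just the shape of the claim: no retraining happens at deployment; the new skill enters solely through the demonstration tensor.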


-- 
Stefan Reich
BotCompany.de // Java-based operating systems
