
On Monday, February 8, 2021 at 3:46:31 PM UTC Patrick Hammer wrote:

> Hi Jose!
>
> Thank you for initiating this interesting discussion!
> I guess there are truths in both Sutton's and Brooks's views; as often in AI, 
> the reality lies somewhere between the extremes! :)
> Undoubtedly, Deep Learning has made obsolete, for instance, the comparably 
> far less fruitful approach of manual feature engineering; here I agree with Sutton.
> On the other hand, Brooks has correctly identified that human expertise is 
> now utilized in the design process of the layers, models and loss functions 
> before their parameters are actually optimized. 
>
> Personally, I'm quite agnostic as to whether "human engineering" or 
> "offline-optimization within human-defined boundaries" is better; both are 
> just two different paradigms of engineering, which can also be combined.
> While offline-optimization (especially via supervised Deep Learning) has taken over 
> in many domains, for some cases explicit engineering is still superior. An 
> example is the famous legged Boston Dynamics robots: Boston Dynamics 
> engineers create physical models and feed them into Model Predictive 
> Controllers, instead of applying any Reinforcement Learning. While there is 
> plenty of research on using Reinforcement Learning in legged robots (often 
> in an RL-and-control hybrid approach), these solutions don't perform comparably 
> well so far. Part of the reason is that offline-optimization demands an 
> accurate simulation to work. This is clearly the case for computer 
> games and board games (where a perfect simulation is available), but not 
> so much for systems which need to operate in the real world!
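To make the contrast concrete, here is a toy sketch of the MPC idea in Python. This is purely my own illustration, not anything Boston Dynamics has published: the "robot" is just a 1-D point mass, and the planner is a crude random-shooting search, whereas real controllers use far richer physical models and proper trajectory optimizers. The point is only the structure: an explicit, hand-engineered model is rolled forward over a short horizon at every control step, the best candidate action sequence is found, its first action is applied, and the whole thing is re-planned from the new state.

```python
import numpy as np

# Toy receding-horizon MPC for a 1-D double integrator (position, velocity).
# A hand-written physical model replaces any learned policy: we predict the
# outcome of candidate action sequences, pick the best, apply its first
# action, then re-plan from the measured state.

DT = 0.1                              # integration time step [s]
HORIZON = 10                          # lookahead steps (1 second)
ACTIONS = np.linspace(-1.0, 1.0, 9)   # discretized accelerations

def step(state, a):
    """Explicit physical model: x' = x + v*dt, v' = v + a*dt."""
    x, v = state
    return np.array([x + v * DT, v + a * DT])

def rollout_cost(state, actions, target):
    """Predicted cost of an action sequence under the model."""
    cost = 0.0
    for a in actions:
        state = step(state, a)
        # penalize distance to target, speed, and control effort
        cost += (state[0] - target) ** 2 + 0.1 * state[1] ** 2 + 0.01 * a ** 2
    return cost

def mpc_action(state, target, n_samples=200, rng=np.random.default_rng(0)):
    """Random-shooting planner: best sampled sequence wins, return its first action."""
    best_a, best_cost = 0.0, np.inf
    for _ in range(n_samples):
        seq = rng.choice(ACTIONS, size=HORIZON)
        c = rollout_cost(state, seq, target)
        if c < best_cost:
            best_cost, best_a = c, seq[0]
    return best_a

state, target = np.array([0.0, 0.0]), 1.0
for _ in range(200):                  # 20 seconds of closed-loop control
    state = step(state, mpc_action(state, target))
print(f"final position: {state[0]:.2f}")
```

Notice that nothing was learned here; all the "knowledge" sits in the `step` model and the cost function. The trade-off the thread is discussing is exactly whether that model should be hand-engineered (as above) or replaced by an offline-optimized policy.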
>
> What matters to me personally is not the particular engineering paradigm 
> used to create systems for a specific purpose (via offline-optimization or 
> handcrafting), but whether the AI can effectively adapt, at runtime, to new 
> circumstances. That's a big challenge, and it is what distinguishes, at a high 
> level, natural evolution from natural intelligence (whether a single 
> individual can adapt, or whether multiple generations are necessary). Most 
> AGI systems, including OpenCog Prime, address this quite well in my opinion, 
> and realizing that was a large part of why I was pulled into 
> this wonderful research field!
> Recently, our team wrote a blog post on this topic, which also 
> addresses the "Generality vs. Specialization" issue you have touched on: 
> http://www.opennars.org/blog/post1.html
>
> Best regards,
> Patrick
>
> On Wednesday, November 11, 2020 at 1:42:43 AM UTC Jose Ignacio 
> Rodriguez-Labra wrote:
>
>> There was an earlier thread about Sutton's Bitter Lesson (link 
>> <http://incompleteideas.net/IncIdeas/BitterLesson.html>), which 
>> basically argues that general machine learning methods always end up better 
>> than specialized methods that encode human knowledge; most people in that 
>> thread seemed to agree. There is a response to it called The 
>> Better Lesson by Rodney Brooks (link 
>> <https://rodneybrooks.com/a-better-lesson/>), pointing out reasons why 
>> Sutton is wrong. I really recommend giving it a read. 
>>
>> It made me think about how using certain concepts that we already know 
>> about the world could actually be useful, rather than building a completely 
>> blank environment and having it learn everything from scratch. Why throw away 
>> all the patterns we've recognized already? Plus, we can't rely on 
>> ever-increasing compute (link 
>> <https://venturebeat.com/2020/07/15/mit-researchers-warn-that-deep-learning-is-approaching-computational-limits/>), 
>> which general methods depend on, and leaning on it perpetuates the 
>> ever-growing carbon footprint of the machine learning industry.
>>
>> There seems to be a duality between these two methodologies: generality 
>> and specialization. Which is the right approach? Or could they work 
>> together? By using our human ingenuity and our current understanding of the 
>> brain, maybe we could build a specialized but limited version of human 
>> intelligence, and then use it to create a general intelligence. Perhaps a truly 
>> general method for building human intelligence is a task belonging to a 
>> post-singularity world. How else could we overcome such a large problem 
>> space?
>>
>> What do you think? Is there any merit to this, or am I just not 
>> experienced enough? 
>> Maybe I should stop thinking so much and get coding.
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/c843cab6-d4d9-4642-9c97-59c2b160beddn%40googlegroups.com.
