This came up on Twitter:

Deep Learning’s Uncertainty Principle
Carlos E. Perez
https://medium.com/intuitionmachine/deep-learnings-uncertainty-principle-13f3ffdd15ce

An uncertainty principle for grammar. What I've been arguing for 20 years!

Posting it here now because it appears to me to be the argument I was
making here last year with Linas Vepstas about the learnability of grammar.
Linas had written a paper trying to explain how network representations
differ from learned grammar. He observed in that paper that his
representations had "gauge", but dismissed it as unimportant. He concluded
that learned grammar has power equivalent to network representation, and so
that work should proceed by learning grammar alone.

I argue the success of DL has been because it captures part of this
"uncertainty principle" duality.

And where DL fails, and what we are still missing, is that to fully
capture this duality we need to make the network generative/dynamic. That
is what I have been arguing here more recently: the parameters contradict
one another, and their number expands.

-Rob

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1b8c0c3b7933a51f-M8a7fd5da725aae0d02980c40
Delivery options: https://agi.topicbox.com/groups/agi/subscription
