You undervalue the degree to which research is an ideas market, Matt. This entire current AI boom is the result of one simple, universal breakthrough. Progress was flat for years before it (the winter), and has been flat since.
Of course "flat" is relative. The old single, universal breakthrough, HMMs, was applied with ever more power and crept forward in performance, successively touted as "solved". But it wasn't solved. It needed a new idea, as it still needs other ideas. Most development is a matter of having the resources to push ideas forward, it is true. But that doesn't mean ideas don't count.

Google's sudden embrace of neural nets around 2012 was a vindication for those of us who had argued for distributed representation for years. But Google didn't make it happen. It's only to their credit that they were the first to grab what others had done, once its potential became impossible to ignore.

There must be better ways to allocate resources to research. It's certainly a disgrace that the potential of back-propagation was finally realized only as something of a side effect, because people were playing a lot of computer games. Google was tooling around with Bayesian stats. It's not only cranks without resources who are wrong all the time: Google invested its resources poorly until the hard work on back-prop had been done. I well remember the first AI MOOC (2011?), in which distributed representation was never mentioned (for as long as I attended).

Since video games accidentally opened that door, corporate resources have dominated again, sure. But the problem isn't solved. Back-prop has been just one idea. We're waiting for the next accidental confluence of ideas and resources, to give those with resources their next cat-video moment. It's a pity we can't bring that confluence about more rapidly than the accidental, or at least optimize the accidental by making more bets. But as the example of video games and GPUs shows, the right resources are still mostly allocated only by accident. Poor allocation of resources to ideas is holding up all research, not only AI. Ideas matter.
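For readers who haven't seen it, the "one idea" above can be sketched in a few lines. This is a minimal, assumed toy illustration (not anyone's production system): a tiny 2-2-1 network trained by back-propagation on the OR function, in plain Python with no libraries. The network size, learning rate, and task are arbitrary choices for the sketch.

```python
# Minimal back-propagation sketch: a 2-input, 2-hidden, 1-output network
# trained by gradient descent on a toy task (learning OR of two bits).
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy data: OR of two bits.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# Weights: each hidden unit has 2 input weights + a bias; likewise the output.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(3)]
lr = 0.5

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    o = sigmoid(w2[0] * h[0] + w2[1] * h[1] + w2[2])
    return h, o

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

before = loss()
for _ in range(2000):
    for x, y in data:
        h, o = forward(x)
        # Backward pass: chain rule from the output error back to each weight.
        d_o = (o - y) * o * (1 - o)
        d_h = [d_o * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w2[j] -= lr * d_o * h[j]
        w2[2] -= lr * d_o
        for j in range(2):
            w1[j][0] -= lr * d_h[j] * x[0]
            w1[j][1] -= lr * d_h[j] * x[1]
            w1[j][2] -= lr * d_h[j]
after = loss()
print(before, after)
```

Running it, the squared-error loss drops substantially from its random-initialization value. The whole algorithm is a single application of the chain rule, which is exactly why GPUs built for games (fast, parallel multiply-accumulate) turned out to be the right hardware for it.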
-Rob

On Mon, Jul 15, 2019 at 9:00 AM Matt Mahoney <[email protected]> wrote:

> On Sat, Jul 13, 2019, 6:43 PM Basile Starynkevitch <[email protected]> wrote:
>
>> But you forgot the difference between AI & AGI.
>
> AGI is lots of narrow AI working together. It's not the simple, universal
> breakthrough you would like to have. It's the one we have to have because
> Legg proved that powerful predictors are necessarily complex.
> https://arxiv.org/abs/cs/0606070
>
> Google OCR and language translation makes mistakes, but it works better
> than last year and will work better next year. There isn't a point in time
> when we will have AGI because you can't compare human and machine
> intelligence.

Artificial General Intelligence List: AGI
