On Mon, Jun 19, 2017 at 2:11 PM, Hugo Latapie (hlatapie) <[email protected]> wrote:
> *arXiv:1702.00764*

I've just barely started reading that, and from the very beginning it's eminently clear how even the latest, leading research on deep neural nets is profoundly ignorant of grammar and semantics. Which I think is another reason the direction we're on is so promising: apparently, just about exactly zero of the researchers in one area are aware of the theory and results of the other. Which I guess is a good thing for me.

But it's really, really hard to read that paper and not want to scream at the top of my lungs, "Those ding-a-lings, don't they know about result xyz? What's wrong with them? Are they all ignorami?" And yet it seems to be a giant pyramid of results built on results, demonstrating a lack of knowledge and education about form and structure. So it's a bit hard to take seriously; and yet everyone who is interested in deep learning seems to be doing just that. It's remarkable.

--linas
