This is a very interesting conversation. Thank you all for sharing your insights. Perhaps I could add another thing to consider: maybe humans don't have what I would call general pattern recognizers. Perhaps our ability to recognize patterns comes from pattern-recognizing modules specialized for particular domains (i.e., visual, time-frequency, audio, conceptual, etc.), created through evolution. This would be similar to a pattern recognizer 'trained' as a neural network, lacking generality. Perhaps there are patterns in the world we lack the mental faculty to detect. In that case, pattern recognition would be developed in the evolutionary domain rather than the learning domain.
Now, even if this is the case for biological brains, it doesn't mean it is impossible to develop a general pattern recognizer: one that can be fed multi-dimensional data (images, audio, internal neural processes, etc.) and recognize similarities/patterns within that data, all developed during learning. Generally, when we recognize a pattern, we've built a model of the pattern and we're able to make predictions with it. The thing I can't wrap my head around is: what kind of structure/architecture or mathematical model would be able to do the recognition?

Regards,
Jose Ignacio Rodriguez-Labra

On Saturday, July 18, 2020 at 8:26:38 PM UTC-4, linas wrote:
>
> The word "training" is problematic. If you mean "memorize an association
> list of pairs" (e.g. faces + text strings), well, technically that is
> "training" in the AI jargon file, but it's of little utility for AGI.
>
> The word "pattern" is problematic. Exactly what a "pattern" is, is ...
> tricky. Much (most? almost all?) of my effort is about trying to define
> "what is a pattern, anyway?" I'm not sure what you had in mind when you
> used that word. (It's a tricky word. Everyone obviously knows what it
> means, but how to turn it into an algorithmically graspable "thing"?)
>
> --linas
>
> On Sat, Jul 18, 2020 at 6:44 PM Dave Xanatos <[email protected]> wrote:
>
>> "If you can't spot the pattern, you've not accomplished anything."
>>
>> Every significant – and truly useful – advance I've made on my own
>> language apprehension code has been based on recognizing a pattern, and
>> coding for it. I fully agree.
>>
>> Can a neural network be trained on patterns instead of things?
>>
>> Can code designed to recognize – for example, faces (like eigenfaces) –
>> be trained to instead recognize blocks of data that look the same, despite
>> perhaps being in vastly dissimilar fields?
>>
>> Apologies if I'm intruding, or seem to be "out of my lane"… a popular
>> buzzword these days.
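[Editor's note: Dave's eigenfaces question can be made concrete. Eigenfaces is just PCA on vectorized images, and nothing in it is face-specific, so the same machinery can be pointed at any equal-sized blocks of data. A minimal sketch, assuming NumPy; the data and function names here are purely illustrative, not from any poster's code:]

```python
import numpy as np

def fit_eigenpatterns(blocks, k=2):
    """PCA via SVD on rows of `blocks` (each row = one flattened data block).
    Returns the mean block and the top-k principal components ("eigenpatterns")."""
    mean = blocks.mean(axis=0)
    centered = blocks - mean
    # Rows of vt are the principal directions in block space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(block, mean, components):
    """Coordinates of a block in eigenpattern space; similar blocks land nearby."""
    return components @ (block - mean)

# Synthetic "blocks": noisy copies of two underlying patterns.
rng = np.random.default_rng(0)
p1 = np.sin(np.linspace(0, 4, 16))
p2 = np.cos(np.linspace(0, 4, 16))
blocks = np.array([p + 0.05 * rng.standard_normal(16)
                   for p in [p1, p2] * 10])

mean, comps = fit_eigenpatterns(blocks, k=2)
a = project(p1, mean, comps)
b = project(p2, mean, comps)
# Blocks drawn from the same underlying pattern project close together;
# blocks from different patterns project far apart.
print(np.linalg.norm(a - b) > 1.0)
```

Whether the blocks come from images, audio, or anything else, only the flattening step changes, which is the sense in which the technique is field-agnostic. Of course, in Linas's terms this still dodges the hard part: PCA finds directions of variance, not "patterns" in any deeper sense.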
>>
>> Dave – LONG time lurker…
>>
>> *From:* [email protected] <[email protected]> *On Behalf Of* Linas Vepstas
>> *Sent:* Saturday, July 18, 2020 6:54 PM
>> *To:* link-grammar <[email protected]>; opencog <[email protected]>
>> *Subject:* [opencog-dev] Re: [Link Grammar] Sutton's bitter lesson
>>
>> Well yes. What's truly remarkable is how frequently that lesson has to be
>> re-learned. There are vast swaths of the AI industry that still have not
>> learned it, and are deluding themselves into thinking that they've made
>> bold progress when they've gotten nowhere at all, and seem blithely
>> unaware that they are repeating the same mistake... again.
>>
>> I refer, of course, to the deep-learning true believers. They have made
>> the fundamental mistake of thinking that their various network designs
>> provide an adequate representation of reality. Little do they seem to
>> realize that all that code, running hand-tuned on some GPU, is just, and I
>> quote Sutton here, "leveraged human understanding of the special
>> structure of chess". Except cross out "chess" and replace it with
>> "dimensional reduction" or "weight vector" or whatever buzzword bingo is
>> popular in the deep-learning field these days.
>>
>> I'm back again to insisting that "patterns matter". If you can't spot the
>> pattern, you've not accomplished anything. Neural nets can't spot patterns.
>> They're certainly interesting for various reasons, but, as an AGI
>> technology, they are every bit as much of a dead end as the hand-crafted
>> English link-grammar dictionary.
>>
>> This is one reason I'm sort of plinking away, working on unfashionable
>> things. I'm thinking simply that they are more generic, and more powerful.
>> But perhaps the problem is recursive: perhaps I'm just "leveraging my human
>> understanding of the special structure of patterns", and will hit a wall
>> someday.
>> For now, it seems that my wall is more distant. If only I could
>> convince others ...
>>
>> --linas
>>
>> On Sat, Jul 18, 2020 at 5:14 PM Paul McQuesten <[email protected]> wrote:
>>
>> Linas,
>>
>> I think this reinforces your view of learning from data, instead of
>> adding more human-curated rules:
>> http://incompleteideas.net/IncIdeas/BitterLesson.html
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "link-grammar" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [email protected].
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/link-grammar/464d1f92-00b7-4780-870a-2156229b4567o%40googlegroups.com
>>
>> --
>> Verbogeny is one of the pleasurettes of a creatific thinkerizer.
>> --Peter da Silva
