Hi Dave,

On Sat, Jul 18, 2020 at 10:57 PM Dave Xanatos <[email protected]> wrote:
> I appreciate your response. Honestly, I have been finding that as of
> late, I have been coming up against these "definition" issues with
> greater frequency. :-)
>
> I don't claim to be an expert. I'm probably – mostly – an idiot in this
> field, but I did figure out a way to discern if the utterance of a
> speaker (human) is a question or a statement, with 98.4% accuracy. It
> can even detect an indirect question as well as a direct question.

I'm not sure how to respond to that. It sounds like you are plugging a
product of some sort. These days, I am more interested in abstractions,
and so whenever I see something like "98.4% accurate" I quickly scurry in
the opposite direction. I'm pretty sure that no poet and no mathematician
ever claimed that their work was 98.4% accurate. (So, math is just
poetry; it's free verse that has to rhyme. 😃 See Lockhart's Lament for
details: https://www.maa.org/external_archive/devlin/LockhartsLament.pdf )

> I found … a pattern – in English speech – and I coded it.
>
> Can my robots **answer** the question when it is detected – mostly no.
> But they can identify the utterance better than anything I've tried
> from any other source.
>
> Beyond this, my last message to you regards recognizing patterns in
> data, the way scripts recognize patterns in an image. Often they are
> all just three-dimensional matrices.
>
> For example, would it be possible to "train" (hold on here… lol) a net
> to recognize a given data pattern, then have it look at different
> databases/data lakes/wads of random data… and recognize if that
> particular data pattern existed in those other regions?

Yeah, I think that was the claim that the neural-net folks started to
make in the 1980s (or earlier), and you can safely say that they have
fully made good on this claim – and if not, you haven't spent enough
time with TensorFlow.

> I apologize if I'm oversimplifying things.
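The question-vs-statement claim above can at least be illustrated. Dave's actual patterns (and the 98.4% figure) are not public, so everything below – the word lists, the cue phrases, the function name – is a guessed, toy stand-in for that kind of surface-pattern approach:

```python
import re

# Toy surface-pattern question detector. All patterns here are guesses,
# not Dave's method: an utterance is flagged as a question if it ends
# with "?", starts with a wh-word or an auxiliary verb (direct question),
# or opens with a common indirect-question cue phrase.
WH_WORDS = {"who", "what", "when", "where", "why", "how", "which"}
AUX_VERBS = {"is", "are", "was", "were", "do", "does", "did",
             "can", "could", "will", "would", "should", "shall",
             "may", "might"}
INDIRECT_CUES = ("i wonder", "do you know", "tell me", "i'd like to know")

def is_question(utterance: str) -> bool:
    if utterance.strip().endswith("?"):
        return True                       # explicit punctuation
    words = re.findall(r"[a-z']+", utterance.lower())
    if not words:
        return False
    if words[0] in WH_WORDS or words[0] in AUX_VERBS:
        return True                       # "where is ...", "can you ..."
    full = " ".join(words)
    return any(full.startswith(cue) for cue in INDIRECT_CUES)
```

A rule set this small obviously won't reach 98% on real speech; the point is only that the "find a pattern, code for it" approach is directly expressible.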
> I'm imagining that data structures would share a common architecture
> across disparate fields, and may be recognized in this manner.
>
> Again, I may be overstepping my experience, and I apologize for wasting
> your time if so. I have had some very rewarding experiences with
> language parsing/understanding based on coding for patterns that were
> simply determined from my own neurology/intuitions as a native speaker
> of the language I code in (English). My intuitions tell me that there
> are possibly ways to view data in the same way as we view images, and
> to recognize a "data image" in much the same way.
>
> You are correct, I believe, that neural nets can't spot patterns. But
> humans can. That seems to be one thing we do really well. But I believe
> if we feed neural nets examples of patterns, instead of things – can we
> come up with something new?
>
> I wish I had better words here. While I can see the structures I am
> referring to, I can't seem to really articulate them…

I think Richard Feynman had a quotable quote about exactly that.

> Feel free to tell me to go back to Comp Sci 101 if I am not offering
> anything here, I won't be offended 😊

One of the easiest ways of being offensive is to imply, even in a very
round-about fashion, that someone else's life's work is unimportant,
useless, or worse – as Hillary so finely put it, "deplorable". Good
thing we didn't elect her, eh? In science, athletics and Hollywood
stardom, it is very easy for one scientist/athlete/star to imply that
the work of another is wrong, trite, substandard, obvious, or "even a
child could do that". And there lies the fountainhead of rivalry.

> I love what you are all doing here. I spend a lot of time imagining
> cognitive architectures and methods of creating something akin to
> genuine "understanding" in a coded base….
Well, when you say "cognitive architecture", I guess you mean this:
https://xanatos.com/airobotics.asp and in that sense, I think that this
is also where Ben started out, long ago (and I think he is still there,
to some degree) – that, if you just wire together some pieces-parts in
some clever way, the way you might build an automobile or an airplane,
it will just locomote or fly (or, in this case, "think"). I've forged
some kind of working relationship with Ben, and others, even though I
whole-heartedly reject the fundamental cornerstone, the idea behind
"cognitive architecture". It's a little bit like trying to build an atom
bomb by trying to "reverse-engineer" how to pack 100 tonnes of TNT into
a box no bigger than a basketball. To me, it's a misperception of the
task at hand. It's Sutton's bitter lesson, in a somewhat more disguised
form.

--linas

> Feel free to tell me to shut up and leave you alone 😊
>
> Dave
>
> From: [email protected] <[email protected]> On Behalf Of
> Linas Vepstas
> Sent: Saturday, July 18, 2020 8:26 PM
> To: opencog <[email protected]>
> Cc: link-grammar <[email protected]>
> Subject: Re: [opencog-dev] Re: [Link Grammar] Sutton's bitter lesson
>
> The word "training" is problematic. If you mean "memorize an
> association list of pairs" (e.g. faces + text strings), well,
> technically that is "training" in the AI jargon file, but it's of
> little utility for AGI.
>
> The word "pattern" is problematic. Exactly what a "pattern" is, is ...
> tricky. Much (most? almost all?) of my effort is about trying to define
> "what is a pattern, anyway". I'm not sure what you had in mind when you
> used that word. (It's a tricky word. Everyone obviously knows what it
> means, but how to turn it into an algorithmically graspable "thing"?)
>
> --linas
>
> On Sat, Jul 18, 2020 at 6:44 PM Dave Xanatos <[email protected]> wrote:
>
> "If you can't spot the pattern, you've not accomplished anything."
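The "memorize an association list of pairs" sense of "training" quoted above is easy to make concrete. A minimal sketch (the function names are illustrative, not from any library) of why pure memorization is of little utility: the "model" can only echo what it has literally seen.

```python
# "Training" as pure memorization: the model is just an association
# list (here, a dict) of input/label pairs. It can recall, but it
# cannot generalize -- any input it has not literally seen before
# falls through to the default.
def train(pairs):
    return dict(pairs)            # "learning" = storing the pairs

def predict(model, x, default=None):
    return model.get(x, default)  # no generalization to unseen inputs

faces = train([("alice.jpg", "Alice"), ("bob.jpg", "Bob")])
```

Spotting a *pattern*, by contrast, would mean producing something that answers sensibly for inputs outside the memorized list – which is exactly what this sketch cannot do.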
> Every significant – and truly useful – advance I've made on my own
> language apprehension code has been based on recognizing a pattern,
> and coding for it. I fully agree.
>
> Can a neural network be trained on patterns instead of things?
>
> Can code designed to recognize – for example, faces (like eigenfaces)
> – be trained to instead recognize blocks of data that look the same,
> despite perhaps being in vastly dissimilar fields?
>
> Apologies if I'm intruding, or seem to be "out of my lane"… a popular
> buzzword these days.
>
> Dave – LONG time lurker…
>
> From: [email protected] <[email protected]> On Behalf Of
> Linas Vepstas
> Sent: Saturday, July 18, 2020 6:54 PM
> To: link-grammar <[email protected]>; opencog
> <[email protected]>
> Subject: [opencog-dev] Re: [Link Grammar] Sutton's bitter lesson
>
> Well yes. What's truly remarkable is how frequently that lesson has to
> be re-learned. There are vast swaths of the AI industry that still
> have not learned it, and are deluding themselves into thinking that
> they've made bold progress, when they've gotten nowhere at all, and
> seem blithely unaware that they are repeating the same mistake...
> again.
>
> I refer, of course, to the deep-learning true believers. They have
> made the fundamental mistake of thinking that their various network
> designs provide an adequate representation of reality. How little do
> they seem to realize that all that code, running hand-tuned on some
> GPU, is just, and I quote Sutton here, "leveraged human understanding
> of the special structure of chess". Except cross out "chess" and
> replace it with "dimensional reduction" or "weight vector" or whatever
> buzzword-bingo is popular in the deep-learning field these days.
>
> I'm back again to insisting that "patterns matter". If you can't spot
> the pattern, you've not accomplished anything. Neural nets can't spot
> patterns.
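As a toy stand-in for the eigenfaces question above: eigenfaces flatten face images into vectors and compare them in a shared space, and nothing stops one from doing the same to arbitrary numeric blocks of data. This sketch skips the PCA step entirely and just z-scores each block and compares them by cosine similarity – a crude "data image" matcher; whether such matches mean anything across vastly dissimilar fields is exactly the open question in this thread.

```python
import math

# Flatten a 2-D block of numbers, then z-score it so that scale and
# offset (e.g. different units in different fields) don't matter --
# only the "shape" of the data survives.
def zscore(block):
    flat = [x for row in block for x in row]
    mean = sum(flat) / len(flat)
    var = sum((x - mean) ** 2 for x in flat) / len(flat)
    sd = math.sqrt(var) or 1.0   # guard against constant blocks
    return [(x - mean) / sd for x in flat]

# Cosine similarity of two z-scored blocks: +1 means same shape,
# -1 means opposite shape, near 0 means unrelated.
def similarity(block_a, block_b):
    a, b = zscore(block_a), zscore(block_b)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

For example, a rising ramp measured in one unit system matches the same ramp measured in another, while a reversed ramp scores strongly negative.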
> They're certainly interesting for various reasons, but, as an AGI
> technology, they are every bit as much a dead end as the hand-crafted
> English link-grammar dictionary.
>
> This is one reason I'm sort of plinking away, working on unfashionable
> things. I'm thinking simply that they are more generic, and more
> powerful. But perhaps the problem is recursive: perhaps I'm just
> "leveraging my human understanding of the special structure of
> patterns", and will hit a wall someday. For now, it seems that my wall
> is more distant. If only I could convince others ...
>
> --linas
>
> On Sat, Jul 18, 2020 at 5:14 PM Paul McQuesten <[email protected]> wrote:
>
> Linas,
>
> I think this reinforces your view of learning from data, instead of
> adding more human-curated rules:
> http://incompleteideas.net/IncIdeas/BitterLesson.html
>
> --
> You received this message because you are subscribed to the Google
> Groups "link-grammar" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/link-grammar/464d1f92-00b7-4780-870a-2156229b4567o%40googlegroups.com
>
> --
> Verbogeny is one of the pleasurettes of a creatific thinkerizer.
> --Peter da Silva
