Another voice from a long-time lurker: patterns are only relevant if they can be recognised by humans. Some can, others can't. E.g. I don't think humans are good at recognising that cba is the reverse of abc, but I imagine a machine might well pick it up. So maybe you need a mixture of hand-crafting for the pattern types and uncontrolled searching for those patterns.
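(To make that concrete, here is a toy sketch of the machine side: a small hand-crafted set of pattern types, searched mechanically over string pairs. The relation names and the example pairs are invented purely for illustration.)

```python
# Toy sketch: hand-craft the pattern *types*, then let the machine do
# the uncontrolled searching over data. Relations and data are invented.

CANDIDATE_RELATIONS = {
    "identity": lambda a, b: a == b,
    "reverse":  lambda a, b: a == b[::-1],
    "rotation": lambda a, b: len(a) == len(b) and b in a + a,
}

def spot_relation(pairs):
    """Return the names of candidate relations that hold for every pair."""
    return [name for name, rel in CANDIDATE_RELATIONS.items()
            if all(rel(a, b) for a, b in pairs)]

pairs = [("abc", "cba"), ("stressed", "desserts")]
print(spot_relation(pairs))  # ['reverse']
```

The point being that "reverse" is trivial for the machine to confirm once it is in the candidate set, even though humans rarely notice it unaided.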

One refinement of this idea is that purely reactive learning could turn into proactive learning as you learn what to expect. In language, you learn that /big/ often stands before /book/, and that words which in other ways are like /big/ often stand before words that are otherwise like /book/, so you generalise to adjectives standing before nouns; then whenever you hit a word which you think is an adjective, you actively _look for_ its noun - a very different learning strategy from purely statistical learning. These expectations gradually turn into a grammar, where you can talk about dependencies and dependency types (e.g. subjects versus objects). I know that sounds like hand-crafting creeping back in through the back door, but all that's hand-crafted is your initial set of pattern types.
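(A crude sketch of that progression, with the word classes simply given by hand as a stand-in for whatever unsupervised clustering would discover them; the corpus and classes are invented for illustration.)

```python
from collections import Counter

# Stage 1 (reactive): tally which words stand before which, from raw text.
corpus = "the big book sat near a big red book".split()
bigrams = Counter(zip(corpus, corpus[1:]))

# Stage 2 (generalise): words with similar distributions get grouped into
# classes. Here the classes are given by hand, standing in for an
# unsupervised clustering step.
word_class = {"big": "ADJ", "red": "ADJ", "book": "NOUN", "sat": "VERB"}

# Stage 3 (proactive): on seeing an adjective, actively look ahead for
# its noun, instead of merely recording co-occurrence counts.
def expect_noun(sentence):
    words = sentence.split()
    links = []
    for i, w in enumerate(words):
        if word_class.get(w) == "ADJ":
            for j in range(i + 1, len(words)):
                if word_class.get(words[j]) == "NOUN":
                    links.append((w, words[j]))
                    break
    return links

print(expect_noun("a big red book"))  # [('big', 'book'), ('red', 'book')]
```

The expectation in stage 3 is exactly what turns into a dependency link; the only hand-crafted part is the initial repertoire of pattern types.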

Best wishes for your thinking. Dick

On 19/07/2020 01:26, Linas Vepstas wrote:
The word "training" is problematic. If you mean "memorize an association list of pairs" (e.g. faces + text strings), well, technically that is "training" in the AI jargon file, but it's of little utility for AGI.

The word "pattern" is problematic. Exactly what a "pattern" is, is ... tricky. Much (most? almost all?) of my effort is about trying to define "what is a pattern, anyway?". I'm not sure what you had in mind when you used that word. (It's a tricky word. Everyone obviously knows what it means, but how do you turn it into an algorithmically graspable "thing"?)
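(One crude operational proxy, offered only as an illustration and not as anyone's actual definition: call something a pattern if it lets you describe the data more briefly than listing it. A general-purpose compressor then gives a blunt detector.)

```python
# Blunt "is there a pattern here?" proxy: data with structure compresses;
# data without structure does not. Sample data is invented.
import os
import zlib

def compressibility(data: bytes) -> float:
    """Ratio of compressed size to raw size; well below 1.0 hints at pattern."""
    return len(zlib.compress(data)) / len(data)

patterned = b"abc" * 10          # obvious repetition
random_ish = os.urandom(30)      # no structure to exploit

print(compressibility(patterned))   # small: lots of structure
print(compressibility(random_ish))  # near (or above) 1.0: little structure
```

This is far too blunt to be the real answer, but it shows what "algorithmically graspable" could even mean here.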

--linas

On Sat, Jul 18, 2020 at 6:44 PM Dave Xanatos <[email protected]> wrote:

    "If you can't spot the pattern, you've not accomplished anything."

    Every significant (and truly useful) advance I've made on my own
    language-apprehension code has been based on recognizing a
    pattern, and coding for it. I fully agree.

    Can a neural network be trained on patterns instead of things?

    Can code designed to recognize, for example, faces (like
    eigenfaces) be trained instead to recognize blocks of data that
    look the same, despite perhaps coming from vastly dissimilar fields?

    Apologies if I'm intruding, or seem to be "out of my lane"… a
    popular buzzword these days.

    Dave – LONG time lurker…

    From: [email protected] On Behalf Of Linas Vepstas
    Sent: Saturday, July 18, 2020 6:54 PM
    To: link-grammar <[email protected]>; opencog <[email protected]>
    Subject: [opencog-dev] Re: [Link Grammar] Sutton's bitter lesson

    Well yes. What's truly remarkable is how frequently that lesson
    has to be re-learned. There are vast swaths of the AI industry
    that still have not learned it, and are deluding themselves into
    thinking that they've made bold progress, when they've gotten
    nowhere at all, and seem blithely unaware that they are repeating
    the same mistake... again.

    I refer, of course, to the deep-learning true-believers. They have
    made the fundamental mistake of thinking that their various
    network designs provide an adequate representation of reality.
    They seem scarcely to realize that all that code, running
    hand-tuned on some GPU, is just, to quote Sutton, "leveraged
    human understanding of the special structure of chess". Except
    cross out "chess" and replace it with "dimensional reduction" or
    "weight vector" or whatever buzzword-bingo is popular in the
    deep-learning field these days.

    I'm back again to insisting that "patterns matter". If you can't
    spot the pattern, you've not accomplished anything. Neural nets
    can't spot patterns. They're certainly interesting for various
    reasons, but, as an AGI technology, they are every bit as much a
    dead end as the hand-crafted English link-grammar dictionary.

    This is one reason I'm sort of plinking away, working on
    unfashionable things. I'm thinking simply that they are more
    generic, and more powerful. But perhaps the problem is recursive:
    perhaps I'm just "leveraging my human understanding of the special
    structure of patterns", and will hit a wall someday.  For now, it
    seems that my wall is more distant.  If only I could convince
    others ...

    --linas

    On Sat, Jul 18, 2020 at 5:14 PM Paul McQuesten <[email protected]> wrote:

        Linas,

        I think this reinforces your view of learning from data,
        instead of adding more human-curated rules:

        http://incompleteideas.net/IncIdeas/BitterLesson.html

        --
        You received this message because you are subscribed to the Google Groups "link-grammar" group.
        To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
        To view this discussion on the web visit https://groups.google.com/d/msgid/link-grammar/464d1f92-00b7-4780-870a-2156229b4567o%40googlegroups.com.



--
    Verbogeny is one of the pleasurettes of a creatific thinkerizer.
            --Peter da Silva





--
Verbogeny is one of the pleasurettes of a creatific thinkerizer.
        --Peter da Silva


--
Richard Hudson (dickhudson.com)




