I appreciate your response.  Honestly, I have been running into these 
"definition" issues with increasing frequency lately.

 

I don't claim to be an expert.  I'm probably, mostly, an idiot in this field, 
but I did figure out a way to discern whether the utterance of a (human) speaker 
is a question or a statement, with 98.4% accuracy.  It can even detect an 
indirect question as well as a direct question.

 

I found … a pattern in English speech, and I coded it.
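By way of illustration, here is a stripped-down toy version of the surface-cue approach (not my actual detector, which handles far more cases; every word list and name below is invented for the sketch): look for a fronted wh-word or auxiliary for direct questions, and for cue verbs like "wonder" followed by "if"/"whether" for indirect ones.

```python
# Toy sketch of surface-pattern question detection.  Word lists are
# illustrative only; a real detector needs far more coverage.
WH_WORDS = {"who", "what", "when", "where", "why", "how", "which", "whose"}
AUXILIARIES = {"do", "does", "did", "is", "are", "was", "were", "can",
               "could", "will", "would", "should", "shall", "may",
               "might", "have", "has"}
INDIRECT_CUES = {"wonder", "wondering", "ask", "asking", "tell", "know"}

def classify_utterance(text):
    """Return 'question' or 'statement' from surface cues alone."""
    words = text.lower().strip("?!. ").split()
    if not words:
        return "statement"
    # Direct question: fronted wh-word or fronted auxiliary/modal.
    if words[0] in WH_WORDS or words[0] in AUXILIARIES:
        return "question"
    # Indirect question: a cue verb followed shortly by if/whether/wh-word,
    # e.g. "I wonder if it will rain", "tell me where you went".
    for i, w in enumerate(words[:-1]):
        if w in INDIRECT_CUES and any(
                x in ("if", "whether") or x in WH_WORDS
                for x in words[i + 1:i + 4]):
            return "question"
    return "statement"
```

Obviously this catches only the easy cases; the point is just that a handful of surface patterns gets you surprisingly far before you need anything statistical.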

 

Can my robots *answer* the question once it is detected?  Mostly no.  But they 
can identify the utterance better than anything I've tried from any other 
source.

 

Beyond this, my last message to you was about recognizing patterns in data, the 
way scripts recognize patterns in an image.  Often they are all just 
three-dimensional matrices.

 

For example, would it be possible to "train" (hold on here… lol) a net to 
recognize a given data pattern, then have it look at different databases/data 
lakes/wads of random data… and recognize whether that particular data pattern 
exists in those other regions?
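To make the idea concrete, here is a minimal toy sketch of what I'm picturing, treating fixed-size windows of numeric data like tiny "images": average a few examples into a template, then slide it across unknown data and flag windows that correlate strongly with it.  All names, sizes, and thresholds here are invented for illustration.

```python
import numpy as np

def fit_template(examples):
    """Average several example windows into a single template pattern."""
    return np.mean(np.stack(examples), axis=0)

def scan_for_pattern(template, data, threshold=0.9):
    """Slide the template across a 1-D data stream; return start indices
    of windows whose correlation with the template exceeds threshold."""
    w = len(template)
    hits = []
    for start in range(len(data) - w + 1):
        window = data[start:start + w]
        # Normalized correlation as a crude similarity score.
        if np.corrcoef(template, window)[0, 1] > threshold:
            hits.append(start)
    return hits

# Demo: hide the learned pattern inside random noise, then find it.
rng = np.random.default_rng(0)
pattern = np.arange(10.0)
template = fit_template([pattern + rng.normal(0, 0.1, 10) for _ in range(5)])
data = rng.normal(0, 1, 200)
data[50:60] = pattern          # bury the pattern at position 50
hits = scan_for_pattern(template, data, threshold=0.95)
```

This is just template matching, of course; the "train a net" version would swap the correlation step for a learned classifier, but the slide-and-flag structure would stay the same.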

 

I apologize if I'm oversimplifying things.  I'm imagining that data structures 
would share a common architecture across disparate fields, and may be 
recognized in this manner.

 

Again, I may be overstepping my experience, and I apologize for wasting your 
time if so.   I have had some very rewarding experiences with language 
parsing/understanding based on coding for patterns that were simply determined 
from my own neurology/intuitions as a native speaker of the language I code in 
(English).  My intuitions tell me that there are possibly ways to view data in 
the same way as we view images, and to recognize a "data image" in much the 
same way.

 

You are correct, I believe, that neural nets can't spot patterns.  But humans 
can.  That seems to be one thing we do really well.  But if we feed neural 
nets examples of patterns, instead of things, could we come up with something 
new?

 

I wish I had better words here.  While I can see the structures I am referring 
to, I can't seem to really articulate them…

 

Feel free to tell me to go back to Comp Sci 101 if I am not offering anything 
here, I won't be offended 😊

 

I love what you are all doing here.  I spend a lot of time imagining cognitive 
architectures and methods of creating something akin to genuine "understanding" 
in a code base…

 

Feel free to tell me to shut up and leave you alone 😊

 

Dave

 

 

 

From: [email protected] <[email protected]> On Behalf Of Linas 
Vepstas
Sent: Saturday, July 18, 2020 8:26 PM
To: opencog <[email protected]>
Cc: link-grammar <[email protected]>
Subject: Re: [opencog-dev] Re: [Link Grammar] Sutton's bitter lesson

 

The word "training" is problematic. If you mean "memorize an association list 
of pairs" (e.g. faces + text strings), well, technically that is "training" in 
the AI jargon file, but it's of little utility for AGI.

 

The word "pattern" is problematic. Exactly what a "pattern" is, is ... tricky. 
Much (most? almost all?) of my effort is about trying to define "what is a 
pattern, anyway". I'm not sure what you had in mind when you used that word.  
(It's a tricky word. Everyone obviously knows what it means, but how to turn it 
into an algorithmically graspable "thing"?)

 

--linas

 

On Sat, Jul 18, 2020 at 6:44 PM Dave Xanatos <[email protected]> wrote:

"If you can't spot the pattern, you've not accomplished anything."

 

Every significant, and truly useful, advance I've made in my own language 
apprehension code has been based on recognizing a pattern, and coding for it.  
I fully agree.

 

Can a neural network be trained on patterns instead of things?

 

Can code designed to recognize, for example, faces (like eigenfaces) be 
trained instead to recognize blocks of data that look the same, despite perhaps 
being in vastly dissimilar fields?
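To sketch what I'm imagining: the eigenfaces recipe is essentially PCA on flattened images, so in principle the same few lines run on any flattened numeric blocks.  A toy illustration, with every name, dimension, and tolerance invented for the sketch:

```python
import numpy as np

def eigen_patterns(blocks, k=3):
    """Learn a k-dimensional 'eigenpattern' basis from flattened data
    blocks -- the same recipe eigenfaces applies to face images."""
    X = np.stack([np.ravel(b) for b in blocks]).astype(float)
    mean = X.mean(axis=0)
    # Principal components via SVD of the centered data.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def looks_familiar(block, mean, basis, tol=0.3):
    """A block 'matches' if the learned subspace reconstructs it well,
    i.e. the residual is small relative to the block itself."""
    x = np.ravel(block).astype(float) - mean
    coords = basis @ x                  # project into the subspace
    residual = x - basis.T @ coords     # what the subspace can't explain
    return np.linalg.norm(residual) < tol * np.linalg.norm(x)

# Demo: blocks drawn from a hidden 2-D structure are recognized,
# unrelated random blocks are not.
rng = np.random.default_rng(1)
p1, p2 = rng.normal(size=(2, 16))
blocks = [rng.normal() * p1 + rng.normal() * p2 for _ in range(20)]
mean, basis = eigen_patterns(blocks, k=2)
```

Whether real databases actually share that kind of low-dimensional structure across fields is exactly the open question, but the mechanics don't care what the numbers mean.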

 

Apologies if I'm intruding, or seem to be "out of my lane"… a popular buzzword 
these days.

 

Dave – LONG time lurker…

 

 

 

From: [email protected] <mailto:[email protected]>  
<[email protected] <mailto:[email protected]> > On Behalf Of 
Linas Vepstas
Sent: Saturday, July 18, 2020 6:54 PM
To: link-grammar <[email protected] 
<mailto:[email protected]> >; opencog <[email protected] 
<mailto:[email protected]> >
Subject: [opencog-dev] Re: [Link Grammar] Sutton's bitter lesson

 

Well yes. What's truly remarkable is how frequently that lesson has to be 
re-learned.  There are vast swaths of the AI industry that still have not 
learned it, and are deluding themselves into thinking that they've made bold 
progress, when they've gotten nowhere at all, and seem blithely unaware that 
they are repeating the same mistake... again.

 

I refer, of course, to the deep-learning true-believers. They have made the 
fundamental mistake of thinking that their various network designs provide an 
adequate representation of reality.  How little do they seem to realize that 
all that code, running hand-tuned on some GPU, is just, and I quote Sutton 
here: "leveraged human understanding of the special structure of chess". 
Except, cross out "chess" and replace with "dimensional reduction" or "weight 
vector" or whatever buzzword-bingo is popular in the deep-learning field these 
days.

 

I'm back again to insisting that "patterns matter". If you can't spot the 
pattern, you've not accomplished anything. Neural nets can't spot patterns. 
They're certainly interesting for various reasons, but, as an AGI technology, 
they are every bit as much a dead end as the hand-crafted English link-grammar 
dictionary.

 

This is one reason I'm sort of plinking away, working on unfashionable things. 
I'm thinking simply that they are more generic, and more powerful.  But perhaps 
the problem is recursive: perhaps I'm just "leveraging my human understanding 
of the special structure of patterns", and will hit a wall someday.  For now, 
it seems that my wall is more distant.  If only I could convince others ...

 

--linas

 

 

On Sat, Jul 18, 2020 at 5:14 PM Paul McQuesten <[email protected]> wrote:

Linas,

 

I think this reinforces your view of learning from data, instead of adding more 
human-curated rules:

http://incompleteideas.net/IncIdeas/BitterLesson.html

-- 
You received this message because you are subscribed to the Google Groups "link-grammar" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/link-grammar/464d1f92-00b7-4780-870a-2156229b4567o%40googlegroups.com.



-- 

Verbogeny is one of the pleasurettes of a creatific thinkerizer.
        --Peter da Silva

-- 
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/CAHrUA36x8QBXGUg4f9BMw5StdhRu1WFjFr_9ySo_vZesMeZrTA%40mail.gmail.com.




