On Sun, Feb 24, 2019 at 6:34 PM Rob Freeman <[email protected]>
wrote:

>
> Where I see the gap might be summarized with one statement by Linas:
>
> LV, Feb 9: "If you are picky, you can have several different kinds of
> common nouns, but however picky you are, you need fewer classes than there
> are words."
>
> "...however picky you are, you need fewer classes than there are words..."
>
> "Fewer classes"?
>
> How do we know that is true?
>

Quoting out of context is dangerous and can lead to misunderstanding. We
know it's true because Noah Webster pointed it out in 1806, and pretty much
everyone has agreed with him ever since.
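
If you'd rather not take Webster's word for it, a few lines of Python
against any tagged corpus will check it. This sketch assumes NLTK with the
Brown corpus downloaded (my choice of corpus, not anything from this
thread); exact counts will vary by tagset, but the ratio won't:

# Count distinct word types vs. distinct part-of-speech classes.
# Assumes: pip install nltk, then nltk.download('brown').
from nltk.corpus import brown

words, classes = set(), set()
for word, tag in brown.tagged_words():
    words.add(word.lower())
    classes.add(tag)

print("distinct word types:", len(words))     # tens of thousands
print("distinct POS classes:", len(classes))  # a few hundred

On Brown that comes out to a few hundred classes against tens of thousands
of word types: orders of magnitude fewer classes than words, however picky
the tagset.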


> Experimentally, that language "gauge" is not pointless, was observed in
> the results for the distributional learning of phonemes I cited, dating
> back 60 years, to the time Chomsky used it to destroy distributional
> learning as the central paradigm for linguistics.
>

I doubt that the use of the word "gauge" there has anything at all to do
with the "gauge" of "gauge theory". In physics, "gauge" has a very
specific and precise meaning; I suspect the word is being abused here.


> Sergio Pissanetzky comes close to this, with his analysis in terms of
> permutations, also resulting in a network:
>
> "Structural Emergence in Partially Ordered Sets is the Key to
> Intelligence"
> http://sergio.pissanetzky.com/Publications/AGI2011.pdf

I'll look. Is it actually that good?
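
For anyone else skimming the paper: I can't yet vouch that this matches
Pissanetzky's formalism, but the basic object appears to be a partially
ordered set whose consistent orderings (its linear extensions) play the
role of the permutations. A toy sketch of just that object, nothing more:

# Toy sketch (not Pissanetzky's algorithm): enumerate the linear
# extensions of a small partial order. Each extension is one
# permutation consistent with the causal constraints.
from itertools import permutations

# Precedence constraints: (a, b) means a must come before b.
constraints = [("fetch", "parse"), ("parse", "count"), ("init", "count")]
elements = {x for pair in constraints for x in pair}

def respects(order, constraints):
    pos = {x: i for i, x in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in constraints)

for p in permutations(elements):
    if respects(p, constraints):
        print(p)  # brute force is fine at this size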


>
>   Currently they are stumbling at the "clustering" step
>
> Sure they will!
>

It's more mundane than that. They haven't bothered with some basic steps of
data analysis. Getting them to actually filter the junk out of the valid
data is the proverbial pulling of teeth: they don't want to do it, insist
it'll be fine, and then complain about the results. It's ordinary lab
science, repeated around the world dozens of times a day: experiments fail
because of contaminants and interference.
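
To make "filter out the junk" concrete, here's the flavor of the missing
step, in toy form (the data and the cutoff below are made up; the point is
that the cutoff exists at all):

# Pre-clustering hygiene: drop low-count observations before computing
# statistics. Counts of 1 or 2 are dominated by tokenization errors,
# markup fragments, and OCR debris, and they poison the clustering.
from collections import Counter

pair_counts = Counter({
    ("the", "cat"): 412, ("cat", "sat"): 87,
    ("sat", "xq7"): 1, ("zzfl", "the"): 2,  # junk observations
})

MIN_COUNT = 5  # hypothetical cutoff; tune per corpus
clean = {p: n for p, n in pair_counts.items() if n >= MIN_COUNT}
print(clean)  # only the well-attested pairs survive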


>
>
> Looking up what you are doing with your "MST" parser. You start by
> clustering links, starting with mutual information.
>

It's just one lens in a microscope assembly. It does a certain task, and it
does it OK-ish; what matters is its role in the entire assembly, not its
behavior as a stand-alone component. Too many people obsess over it as a
stand-alone component.
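
For readers who haven't seen the pipeline: that lens scores word pairs by
mutual information and then parses a sentence by taking a maximum spanning
tree over those scores (Yuret-style lexical attraction). The sketch below
is mine, not the project's actual code, and it ignores details like
projectivity and smoothing:

import math
from collections import Counter

def pmi(pair_counts, word_counts, total):
    """Pointwise mutual information for each observed word pair."""
    scores = {}
    for (a, b), n in pair_counts.items():
        scores[(a, b)] = math.log2(
            (n / total) / ((word_counts[a] / total) * (word_counts[b] / total)))
    return scores

def mst_parse(words, score):
    """Prim-style maximum spanning tree over sentence positions,
    with PMI as the edge weight (0.0 for unseen pairs)."""
    def w(i, j):
        a, b = words[i], words[j]
        return score.get((a, b), score.get((b, a), 0.0))
    in_tree, links = {0}, []
    while len(in_tree) < len(words):
        i, j = max(((i, j) for i in in_tree
                    for j in range(len(words)) if j not in in_tree),
                   key=lambda e: w(*e))
        links.append((words[i], words[j]))
        in_tree.add(j)
    return links

# Made-up counts standing in for large-corpus statistics.
word_counts = Counter(the=100, cat=20, sat=15, on=50, mat=10)
pair_counts = Counter({("the", "cat"): 12, ("cat", "sat"): 8,
                       ("sat", "on"): 7, ("on", "mat"): 5, ("the", "mat"): 6})
scores = pmi(pair_counts, word_counts, sum(word_counts.values()))
print(mst_parse(["the", "cat", "sat", "on", "the", "mat"], scores))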


>
>
> OK, you may be right that BERT models may have an advantage that Linas
> doesn't see, because deep nets do allow some recombination of parts, within
> a network layer, at run time.
>

I also haven't studied BERT. What's more important, BERT or Pissanetzky?
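
As I understand Rob's point, though, the "recombination of parts" is the
attention step: each token's vector is rebuilt at run time as a
data-dependent mixture of all the token vectors in the layer. A numpy toy
of that one step, not BERT itself:

import numpy as np

# One scaled dot-product attention step: every output row is a
# runtime-computed mixture of the input rows.
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))  # 5 tokens, 8-dim vectors

scores = x @ x.T / np.sqrt(x.shape[1])         # pairwise affinities
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
mixed = weights @ x                            # recombined token vectors
print(mixed.shape)  # (5, 8): same shape, new content per token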

--linas

-- 
cassette tapes - analog TV - film cameras - you
