On Sat, Oct 30, 2021 at 2:47 AM Amirouche Boubekki <
[email protected]> wrote:

> On Sat, Oct 30, 2021 at 06:17, Linas Vepstas <[email protected]>
> wrote:
> >
> > Hi!
> >
> > The slide deck that I presented is available at
> >
> >
> https://github.com/opencog/learn/blob/master/learn-lang-diary/recognizing-patterns.pdf
> >
> > and a transcript of what I was going to say is at
> >
> >
> https://github.com/opencog/learn/blob/master/learn-lang-diary/recognizing-patterns-notes
>
> Very interesting. What are those acronyms:
>
> - MI = Mutual Information?
>

Yes.
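For what it's worth, here is a minimal sketch of how pairwise MI can be estimated from raw co-occurrence counts (a toy illustration only, not the actual OpenCog code; the example pairs are made up):

```python
from math import log2
from collections import Counter

def pair_mi(pairs):
    """MI(l, r) = log2( P(l, r) / (P(l, *) P(*, r)) ) for each observed
    word pair, estimated from raw co-occurrence counts."""
    n = len(pairs)
    joint = Counter(pairs)                    # joint counts N(l, r)
    left = Counter(l for l, _ in pairs)       # marginal counts N(l, *)
    right = Counter(r for _, r in pairs)      # marginal counts N(*, r)
    return {(l, r): log2((c / n) / ((left[l] / n) * (right[r] / n)))
            for (l, r), c in joint.items()}

mi = pair_mi([("the", "cat"), ("the", "dog"), ("a", "cat"), ("the", "cat")])
```

A pair that co-occurs more often than its marginals would predict gets positive MI; these per-pair scores are what the MST parse then maximizes.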

- MST parses = Maximum Spanning Tree, according to wikipedia: a
> spanning tree is "In the mathematical field of graph theory, a
> spanning tree T of an undirected graph G is a subgraph that is a tree
> which includes all of the vertices of G.", the maximum spanning tree
> will be the spanning tree that goes through most edges or vertices. It
> looks similar to a space filling curve somehow, except it is
> structured.
>

Yes.
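To be precise, the tree maximizes the *sum of the edge weights* (the pairwise MI scores), not the raw number of edges it passes through. A minimal Kruskal-style sketch, with a hypothetical scoring function; the real pipeline also imposes planarity constraints that this toy version omits:

```python
def mst_parse(words, score):
    """Maximum spanning tree over word positions: repeatedly accept the
    highest-scoring word-pair edge that does not create a cycle."""
    parent = list(range(len(words)))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # All word-pair edges, best score first.
    edges = sorted(((score(words[i], words[j]), i, j)
                    for i in range(len(words))
                    for j in range(i + 1, len(words))), reverse=True)
    links = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # accept only cycle-free edges
            parent[ri] = rj
            links.append((i, j))
    return links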

- GUE
>

Gaussian Unitary Ensemble.  It's complicated. Ignore it.

I am wondering why the algorithm only takes into account adjacent word
> pairs.


Which algorithm? Not mine. Nowhere does it say "adjacent".

Unlike Link Grammar, which draws connections across a sentence,
> jumping over intermediate words... Oops! Then you mention
> skip-grams (https://en.wikipedia.org/wiki/N-gram#Skip-gram), so my
> guess is, unlike what is written in Combinatory Linguistics by Cem
> Bozşahin, which stresses the need to build a phrase structure grammar
> with adjacent words
> (https://en.wikipedia.org/wiki/Phrase_structure_grammar) vs. a
> dependency grammar (https://en.wikipedia.org/wiki/Dependency_grammar),
> that it is a categorial grammar? It is unclear to me what is what,
> and whether that matters.
>
>

Given a dependency grammar, one can algorithmically convert it to a
phrase-structure grammar, to a combinatory grammar, or to a categorial
grammar. These are all equivalent formulations of the same concepts.  Now,
linguists will argue strongly about this, as they all have their favorite
ideas. From where I sit, none of these arguments matter very much, as all
these systems are inter-convertible.

What does matter, then, for me, is:
* How small is the representation?
* Is it easy to write algorithms that manipulate it?
* Are those algorithms efficient and fast?

Taking those into account, a dependency grammar, using the
jigsaw-puzzle-piece paradigm, appears to be the simplest approach.

Given a lexis of jigsaw pieces, it can be converted to a phrase-structure
grammar or a combinatory grammar or whatever, but I currently do not see
the utility of performing those conversions.
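To illustrate the jigsaw-puzzle-piece idea concretely: cutting every link of a dependency (MST) parse in half leaves each word holding a set of half-links, i.e. connectors, and the conjunction of those connectors is that word's jigsaw piece (its disjunct). A toy sketch under that reading, not the actual code:

```python
def jigsaw_pieces(words, links):
    """Split every parse link into two connector halves. Each word ends up
    with a disjunct: its connectors, '-' pointing left, '+' pointing right."""
    pieces = []
    for i, w in enumerate(words):
        conns = []
        for a, b in links:       # links are (position, position) pairs
            if a == i:
                conns.append(words[b] + ('+' if b > i else '-'))
            elif b == i:
                conns.append(words[a] + ('+' if a > i else '-'))
        pieces.append((w, ' & '.join(conns)))
    return pieces
```

Roughly speaking, the lexis is then the collection of such pieces, together with observation counts.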


> Quoting the transcript:
> > * We can learn the rules of reasoning; they are not God-given (aka
> > hard-coded by some programmer.)
> > * They can be learned, and I've described an algorithm for learning
> > them.
>
> Awesome.
>
> To summarize the presentation: you claim that it is possible with a
> Machine Learning algorithm to build, in a completely *unsupervised*
> way, that is without annotations, by mining existing corpus materials,
> a grammar for natural languages, hence creating links between the
> words forming a tree or graph.


Yes. That project was started circa 2014 and finally worked "acceptably
well" circa 2017. Where the bar for "acceptability" was set rather low.
I've made many improvements since then; it's an ongoing project.

That graph is annotated somehow with
> words, hence is explainable.


Uhh, more than that: the algorithm can be applied to obtain the references
between words in text and objects in images, or patterns in audio, so that
when someone says "I hear whistling in the distance", the word "whistling"
can be associated with a particular collection of audio-processing filters,
ones that an audio digital-signal-processing expert would recognize as
filters that select for a whistling sound. Thus, the word "whistling" is
grounded in a particular set of audio filters that select for whistling.
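A toy sketch of that grounding step, assuming the same MI machinery carries over: treat word occurrences and filter activations as events over shared time windows, and associate the word with the filter whose activations carry the most mutual information about it. All names and data here are invented for illustration:

```python
from math import log2

def binary_mi(a, b, n):
    """MI between two event sets (window indices where each fired), over n windows."""
    total = 0.0
    universe = set(range(n))
    for sa in (a, universe - a):        # event present / absent
        for sb in (b, universe - b):
            pj = len(sa & sb) / n       # joint probability of this cell
            if pj > 0:
                total += pj * log2(pj / ((len(sa) / n) * (len(sb) / n)))
    return total

def ground(word_windows, filter_windows, n):
    """Pick the filter whose activations are most informative about the word."""
    return max(filter_windows,
               key=lambda f: binary_mi(word_windows, filter_windows[f], n))

filters = {"whistle-band": {0, 1, 2}, "low-rumble": {3, 4}}
best = ground({0, 1, 2}, filters, n=6)  # windows where "whistling" was said
```

The filter that co-fires with the word gets the highest score, grounding the word in that filter.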


> You also claim the algorithm may be used
> to infer grammars from other sources such as audio, video, etc...


Yes.

You
> also claim that it is mathematically a simple, well-trodden path,
> already used in industry.


No, I do not. Or, rather, I use a collection of concepts that are
relatively well known to those who are versed in the state of the art, but
these concepts remain confusing and generally misunderstood by many.

The situation I find myself in is akin to claiming that, in a vacuum, a
cannonball and a feather will drop at the same rate, when common-sense
experience clearly contradicts that. There are many people who will argue
about this, and argue details both large and small. It is difficult to have
a meaningful conversation, due to the overall confusion about the situation.

As far as I understand, you connected the
> dots, but there are still known unknowns


There are always unknowns. If you've built a steam engine, or a glider, or
a vacuum tube, there are unknowns. There are ways to make them better, more
efficient, bigger, smaller, cheaper, faster.


> such as a normal distribution
> that appears out-of-the-blue.
>

Sure. I have made hundreds of different graphs of distributions of all
sorts of variables, plotted in all kinds of different relationships. The
directory in which the presentation slides are found also contains other
PDFs, and a diary of research results, showing such figures.

The point is that some of these figures are sort-of "obvious" -- Zipfian
distributions, and so on. Others are utterly unexplained.  Here's one on
Wikipedia; to the best of my knowledge, there is no scientific explanation
whatsoever for this graph:

https://en.wikipedia.org/wiki/Wikipedia:Does_Wikipedia_traffic_obey_Zipf%27s_law%3F

I did not create the original Wikipedia page, but I did create the January
2020 update to it.  I have observed exactly the same graph in genome
distributions, in proteome distributions, and in reactome distributions.
(I've placed a PDF on GitHub somewhere with those graphs.)

Again: I am not aware of any theoretical explanation of any of these
graphs, either of the Wikipedia hits, or the genome distribution, or the
distributions I observe in natural language. It appears to be a completely
open and utterly unexplored corner of network theory.
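As a quick sanity check for claims like these, one can fit the slope of log-frequency against log-rank; an exactly Zipfian distribution gives a slope of -1. A generic sketch, not tied to any of the actual datasets mentioned:

```python
from math import log

def loglog_slope(counts):
    """Least-squares slope of log(frequency) against log(rank)."""
    freqs = sorted(counts, reverse=True)            # rank the counts
    xs = [log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

slope = loglog_slope([1.0 / r for r in range(1, 101)])  # exact Zipf data
```

Real rank-frequency data (like the Wikipedia-traffic graph above) bends away from this ideal line, which is exactly the unexplained part.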

I think it has something to do with Gaussian unitary ensembles. But that is
an extremely vague and incomplete thought, at this time.


> In simpler words, you shed light (structures) into the void (the
> unstructured).
>

Yes.

>
> Let me know if I got this correctly.
>
> Thanks for sharing.
>

Welcome!

-- 
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/CAHrUA36WJYGEKo-A5Kj1eGAHvKfk_CJEokWN%2BqnOoJW8dv5ZeA%40mail.gmail.com.
