Jean-Paul,

On Tue, Mar 15, 2022 at 1:42 PM Jean-Paul VanBelle via AGI <
[email protected]> wrote:

> Strange that you didn't reference Schank and conceptual dependency theory
> (1975) which appeared to be quite successful at representing huge amounts
> of human knowledge with a very small number of semantic primitives - and
> his was an AI effort, not a linguistic approach.
>

If I attempted a comprehensive summary of the search for semantic
primitives I'd need not a book but a library!

The search within linguistics alone is voluminous.

And I actually think the search within linguistics is arguably more
interesting than a specifically AI-directed effort. That's because I would
argue language has a unique relation to thought.  In language alone are the
processes of thought arguably made physical.

Mathematics perhaps makes products of thought physical. But not necessarily
the processes.

So language offers us a unique opportunity and artifact.

But the search for primitives within language alone is voluminous.

And even the break into a specific search for semantic primitives (as
opposed to structural or functional primitives...) has resulted in at least
4 or 5 different branches. (That break is known as the Linguistics Wars,
BTW, and has indeed merited at least one book.)

To digress, because just looking at the forms of these things is
fascinating. Like a kaleidoscope. Just to sketch some of the menagerie of
semantic shapes people have resolved within linguistics: for curiosity's
sake, I quite like Leonard Talmy's branch. I've always been fascinated by
his categorization of languages into satellite-framed vs. verb-framed, a
distinct mental difference between languages in terms of perceived
"primitives". English, for example, refers to actions mostly in terms of
the manner of acting, with the path of the action specified only by the
addition of a "satellite". So in English you can "roll down" a hill, where
in French you must "descend" and specify "rolling" separately. And there's
his analysis of an emphasis in Native American languages on an equation
between action and actor, so an action always actually IS its actor, with
rare examples in English like "rain" (What does rain do? Rain.)

On a more substantive note with particular relevance to Talmy, and showing
how, without Chomsky, the field might have branched into exploring the
dynamical-systems implications of problems with learning procedures, you
can read Wolfgang Wildgen's "Dynamic Turn", which specifically relates
Talmy's Force Dynamics search for primitives to the need for a re-appraisal
of meaning in terms of dynamical systems:

Wildgen: The "dynamic turn" in cognitive linguistics
https://varieng.helsinki.fi/series/volumes/03/wildgen/index.html

There are all sorts of fascinating "primitive" patterns you can resolve
depending on the lens you use.

Lakoff is another branch of that search for semantic primitives. While he's
built beautiful analyses of hierarchies of meaning in metaphor, to my last
knowledge he was seeking semantic primitives at the level of the neuron. So
his only "primitives" end up being actual embodiments. Something like the
idea that the "primitive" is the world.

There are others: cognitive grammar, frame semantics, embodied cognition...

And that's just the branch of linguistics which sought primitives for
meaning specifically. The search for primitives in linguistics has ranged
far and wide. You have an entire branch of linguistics which sought
primitives in function, not form: the why, not the what.

Chomsky himself sallied on looking for primitives in structure, not
meaning, and not function. A search which had many "primitive" iterations:
transformational, principles & parameters, minimalist.

You might say Chomsky was really THE great PRIMITIVES guy in linguistics.

And this is again the interesting conversation. Because without Chomsky it
is interesting to imagine how things might have gone. Better maybe. Because
what is interesting about Chomsky is that he pointed out that IF primitives
exist, they MUST be innate, BECAUSE those that are observed CONTRADICT one
another.

That's a big thing. He doesn't get enough credit for that. We've forgotten
it. After a while machine learning came back. But forgot it.

In practice it's forgotten. Though Chomsky does pop up from time to time
insisting that MACHINE LEARNING CANNOT WORK. By which he still means
machine learning of primitives, of course, because that is all anyone ever
expects to find.

But it's poignant. I don't know whether to laud him or lament him for it.
Because before him the path of linguistics was, essentially, machine
learning. This is the 1950s. Then Chomsky comes along with this observation
that machine learning of language structure results in contradictions.

Faced with the observation that machine learning led to contradictions, the
field might have gone either way. Had we been more familiar with complex
systems at that time, people might have simply concluded the system was
likely chaotic, abandoned primitives and stormed on modeling meaning as a
dynamical system. And we might have solved AI by now! Instead, the idea of
primitives was too strong.

Chomsky deserves credit for pointing out the problem. But perhaps he had
too strong a personality. He drove the field, wrongly, in his own
direction. His interpretation was that, yes, primitives exist, but they
must be innate. So instead of embracing complex systems the field instead
fractured seeking all kinds of hidden primitives: in structure, function,
and yes, meaning.

And we're still stuck there today.

So yeah, it's fascinating history to look back on, if you've a mind to.

My main objection to following Ben's research direction is that Ben thinks
> too much like a mathematician/physicist where a small number of
> axioms/theoretical physics concepts has been shown to be able to serve as
> the foundational building blocks for all maths/reality built on top of it.
>

For sure. His first training is in mathematics, and everything must have a
mathematical formulation. So he's performing outrages on meaning, to make
it fit the mathematical formulation he's prescribed beforehand.

Though he can be fairly eclectic in that. So he did embrace chaos to
consider chaotic logic in the '90s, for instance. Arguably still trying to
squeeze round pegs into a logic/maths shaped square hole, but ranging
fairly eclectically to do it.

I tend more to the view that mathematics is just another product of
cognition. Useful of course, just as all of cognition itself is useful. But
likely partial, like all meaning, and something to be explained, not an
explanation in itself.

Like I say, I think language is quite a good guide in that: an externalized
artifact of the system of thought, not just its products.

And, indeed, for a very long time I was in that same camp/conviction
> because this 'semantic primitives' thinking is extremely seductive to us
> reductionists. However, I think the very essence of knowledge (or the real
> word's bewildering complexity=richness) is that we don't combine the axioms
> (or semantic primitives) willy-nilly but precisely select/create a very few
> of the many possible combinations to allow us to navigate/manipulate the
> real (complex) world successfully with limited (computing/thinking)
> resources. So while it is perfectly possible to deconstruct mathematical or
> semantic concepts into primitives, for *thinking *(reasoning) purposes it
> make much more sense to use the higher-level concepts, and *that is
> exactly what knowledge or conceptual thinking or intelligence *is about.
> If your system cannot chunk but works by decomposing into primitives IMHO
> you will never be able to reason at human level  intelligence.
>

It would probably take us a while to even agree on the words you are using
there.

What is a "primitive"? What is a "chunk"?

I think there are all kinds of systematizations which can be found. Which
systematizations might be thought "primitive". But they will all be partial.

So we might argue whether what you are calling a primitive should really be
called a primitive, or is better called a systematization. But
systematization or primitive, or just plain observation, I might agree with
you that not the elements, but the way elements can be put together is what
is important. If that's what you mean by "chunk".


> A very simplistic example are colours: it's perfectly feasible to
> decompose all colours into 3 basic colours (RGB) (or 4 for CMYK or
> whatever) and doing that can be useful for *some *computations but it
> IMHO it is NOT the best way to reason *semantically *about colours
> because green is qualitatively not 50% blue and 50% yellow (for us).
> Similarly, you don't want to do logic using the sole sufficient logical
> operator NOR operator but are better off using at least 3 (OR, AND and NOT).
>

Well, I don't know if colour is a good example, because I understand
there's a fairly clear physical basis for primitives there: three actual
physical types of cone cells in the retina.

Now, if we found "action" and "object" cones in the eye, then you might
have a basis to start building a primitive theory of meaning!

Perhaps that's the direction Lakoff ended up in. But basically making every
unique neuron a "primitive". So the "primitive" is the totality!
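As an aside on the NOR point in your example: it's true that NOR alone is
functionally complete, and a few lines (a quick Python sketch, purely for
illustration) show the standard constructions of NOT, OR and AND from NOR,
checked exhaustively over all truth assignments:

```python
def NOR(a, b):
    """The sole sufficient operator: true only when both inputs are false."""
    return not (a or b)

def NOT(a):
    # NOT(a) = NOR(a, a)
    return NOR(a, a)

def OR(a, b):
    # OR(a, b) = NOT(NOR(a, b))
    return NOT(NOR(a, b))

def AND(a, b):
    # AND(a, b) = NOR(NOT(a), NOT(b)), by De Morgan
    return NOR(NOT(a), NOT(b))

# Exhaustive check over all truth assignments.
for a in (False, True):
    for b in (False, True):
        assert NOT(a) == (not a)
        assert OR(a, b) == (a or b)
        assert AND(a, b) == (a and b)
```

Which rather supports your point: the decomposition is trivially possible,
yet nobody would choose to *reason* in NOR.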


> To me, it is still very worthwhile to work from or think in terms of
> semantic primitives, but mainly from a development educational perspective
> ... first teach the elemental concepts and then slowly build up bigger
> structures of more complex thinking. And then, like in maths or physics, it
> may be better to start not with the smallest set of possible primitives,
> but a useful 'mid-level' set e.g. 7 colours instead of 3; 3 logical
> operands (OR, AND, NOT) instead of 1 (NOR). But you definitely don't want
> to think about or compute the physical world by handling strings/knots
> (string theory), quarks/fields ('classic' quantum physics) or Wolfram's
> hypergraphs or do maths by thinking in axioms.
>

What you want to think about, is a different thing from how your brain does
its thinking.


> However, I've played around with semantic primitives as well and they
> range from Schank's very minimal set (was it 11?) to various versions of
> Wierzbicka's to larger sets of 50-150 to various versions of basic english
> and simple english. What I personally learnt from that is that, if you
> really want to work operationally with a set of semantic primitives, I'd
> probably go with between 500 and 1500 concepts and follow the lines of
> restricted vocabularies /controlled languages(Ogden's Basic English)  which
> is what typically works for simple wikipedia (note that many 'basic' words
> are semantically heavily overloaded - hyponyms, especially verbs and
> prepositions), or word vector models.
>

Well, you've lost me there. There is maybe something in the way you believe
elements should be put together. But my basic line is that I don't think
primitives are a useful direction to take.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T0f3dcf7070b3a18e-M2e49a4645fbda3b0db26aabb
Delivery options: https://agi.topicbox.com/groups/agi/subscription