Strange that you didn't reference Schank's conceptual dependency theory
(1975), which appeared to be quite successful at representing huge amounts of
human knowledge with a very small number of semantic primitives - and his was
an AI effort, not a linguistic approach.
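For those who don't know CD, a minimal sketch of the idea (the dict layout
below is my own reconstruction, not Schank's diagram notation): 'John gave
Mary a book' reduces to ATRANS, the primitive act for transferring an
abstract relationship such as possession:

# A hand-rolled sketch of a Schank-style conceptual dependency structure.
# The dict layout is mine; CD proper is drawn as dependency diagrams.
cd = {
    "act":    "ATRANS",  # transfer of an abstract relation (here: possession)
    "actor":  "John",
    "object": "book",
    "from":   "John",
    "to":     "Mary",
}

# 'John sold Mary a book' is the same ATRANS plus a reciprocal ATRANS of
# money from Mary to John - many surface verbs collapse onto few primitives.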
My main objection to following Ben's research direction is that Ben thinks too
much like a mathematician/physicist, where a small number of axioms/theoretical
physics concepts have been shown to serve as the foundational building blocks
for all the maths/reality built on top of them. And, indeed, for a very long
time I was in that same camp/conviction, because this 'semantic primitives'
thinking is extremely seductive to us reductionists. However, I think the very
essence of knowledge (or of the real world's bewildering complexity = richness)
is that we don't combine the axioms (or semantic primitives) willy-nilly but
precisely select/create a very few of the many possible combinations, to allow
us to navigate/manipulate the real (complex) world successfully with limited
(computing/thinking) resources. So while it is perfectly possible to
deconstruct mathematical or semantic concepts into primitives, for *thinking*
(reasoning) purposes it makes much more sense to use the higher-level concepts,
and *that is exactly what knowledge or conceptual thinking or intelligence is*
about. If your system cannot chunk but works by decomposing into primitives,
IMHO you will never be able to reason at human-level intelligence.
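To make the chunking point concrete, a toy sketch (the fact layout and names
are mine, purely illustrative): a rule keyed to the chunk 'buy' answers a
query in one lookup, while a primitives-only store has to re-match the
multi-event pattern on every query:

# Toy illustration (my own construction, not anyone's actual system):
# the same fact stored as one chunk vs. decomposed into primitives.
chunked_facts = {("buy", "John", "book")}

primitive_events = [
    ("ATRANS", "book",  "seller", "John"),    # goods change hands
    ("ATRANS", "money", "John",   "seller"),  # payment goes back
]

def bought_chunked(facts, buyer, thing):
    # one lookup: the composition was paid for once, at chunking time
    return ("buy", buyer, thing) in facts

def bought_decomposed(events, buyer, thing):
    # must re-match the two-transfer pattern on every single query
    got_thing = any(a == "ATRANS" and o == thing and t == buyer
                    for a, o, f, t in events)
    paid = any(a == "ATRANS" and o == "money" and f == buyer
               for a, o, f, t in events)
    return got_thing and paid

print(bought_chunked(chunked_facts, "John", "book"))        # True
print(bought_decomposed(primitive_events, "John", "book"))  # True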
A very simplistic example is colours: it's perfectly feasible to decompose all
colours into 3 basic colours (RGB) (or 4 for CMYK, or whatever), and doing that
can be useful for *some* computations, but IMHO it is NOT the best way to
reason *semantically* about colours, because green is qualitatively not 50%
blue and 50% yellow (for us). Similarly, you don't want to do logic using only
NOR, a single sufficient operator, but are better off using at least 3 (OR, AND
and NOT).
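For the record, NOR really is functionally complete - a quick sketch building
the usual three operators from it:

# NOR is functionally complete: NOT, OR and AND can all be built from it.
def nor(a: bool, b: bool) -> bool:
    return not (a or b)

def not_(a: bool) -> bool:
    return nor(a, a)                  # NOT a = a NOR a

def or_(a: bool, b: bool) -> bool:
    return nor(nor(a, b), nor(a, b))  # OR = NOT(a NOR b)

def and_(a: bool, b: bool) -> bool:
    return nor(nor(a, a), nor(b, b))  # AND = (NOT a) NOR (NOT b)

# Sanity check against Python's native operators over all truth values.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert or_(a, b) == (a or b)
        assert and_(a, b) == (a and b)

Each readable operator costs one to three NORs; paying that cost once and then
thinking in OR/AND/NOT is the same chunking trade as above.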
To me, it is still very worthwhile to work from, or think in terms of,
semantic primitives, but mainly from a developmental/educational perspective
... first teach the elemental concepts and then slowly build up bigger
structures of more complex thinking. And then, like in maths or physics, it may
be better to start not with the smallest possible set of primitives, but with a
useful 'mid-level' set, e.g. 7 colours instead of 3, or 3 logical operators
(OR, AND, NOT) instead of 1 (NOR). But you definitely don't want to think about
or compute the physical world by handling strings/knots (string theory),
quarks/fields ('classic' quantum physics) or Wolfram's hypergraphs, or to do
maths by thinking in axioms.
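As a toy version of such a 'mid-level' colour set (the 7 names and anchor
values below are my arbitrary picks), you can snap raw RGB triples to named
chunks and reason over the names instead of the triples:

# Toy 'mid-level vocabulary' for colour: map raw RGB primitives onto
# 7 named chunks. The anchor values are arbitrary choices of mine.
PALETTE = {
    "red":    (255, 0, 0),
    "orange": (255, 165, 0),
    "yellow": (255, 255, 0),
    "green":  (0, 128, 0),
    "blue":   (0, 0, 255),
    "violet": (138, 43, 226),
    "white":  (255, 255, 255),
}

def name_colour(rgb):
    """Snap an RGB triple to the nearest mid-level colour name."""
    return min(PALETTE, key=lambda n: sum((a - b) ** 2
                                          for a, b in zip(PALETTE[n], rgb)))

# Semantic reasoning now runs over 7 symbols, not 16.7M triples:
print(name_colour((10, 140, 20)))  # 'green' - not '50% blue + 50% yellow'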
However, I've played around with semantic primitives as well, and they range
from Schank's very minimal set (11 primitive acts) to various versions of
Wierzbicka's, to larger sets of 50-150, to various versions of Basic English
and Simple English. What I personally learnt from that is that, if you really
want to work operationally with a set of semantic primitives, I'd go with
between 500 and 1500 concepts and follow the lines of restricted
vocabularies/controlled languages (e.g. Ogden's Basic English), which is
roughly what works for Simple English Wikipedia, or of word-vector models.
(Note that many 'basic' words are semantically heavily overloaded, each
covering a whole family of hyponyms - especially the verbs and prepositions.)
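If anyone wants to experiment, a controlled-vocabulary check is only a few
lines; 'basic_english_850.txt' below is a placeholder filename - feed it
Ogden's 850-word list or your own 500-1500 concepts:

# Minimal controlled-vocabulary checker. 'basic_english_850.txt' is a
# placeholder - supply Ogden's word list or your own concept set.
import re

with open("basic_english_850.txt") as f:
    VOCAB = {w.strip().lower() for w in f if w.strip()}

def out_of_vocabulary(text):
    """Return the words in `text` that fall outside the controlled set."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted({w for w in words if w not in VOCAB})

print(out_of_vocabulary("The dog perambulated around the garden"))
# -> e.g. ['perambulated'], assuming the other words are in the list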