Whether and in what sense semantic primitives can be found depends wholly on the definitions involved, right?
Crudely, define ps(p, e) as the number of primitives needed to generate p% of human concepts within error e. Then the question becomes: how does ps(p, e) grow as p grows and e shrinks? Or, if you want a single variable, look at r = p*(1-e).

Let ps*(n) denote the inverse of ps, i.e. it tells you what r you get for a given (optimally chosen) selection of n primitives.

One hypothesis would be that ps* has a sigmoid shape as n increases... but then the question is where the inflection point lies... if n is not too big at the inflection point, then perhaps a relatively small set of primitives can get you above a certain threshold of coverage, and covering everything else will then require a huge fat tail of phenomenal primitives, etc.

Anyway, this is the sort of issue that's IMO meaningful to dig into... overly strong statements like "all human concepts can be precisely derived as combinations of these 50 primitives" are clearly false and not interesting.

Chalmers in "Constructing the World" deals w/ these sorts of definitional matters with insane and excessive nuance ;)

ben

On Mon, Mar 14, 2022 at 9:38 AM Rob Freeman <[email protected]> wrote:
>
> In my presentation at AGI-21 last year I argued that semantic primitives
> could not be found. That in fact "meaning", most evidently by the historical
> best metrics from linguistics, appears to display a kind of quantum
> indeterminacy:
>
> Vector Parser - Cognition a compression or expansion of the world? - AGI-21
> Contributed Talks
> https://youtu.be/0FmOblTl26Q
>
> I said I was glad that this appeared to no longer be an entirely cracked
> suggestion, and that finally others were commenting along similar lines.
> For example, Bob Coecke:
>
> From quantum foundations via natural language meaning to a theory of
> everything
> Bob Coecke
> https://arxiv.org/abs/1602.07618
>
> I even cited comments by Linas Vepstas within OpenCog as finally recognizing
> issues along these lines:
>
> Vepstas, “Mereology”, 2020: "In the remaining chapters, the sheaf
> construction will be used as a tool to create A(G)I representations of
> reality. Whether the constructed network is an accurate representation of
> reality is undecidable, and this is true even in a narrow,
> formal, sense."
>
> On Mon, Mar 14, 2022 at 2:37 PM Ben Goertzel <[email protected]> wrote:
>>
>> The next couple AGI Discussion Forum sessions:
>>
>> https://wiki.opencog.org/w/AGI_Discussion_Forum#Sessions
>>
>> March 18, 2022, 7AM-8:30AM Pacific time: Ben Goertzel leading
>> discussion on semantic primitives,
>> https://singularitynet.zoom.us/my/benbot . Background:
>> https://bengoertzel.substack.com/p/can-all-human-concepts-be-reduced?s=w
>>
>> April 8, 2022, 7AM-8:30AM Pacific time: Jonathan Warrell on "A
>> meta-probabilistic-programming language for bisimulation of
>> probabilistic and non-well-founded type systems" (aka an elegant
>> general math formulation underlying MeTTa language) ...
>> https://singularitynet.zoom.us/my/benbot . Background material: To be
>> posted
>>
>> -- ben
>>
>> --
>> Ben Goertzel, PhD
>> [email protected]
>>
>> "My humanity is a constant self-overcoming" -- Friedrich Nietzsche

--
Ben Goertzel, PhD
[email protected]

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T0f3dcf7070b3a18e-M5fb285e5fa6e6ca4be99b9cd
Delivery options: https://agi.topicbox.com/groups/agi/subscription
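[Editor's note: the ps*(n) coverage curve Ben sketches can be played with in a toy model. The sketch below is purely illustrative and not from the thread: the feature-set model of "concepts", the Zipf-skewed feature distribution, and all parameter values are assumptions. It greedily picks primitives (features) one at a time and records what fraction of toy concepts are reconstructed to within error e, i.e. a crude stand-in for ps*(n).]

```python
import random

random.seed(0)

N_FEATURES = 100   # hypothetical universe of candidate primitives
N_CONCEPTS = 300   # toy "human concepts"
ERROR_E = 0.2      # allowed reconstruction error e
N_STEPS = 30       # how many primitives the greedy selection picks

# Each toy concept is a small set of features, drawn from a skewed
# (Zipf-like) distribution so a few features are very common.
weights = [1.0 / (i + 1) for i in range(N_FEATURES)]

def sample_concept(k=8):
    return set(random.choices(range(N_FEATURES), weights=weights, k=k))

concepts = [sample_concept() for _ in range(N_CONCEPTS)]

def coverage(primitives):
    """Fraction of concepts whose features are at least (1 - e)
    explained by the chosen primitive set."""
    covered = sum(
        1 for c in concepts
        if len(c & primitives) / len(c) >= 1.0 - ERROR_E
    )
    return covered / len(concepts)

# Greedy selection: at each step add the single feature that most
# improves coverage (an approximation to "optimally chosen").
chosen = set()
curve = []  # curve[n-1] ~ ps*(n) for this toy model
for n in range(1, N_STEPS + 1):
    best = max(
        (f for f in range(N_FEATURES) if f not in chosen),
        key=lambda f: coverage(chosen | {f}),
    )
    chosen.add(best)
    curve.append(coverage(chosen))
    if n % 10 == 0:
        print(f"n={n:3d}  ps*(n) ~ {curve[-1]:.2f}")
```

With the skewed distribution you typically see rapid early gains followed by a long, slowly climbing tail — the "small core set plus fat tail" picture, with the inflection point's location depending entirely on how the concept distribution is modeled.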
