Rob, I'm referring to contextualization as general context management
within complex systems management, treated as an ontology, the application
of which has relevance for knowledge graphs, LLMs, and other knowledge-based
representations. Your quotation, "Contextualization ... in LLM systemic
hierarchies.", is incorrect. I stated: "To increase contextualization and
instill robustness in the LLM systemic hierarchies." I'm asserting that a
suitably completed context-management theory could be applied to LLM
systemic hierarchies to instill robustness (schemas of hierarchical
control). This should have been apparent from the remainder of my message.

I assumed we were discussing this in the context of LLMs; I was not stating
that LLM theory was completed 17 years ago. I have no idea if, or when, it
was completed.

My point remains: if you're busy working on context-management theory for
complex adaptive systems, as you seem to be implying, such as trying to
engineer fully recursive (in the machine-learning sense) functionality
within LLMs, then this would have direct relevance. Furthermore, I'm
asserting that the foundational work has been complete since 2007 (in the
sense of being ready for application, even to modern-day constructor theory
and LLMs). For example, why debate "ambiguity" when it was elegantly
resolved by such an ontology?

I understand your reference elsewhere to embedding knowledge via a number
schema, but I recognize some telecom-industry thinking in the numbering
system. Yes, ambiguity would eventually result from that, not only from a
schema perspective but also from a multilingual perspective. Such complexity
may not be necessary. If, instead of sentence rules (grammatical
constructs), a policy of language (governing laws) were specified and
extracted from domain-specific knowledge, then the context in which words
are used (comprehension) would matter more than the actual words being used
(sentences). Surely, it must depend on what LLMs are being purposed for.
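
To make the idea concrete, here is a minimal Python sketch of my own (not
taken from the ontology itself): interpretation is looked up by (domain,
word) context rather than by the surface word alone, so the same token
resolves differently per domain. The policy table and domain names are
purely hypothetical placeholders; a real system would extract the policy
from domain-specific knowledge rather than hard-code it.

    # Toy illustration: resolve meaning by context (domain) first, word second.
    # The policy table below is hypothetical, standing in for a policy of
    # language extracted from domain-specific knowledge.

    DOMAIN_POLICY = {
        ("banking", "deposit"): "money placed into an account",
        ("geology", "deposit"): "an accumulation of minerals",
        ("networking", "cell"): "a radio coverage area",
        ("biology", "cell"): "the basic unit of a living organism",
    }

    def comprehend(word: str, domain: str) -> str:
        """Return the domain-governed reading of a word, if one is specified."""
        return DOMAIN_POLICY.get(
            (domain, word), f"'{word}' (no policy for domain '{domain}')"
        )

    print(comprehend("deposit", "banking"))   # money placed into an account
    print(comprehend("deposit", "geology"))   # an accumulation of minerals

The same surface word gives two different comprehensions; what disambiguates
is the governing context, not the sentence it sits in.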

In the preceding scenario of use, an embedding could then be accurately
understood and referenced via the emergent structure of a context. I think
you were discussing something along those lines, wondering whether this
might occur naturally. I can tell you with conviction: I have years of
experience specifying contexts for a large sample of knowledge contexts
(domains). Given a consistent set of algorithms, structure always emerges
from full normalization (optimal knowledge maturation and integration).
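
As a rough illustration of what I mean by structure emerging from consistent
normalization (again a sketch of my own, not the actual algorithms, and with
made-up data): normalize raw term-context observations from several domains,
then group terms that share normalized contexts. The grouping is specified
nowhere up front; it falls out of the normalized data.

    from collections import defaultdict

    # Hypothetical raw observations: (term, context) pairs from mixed domains.
    raw = [
        ("Router ", "forwards packets"), ("switch", "Forwards Packets"),
        ("router", "forwards packets"), ("neuron", "fires signals"),
        ("Synapse", "carries signals"), ("axon", "carries signals"),
    ]

    # Full normalization: canonical casing, trimmed whitespace, duplicates removed.
    normalized = {(t.strip().lower(), c.strip().lower()) for t, c in raw}

    # Structure emerges: terms cluster by the contexts they share.
    clusters = defaultdict(set)
    for term, context in normalized:
        clusters[context].add(term)

    for context, terms in sorted(clusters.items()):
        print(f"{context}: {sorted(terms)}")
    # carries signals: ['axon', 'synapse']
    # fires signals: ['neuron']
    # forwards packets: ['router', 'switch']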

My thought was that, if such a soft-systems ontology were married up with
LLMs (there are a few of these soft-systems methodologies about), it might
creatively resolve many of the constraints inherent in linear progression. I
think this approach could help fast-track developments towards AGI. In
addition, it would probably satisfy Checkland's definition of emergence as
an outcome of a "debate" (systemic interaction) between linear and
nonlinear systems.

For Japan, consider their GRAPE supercomputer architecture. Maybe they just
play their AI cards closer to their chest. Then there's their robot, Honda's
ASIMO (2000), which was the first robot in the world to exhibit
human-friendly functionality. I think it once kicked a ball with Obama,
verbalizing its recognition of who Obama was. I find it strange that there's
an Internet-based claim that it was retired in 2000; only a few years ago I
watched a video of a newer version of it, having followed its development as
closely as public information allowed. The last version I recall had
achieved companion-like functionality to assist humans, e.g., holding hands
while walking and remaining fully conversational, notably as a lobby
assistant, supporting the elderly, with a doggy version befriending and
tutoring children. As far as AGI in Japan is concerned, I'm also aware of
specific research to replicate humans in looks, sound, movement, personality
and reasoning, and to assume human-like, interactive gender roles. Inter
alia, Japan is effectively producing societal AI products.

For some of us, the notion of capturing the human soul on a chip represents
a fascinating journey. I think it should form part of the scope of future
AGI developments.

Let's, for a moment, reflect on how to place LLMs in the context of such
developments.



On Mon, Jun 17, 2024 at 1:10 PM Rob Freeman <[email protected]>
wrote:

> On Mon, Jun 17, 2024 at 3:22 PM Quan Tesla <[email protected]> wrote:
> >
> > Rob, basically you're reiterating what I've been saying here all along.
> To increase contextualization and instill robustness in the LLM systemic
> hierarchies. Further, that it seems to be critically lacking within current
> approaches.
> >
> > However, I think this is fast changing, and soon enough, I expect
> breakthroughs in this regard. Neural linking could be one of those
> solutions.
> >
> > While it may not be exactly the same as your hypothesis (?), is it
> because it's part of your PhD that you're not willing to acknowledge that
> this theoretical work may have been completed by another researcher more
> than 17 years ago, even submitted for review and subsequently approved? The
> market, especially Japan, grabbed this research as fast as they could. It's
> the West that turned out to be all "snooty" about its meaningfulness, yet,
> it was the West that reviewed and approved of it. Instead of serious
> collaboration, is research not perhaps being hamstrung by the NIH (Not
> Invented Here) syndrome, acting like a stuck handbrake?
> 
> You intrigue me. "Contextualization ... in LLM systemic hierarchies"
> was completed and approved 17 years ago?
> 
> "Contextualization" is a pretty broad word. I think the fact that
> Bengio retreated to distributed representation with "Neural Language
> Models" around... 2003(?) might be seen as one acceptance of... if not
> contextualization, at least indeterminacy (I see Bengio refers to "the
> curse of dimensionality".) But I see nothing about structure until
> Coecke et co. around 2007. And even they (and antecedents going back
> to the early '90s with Smolensky?) I'm increasingly appreciating seem
> trapped in their tensor formalisms.
> 
> The Bengio thread, if it went anywhere, it stayed stuck on structure
> until deep learning rescued it with LSTM. And then "attention".
> 
> Anyway, the influence of Coecke seems to be tiny. And basically
> mis-construed. I think Linas Vepstas followed it, but only saw
> encouragement to seek other mathematical abstractions of grammar. And
> OpenCog wasted a decade trying to learn those grammars.
> 
> Otherwise, I've been pretty clear that I think there are hints to what
> I'm arguing in linguistics and maths going back decades, and in
> philosophy going back centuries. The linguistics ones specifically
> ignored by machine learning.
> 
> But that any of this, or anything like it was "grabbed ... as fast as
> they could" by the market in Japan, is a puzzle to me (17 years ago?
> Specifically 17?)
> 
> As is the idea that the West failed to use it, even having "reviewed
> and approved it", because it was "snooty" about... Japan's market
> having grabbed it first?
> 
> Sadly Japanese research in AI, to my knowledge, has been dead since
> their big push in the 1980s. Dead, right through their "lost" economic
> decades. I met the same team I knew working on symbolic machine
> translation grammars 1989-91, at a conference in China in 2002, and as
> far as I know they were still working on refinements to the same
> symbolic grammar. 10 more years. Same team. Same tech. Just one of the
> 係長 had become 課長.
> 
> What is this event from 17 years ago?
