Thank you for replying, Linas. I am working on a demo I am calling 
StoryMind, and I am growing it into The Cognitive Strategy System. I am 
wanting to bridge the systems, to make them compatible. I know it's early. 
You are right, though: I need working code, and I'm working on that now. 
Unfortunately I am still designing, but I have a lot of help. LLMs are 
wonderful.

On Wednesday, September 24, 2025 at 6:40:42 PM UTC-4 Linas Vepstas wrote:

> I believe that the best way to find out if an idea is any good is to
> go out and try to implement it: write code for it; make it happen.
> That helps avoid some 19th-century style flapping-wing flying machine
> designs. Sounds good on paper; couldn't build one that actually flew.
>
> At the end, you ask "how to integrate with opencog?" Let me describe
> what OpenCog is -- at its heart, it is the
> https://wiki.opencog.org/w/Atomese abstract graph representation
> system. Graphs are used to represent computations. You can store those
> graphs in a database called the https://wiki.opencog.org/w/AtomSpace .
> Examples of such computations are at
> https://wiki.opencog.org/w/Formulas
>
> If you want to use an LLM to help you design your system, the
> https://wiki.opencog.org/w/CogServer provides MCP (Model Context
> Protocol) interfaces that allow LLMs to view, interact with, and
> control the AtomSpace contents. The toolset includes an extensive
> description of what the AtomSpace is and how to use it, in such a way
> that LLMs clearly understand what this is all about. Works great
> with Claude!
>
> Myself, I'm doing some basic research into sensorimotor systems,
> trying to understand things "in the world out there", as opposed to
> the qualia of the "things in here" of the subjective inner experience.
> The goal is to understand "embodied cognition" or "basal cognition"
> and how consciousness arises when pieces-parts combine. This is
> super-duper low-level work, down "at the bottom of things", highly
> abstract and mathematical. It is backed by running code (so not just
> theory) but is so incredibly low-level and basic that you will find
> the actual examples underwhelming and disappointing. But I think it's
> ... well, I find it interesting. Difficult, though. Lots of
> roadblocks in every direction. Oh well.
>
> -- Linas
>
> On Wed, Sep 24, 2025 at 10:22 AM [email protected]
> <[email protected]> wrote:
> >
> > A Request for Comment: A Vision for a Strategy-Native Neural System
> > What I Mean by NLP in This Context
> >
> > When I say NLP (neuro-linguistic programming) here, I’m not speaking of 
> machine NLP, but of the older, more psychological frame that modeled how 
> humans think and act. Out of that tradition, I take a few clean and useful 
> ideas.
> >
> > Strategies: Human beings run internal programs for tasks. We switch into 
> a math strategy when solving equations, a persuasion strategy when making 
> an argument, a motivation strategy when driving ourselves forward. Each is 
> a flow of steps triggered by context.
> >
> > Modalities: Strategies draw on representational channels — visual, 
> auditory, kinesthetic, and language. In machines, this translates less 
> literally, but the principle holds: different channels or flows combine to 
> shape different behaviors.
> >
> > TOTE (Test → Operate → Test → Exit): This is the backbone of strategy. 
> We test our current state, operate to move closer to a goal, test again, 
> and either exit (done) or loop back for another attempt. It is feedback 
> incarnate.
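The TOTE cycle described above can be sketched as a plain feedback loop. This is a minimal illustration, not code from any existing system; the function names (`tote`, `test`, `operate`) and the toy numeric goal are assumptions made for the example:

```python
# Hypothetical sketch of a TOTE (Test -> Operate -> Test -> Exit) loop.
# `test` and `operate` are supplied by the strategy; names are illustrative.

def tote(state, goal, operate, test, max_iters=10):
    """Run a Test-Operate-Test-Exit feedback loop.

    test(state, goal)    -> True when the goal is satisfied
    operate(state, goal) -> a new state, hopefully closer to the goal
    """
    for _ in range(max_iters):
        if test(state, goal):             # Test: are we done?
            return state                  # Exit
        state = operate(state, goal)      # Operate: move toward the goal
    return state                          # budget exhausted: adjust or defer

# Toy usage: drive a number toward a target by halving the remaining gap.
result = tote(
    state=0.0,
    goal=100.0,
    operate=lambda s, g: s + (g - s) / 2,
    test=lambda s, g: abs(g - s) < 1.0,
)
```

The loop exits as soon as the test passes, or returns its best state after `max_iters` attempts, which is where a real strategy would "adjust" rather than exit.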
> >
> > Intensity/Desire: Not all goals burn equally. Some pull with urgency, 
> others linger in the background. Intensity rises and falls with context and 
> progress, shaping which strategies are chosen and when.
> >
> > This is the essence of NLP that I want to carry forward: strategies, 
> feedback, and desire.
> >
> > Executive Summary
> >
> > I propose a strategy-native neural architecture. At its center is a 
> controller transformer orchestrating a library of expert transformers, each 
> one embodying a strategy. Every strategy is structured as a TOTE loop — it 
> tests, it operates, it tests again, and it exits or adjusts.
> >
> > The Goal Setter is itself a strategy. It tests for needs like survival 
> assurance, operates by creating new goals and behaviors, assigns an 
> intensity (a strength of desire), and passes them to the controller. The 
> controller then selects or creates the implementing strategies to pursue 
> those goals.
> >
> > This whole system rests on a concept network: the token embeddings and 
> attention flows of a pretrained transformer. With adapters, controller 
> tags, gating, and concept annotations, this substrate becomes partitionable 
> and reusable — a unified field through which strategies carve their paths.
> >
> > The system is extended with tools for action and RAG memory for 
> freshness. It grows by scheduled fine-tuning, consolidating daily 
> experience into long-term weights.
> >
> > I offer this vision as a Request for Comment — a design to be discussed, 
> critiqued, and evolved.
> >
> > The Strategy System
> > Controller and Expert Strategies
> >
> > The controller transformer is the orchestrator. It looks at goals and 
> context and decides which strategies to activate. The expert transformers — 
> the strategy library — are adapters or fine-tuned specialists: math, 
> planning, persuasion, motivation, survival, creativity. Each is structured 
> as a TOTE loop:
> >
> > Test: measure current state.
> >
> > Operate: call sub-strategies, tools, memory.
> >
> > Test again: check progress.
> >
> > Exit or adjust: finish or refine.
> >
> > Strategies are not just black boxes; they are living feedback cycles, 
> managed and sequenced by the controller.
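The controller's dispatch role can be sketched with a strategy library keyed by context tag. Everything here (the `STRATEGY_LIBRARY` dict, the goal fields, the stand-in experts) is an illustrative assumption; real experts would be adapters or fine-tuned transformers:

```python
# Minimal sketch of a controller dispatching expert strategies by context.
# Each "expert" is a stand-in callable; in the proposed design it would be
# a TOTE-structured adapter or fine-tuned specialist.

STRATEGY_LIBRARY = {
    "math":       lambda task: f"solved:{task}",
    "planning":   lambda task: f"planned:{task}",
    "persuasion": lambda task: f"argued:{task}",
}

def controller(goal):
    """Pick the expert whose tag matches the goal's context and run it."""
    tag = goal["context"]                  # e.g. "math"
    expert = STRATEGY_LIBRARY.get(tag)
    if expert is None:
        return None                        # no expert yet: escalate or create one
    return expert(goal["task"])

out = controller({"context": "math", "task": "2+2"})
```

The `None` branch marks where the "selects or creates" step would spawn a new strategy rather than fail silently.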
> >
> > Goal Generation with Desire and TOTE
> >
> > The Goal Setter is a special strategy. Its test looks for overarching 
> needs. Its operate step generates candidate goals with behaviors attached. 
> Its test again evaluates them against constraints and context. Its exit or 
> adjust finalizes goals and assigns intensity — the desire to act.
> >
> > These goals are passed into a Goal Queue, where the controller schedules 
> them based on intensity, value, urgency, and safety. This is how the system 
> sets its own direction, not just waiting passively for prompts.
> >
> > Tools and RAG
> >
> > The strategies reach outward through tools: calculators, code execution, 
> simulators, APIs, even robotics. They also reach into retrieval-augmented 
> generation (RAG): an external vector memory holding documents, experiences, 
> and notes.
> >
> > Tools are the system’s hands. RAG is its short-term recall. Together, 
> they keep the strategies connected to the world.
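The RAG side of this can be sketched as similarity search over stored vectors. The memory entries and their three-dimensional vectors are invented for the example; a real system would use learned embeddings and an external vector store:

```python
import math

# Toy RAG retrieval: rank stored notes by cosine similarity to a query
# vector. The vectors here are made up for illustration.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

MEMORY = [
    ("note on TOTE loops",  [1.0, 0.1, 0.0]),
    ("grocery list",        [0.0, 1.0, 0.2]),
    ("strategy design log", [0.9, 0.2, 0.1]),
]

def retrieve(query_vec, k=2):
    """Return the k stored texts most similar to the query vector."""
    ranked = sorted(MEMORY, key=lambda m: cosine(query_vec, m[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

hits = retrieve([1.0, 0.0, 0.0])
```

A strategy's Operate step would call `retrieve` alongside tool invocations, then feed the hits back into its next Test.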
> >
> > Daily Consolidation
> >
> > At the end of each day, the system consolidates. It takes the most 
> important RAG material, the traces of successful strategies, and runs 
> scheduled fine-tuning on the relevant experts. This is long-term memory: 
> the system learns from its own actions. RAG covers freshness, fine-tuning 
> covers consolidation. The strategies sharpen day by day.
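The selection step of daily consolidation can be sketched as scoring the day's traces and keeping the best for the fine-tuning batch. The scoring rule (importance gated by success) and the trace fields are assumptions, not a fixed part of the design:

```python
# Consolidation sketch: at day's end, pick the highest-value successful
# strategy traces from RAG as the scheduled fine-tuning batch.

def select_for_finetuning(traces, budget=2):
    """Score traces (importance, zeroed on failure) and keep the top few."""
    scored = [
        (t["importance"] * (1.0 if t["success"] else 0.0), t["id"])
        for t in traces
    ]
    scored.sort(reverse=True)
    return [tid for score, tid in scored[:budget] if score > 0]

day_log = [
    {"id": "t1", "success": True,  "importance": 0.9},
    {"id": "t2", "success": False, "importance": 0.8},
    {"id": "t3", "success": True,  "importance": 0.4},
]
batch = select_for_finetuning(day_log)  # hand these to the fine-tuning job
```

The fine-tuning itself (e.g. a LoRA update on the relevant expert) is out of scope here; this only shows the "most important RAG material" filter.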
> >
> > The Substrate: A Concept Network of Tokens
> >
> > A pretrained transformer is already a concept network:
> >
> > Tokens are mapped to vectors in a meaning space.
> >
> > Attention layers connect tokens, forming weighted edges that shift with 
> context.
> >
> > By the later layers, tokens are transformed into contextualized vectors, 
> embodying concepts shaped by their neighbors.
> >
> > This is a unified substrate, but in its raw form it does not separate 
> strategies. To make it strategy-native, I propose:
> >
> > Adapters: LoRA or prefix modules that bias the substrate toward 
> particular strategy flows.
> >
> > Controller Tags: prompt tokens like [MATH] or [PLANNING] to activate the 
> right flows.
> >
> > Gating and Attention Masks: to route or separate flows, allowing 
> strategies to partition without isolating.
> >
> > Concept Annotations: clusters and labels over embeddings, marking areas 
> as “narrative,” “mathematical,” “social,” so strategies can claim, reuse, 
> and combine them.
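The gating-and-masking idea above can be sketched as an attention mask built from strategy tags. The tag vocabulary, the rule that untagged tokens are shared, and the 0/1 mask convention are all illustrative assumptions; real gating would operate on attention logits inside the transformer:

```python
# Sketch of strategy gating via an attention mask: tokens tagged for the
# active strategy (or untagged/shared) may attend to each other;
# cross-strategy attention is masked out.

def strategy_mask(token_tags, active):
    """Build an NxN mask: 1 where both tokens belong to the active
    strategy or are untagged (shared), else 0."""
    allowed = [t in (active, None) for t in token_tags]
    n = len(token_tags)
    return [[1 if allowed[i] and allowed[j] else 0 for j in range(n)]
            for i in range(n)]

tags = ["math", "math", None, "planning"]
mask = strategy_mask(tags, active="math")
# The "planning" token's row is fully masked out under the math strategy,
# while the shared (None) token still participates.
```

Because shared tokens stay visible to every strategy, this partitions flows "without isolating" them, as the proposal puts it.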
> >
> > This makes the transformer not just a black box but a living concept 
> network with pathways carved by strategies.
> >
> > Safety and Reflection
> >
> > Every strategy’s TOTE includes policy tests. Unsafe plans are stopped or 
> restructured. Uncertainty checks trigger escalation or deferral. Logs are 
> signed and auditable, so the system’s actions can be replayed and verified. 
> Meta-strategies monitor performance, spawn new strategies when failures 
> cluster, and adjust intensity rules when needed.
> >
> > This keeps the growth of the system accountable.
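The policy-test step inside each TOTE can be sketched as a gate that returns a verdict before any Operate runs. The banned-action list, the uncertainty threshold, and the three verdicts are invented for illustration:

```python
# Policy-test sketch: each TOTE test step also runs safety checks.
# Unsafe plans are blocked; high-uncertainty plans are escalated.

BANNED = {"delete_backups", "disable_logging"}  # illustrative deny-list

def policy_test(plan, uncertainty, max_uncertainty=0.3):
    """Return 'ok', 'escalate', or 'block' for a candidate plan."""
    if any(step in BANNED for step in plan):
        return "block"        # unsafe plan: stop or restructure
    if uncertainty > max_uncertainty:
        return "escalate"     # too uncertain: defer to oversight
    return "ok"

verdict = policy_test(["compute", "report"], uncertainty=0.1)
```

Signed, auditable logs would record every verdict alongside the plan, so a meta-strategy can later replay decisions and spot clustering failures.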
> >
> > Conclusion: A Call for Comment
> >
> > This is my vision: a strategy-native neural system that does not merely 
> respond but calls strategies like a mind does.
> >
> > Every strategy is a TOTE loop, not just the Goal Setter.
> >
> > Goals carry intensity, giving the system direction and drive.
> >
> > The controller orchestrates expert strategies, tools, and memory.
> >
> > A concept network underlies it all — a transformer substrate refined 
> with adapters, tags, gating, and annotations.
> >
> > RAG and tools extend its reach.
> >
> > Scheduled fine-tuning ensures it grows daily from its own experience.
> >
> > I put this forward as a Request for Comment. What breaks here? What’s 
> missing? How do we measure intensity best? Which strategies deserve to be 
> trained first? Where are the risks in daily consolidation? How should 
> gating be engineered for efficiency?
> >
> > This is not just an assistant design. It is a sketch of a mind: one that 
> sets goals, desires outcomes, tests and operates with feedback, reaches 
> outward for tools and memory, and grows stronger with each cycle.
> >
> > I welcome input, critique, and imagination. Together we can refine it — 
> a mind of strategies carved into a unified network of concepts, guided by 
> goals that pull with desire.
> >
> > How can this be interfaced with, joined with Opencog?
> >
> > --
> > You received this message because you are subscribed to the Google 
> Groups "opencog" group.
> > To unsubscribe from this group and stop receiving emails from it, send 
> an email to [email protected].
> > To view this discussion visit 
> https://groups.google.com/d/msgid/opencog/b41659bc-7f3a-4383-83d2-fffd35bea4abn%40googlegroups.com
> .
>
>
>
> -- 
> Patrick: Are they laughing at us?
> Sponge Bob: No, Patrick, they are laughing next to us.
>

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion visit 
https://groups.google.com/d/msgid/opencog/a78caec9-d05f-43d3-b5f6-7bc5a38774abn%40googlegroups.com.
