I've been thinking about these issues for quite some time. It is my 
favorite kind of work. How does one simplify complexity? How can one 
improve a system's expressiveness and performance at the same time? How can 
one build better systems with less work and greater reliability?

Three major recent shifts in my thinking stand out, some echoed by similar 
expressions from Linas in his recent posts on "sheaves" for natural-language 
grammar induction.

1) We've been thinking in nouns when verbs carry all the connectivity: 
treating nodes as the thing, rather than the pipes of semantic connection 
between them. We've been thinking of set membership, i.e. of connections, 
when we should have been thinking of the semantic flows into and out of 
each node. In more concrete hypergraph terms, we need edges that have an 
ontology of connection types and categories with semantic implication. The 
type of connection or relationship between the nodes is the source of all 
semantic content. Nodes in a vacuum mean nothing.
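As a minimal sketch of that first point, with hypothetical names (this is not Atomese, just an illustration): the edge, not the node, carries the semantic payload via its relation type, and a bare node tells you nothing until you ask what typed edges touch it.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    name: str  # a bare label; meaningless in isolation

@dataclass(frozen=True)
class TypedEdge:
    relation: str  # the "verb": chases, eats, is-a, part-of ...
    source: Node
    target: Node

@dataclass
class Hypergraph:
    edges: list = field(default_factory=list)

    def relations_from(self, node):
        """All semantic flows out of a node, keyed by relation type."""
        return [(e.relation, e.target.name)
                for e in self.edges if e.source == node]

cat, mouse = Node("cat"), Node("mouse")
g = Hypergraph()
g.edges.append(TypedEdge("chases", cat, mouse))
g.edges.append(TypedEdge("eats", cat, mouse))

print(g.relations_from(cat))  # [('chases', 'mouse'), ('eats', 'mouse')]
```

The point of the toy: `Node("cat")` in a vacuum is just a string; all the meaning lives in the typed edges attached to it.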

2) The natural unit is a directed subgraph defining a precise semantic 
category instance. Some have an inflowing topology and some an outflowing 
one, but each has a core node: either the verb of the connection, whose 
semantics are represented by the presence of the connection itself, or the 
key noun with an associated attribute set. These subgraphs must include and 
support edges only partially connected into, or out of, the core, akin to 
+ or - connectors in Link Grammar terms.
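A hypothetical sketch of that second point (the names are mine, not from any existing codebase): a semantic unit is a core node plus connectors that may still be dangling, marked "+" or "-" in the Link Grammar sense of which direction they expect to bind.

```python
from dataclasses import dataclass, field

@dataclass
class Connector:
    label: str        # relation type this connector expects, e.g. "S" (subject)
    direction: str    # "+" = flows out of the core, "-" = flows into the core
    bound_to: object = None  # None while the connector is still dangling

@dataclass
class SemanticUnit:
    core: str                    # the verb or key noun at the centre
    connectors: list = field(default_factory=list)

    def dangling(self):
        """Connectors not yet attached to another subgraph."""
        return [c for c in self.connectors if c.bound_to is None]

# "give" wants a subject flowing in, an object and an indirect object flowing out.
give = SemanticUnit("give", [Connector("S", "-"),
                             Connector("O", "+"),
                             Connector("I", "+")])
give.connectors[0].bound_to = "Alice"  # subject attached

print([(c.label, c.direction) for c in give.dangling()])  # [('O', '+'), ('I', '+')]
```

The partially connected edges are exactly the `dangling()` set: the unit is a well-formed fragment even before every connector is bound, which is what lets fragments compose.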

3) We should be thinking of semantic flows instead of the atoms of state 
that result from changes in flows. Semantic flows represent isomorphisms of 
state applied to a subgraph. This happens to be the starting point for 
SingularityNET agents: what flows in as data and what flows out as results? 
So we are working on this problem at the high level while I and others are 
thinking about how best to represent the abstract constructs definable in 
Atomese.
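One way to make that third point concrete, as a toy sketch under my own assumed representation (a subgraph as a set of (source, relation, target) triples): model an agent as a pure transform from the subgraph flowing in to the subgraph flowing out, and get pipelines by composing flows rather than by mutating state.

```python
class Flow:
    """A composable semantic flow: a pure transform over a subgraph,
    where a subgraph is a set of (source, relation, target) triples."""

    def __init__(self, transform):
        self.transform = transform

    def apply(self, subgraph):
        return self.transform(subgraph)

    def then(self, other):
        """Compose: what flows out of self flows into other."""
        return Flow(lambda sg: other.apply(self.apply(sg)))

# Two toy flows: one enriches the subgraph, one selects from it.
enrich = Flow(lambda sg: sg | {("cat", "is-a", "animal")})
select_isa = Flow(lambda sg: {t for t in sg if t[1] == "is-a"})

pipeline = enrich.then(select_isa)
result = pipeline.apply({("cat", "chases", "mouse")})
print(result)  # {('cat', 'is-a', 'animal')}
```

Nothing is updated in place: each stage answers "what flows in as data and what flows out as results", and the intermediate states are just values passed along the pipe.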

In many ways, I see these times for AGI as akin to the early days of 
programming when we first made the jump from machine language to assembly 
language and then we got C and Fortran and Cobol, with the semantics tied 
much more closely and directly to the problem domain: whether systems- or 
scientific- or business-programming.

So I see OpenCog Atomese as the assembly code for Sophia's mind. We want it 
to stay flexible because we do not want to limit what is possible. But it 
is too much work to write in assembly all the time. We need compressions of 
complexity and a higher-level form for more efficient and expressive work 
at higher levels.

No one anywhere has yet built the semantic analog to C for AI, let alone 
the more modern variants like Go, Rust, Swift, or even Python. There is an 
impedance mismatch between the way that current batch-oriented, von 
Neumann-bottlenecked systems run and the ideal way that a mind wants to 
learn in parallel. There is a greater need for efficient shared semantic 
context among the many communicating parts, and a greater need for 
visualization of the implications and nuanced semantics implied by the 
connections.

Much work to be done but all identified and doable.
