I just wrote up a new blog post on ... well, the usual topic. I'm cc'ing
the Link Grammar mailing list, as it has been instrumental in awakening me to
these ideas.

-- Linas

---------- Forwarded message ---------
From: OpenCog Brainwave <[email protected]>
Date: Thu, Jun 10, 2021 at 6:55 PM
Subject: [New post] Everything is a Network
To: <[email protected]>



New post on *OpenCog Brainwave*: Everything is a Network
<https://blog.opencog.org/2021/06/10/everything-is-a-network/>
by Linas Vepstas <https://blog.opencog.org/?author=5>

The goal of AGI
<https://en.wikipedia.org/wiki/Artificial_general_intelligence> is to
create a thinking machine, a thinking organism, an algorithmic means
of knowledge
representation
<https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning>,
knowledge
discovery <https://en.wikipedia.org/wiki/Knowledge_extraction> and
self-expression <https://en.wikipedia.org/wiki/Natural_language_generation>.
There are two conventional approaches to this endeavor. One is the ad hoc
assembly of assorted technology pieces-parts
<https://www.youtube.com/watch?v=y_oem9BqUTI>, with the implicit belief
that, after some clever software engineering, it will just come alive. The
other approach is to propose some grand over-arching theory-of-everything
that, once implemented in software, will just come alive and become the
Singularity <https://en.wikipedia.org/wiki/Technological_singularity>.

This blog post is a sketch of the second case. As you read what follows,
your eyes might glaze over, and you might think to yourself, "oh this is
silly, why am I wasting my time reading this?" The reason for this is that,
to say what I need to say, I must necessarily talk in such generalities,
and provide such silly, childish examples, that it all seems a bit vapid.
The problem is that a theory of everything must necessarily talk about
everything, which is hard to do without saying things that seem obvious. Do
not be fooled. What follows is backed up by some deep and very abstract
mathematics that few have access to. I'll try to summon a basic
bibliography at the end, but, for most readers who have not been studying
the mathematics of knowledge for the last few decades, the learning curve
will be impossibly steep. This is an expedition to the Everest of
intellectual pursuits. You can come at this from any (intellectual) race,
creed or color; but the formalities will likely exhaust you. That's OK. If
you have 5 or 10 or 20 years, you can train and work out and lift weights.
You can get there. And so... on with the show.

The core premise is that "everything is a network
<https://en.wikipedia.org/wiki/Network_theory>" -- By "network", I mean a
graph <https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)>,
possibly with directed edges, usually with typed
<https://en.wikipedia.org/wiki/Type_theory> edges, usually with weights,
numbers, and other data on each vertex or edge. By "everything" I mean
"everything". Knowledge, language, vision, understanding, facts, deduction,
reasoning, algorithms, ideas, beliefs ... biological molecules...
everything.
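To make the "network" idea concrete, here is a minimal sketch in Python of a graph with typed, directed, weighted edges. All of the names and edge types below are invented for illustration; this is not the project's actual data structure:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    """A directed, typed, weighted edge between two named vertices."""
    src: str
    dst: str
    etype: str      # the edge's type, e.g. a relation like "has-part"
    weight: float   # an observation count, probability, cost, ...

# A graph is just a collection of such edges.
graph = [
    Edge("thigh-bone", "hip-bone", etype="connected-to", weight=1.0),
    Edge("car", "wheel", etype="has-part", weight=0.9),
]

def out_edges(graph, src, etype):
    """All edges of a given type leaving a given vertex."""
    return [e for e in graph if e.src == src and e.etype == etype]
```

The point of the types and weights is that queries and statistics can be taken over them, which is what the rest of this post depends on.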

A key real-life "fact" about the "graph of everything" is that it consists
almost entirely of repeating sub-patterns. For example, "the thigh bone is
connected to the hip bone <https://en.wikipedia.org/wiki/Dem_Bones>" --
this is true generically for vertebrates
<https://en.wikipedia.org/wiki/Vertebrate>, no matter which animal it might
be, whether it's alive or dead, imaginary or real. The patterns may be
trite, or they may be complex. For images/vision, an example might be "select
all photos containing a car <https://en.wikipedia.org/wiki/CAPTCHA>" --
superficially, this requires knowing how cars look alike, and what part of
the pattern is important (wheels, windshields) and what is not (color,
parked in a lot or flying through space
<https://where-is-tesla-roadster.space/live>).

The key learning task is to find such recurring patterns, both in fresh
sensory input (what "the computer" is seeing/hearing/reading right now) and
in stored knowledge (when processing a dataset - previously-learned,
remembered knowledge - for example, a dataset of medical symptoms). The
task is not just "pattern recognition
<https://en.wikipedia.org/wiki/Pattern_recognition>" identifying a photo of
a car, but of pattern discovery
<https://en.wikipedia.org/wiki/Frequent_pattern_discovery> -- learning that
there are things in the universe called "cars", and that they have wheels
and windows -- extensive and intensive properties.
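As a toy illustration of discovery (as opposed to recognition): given a pile of observed relational triples, one can look for the largest relational "shape" shared across entities, without any prior notion of "car". Everything below is invented for illustration and is far simpler than any real algorithm:

```python
from collections import Counter

# Toy observations: (thing, relation, other-thing) triples, as if
# extracted from sensory input.
observations = [
    ("car-1", "has", "wheel"), ("car-1", "has", "window"),
    ("car-2", "has", "wheel"), ("car-2", "has", "window"),
    ("car-2", "has", "red-paint"),
    ("house-1", "has", "window"), ("house-1", "has", "door"),
]

def shape(entity):
    """The outgoing relational shape of one entity."""
    return frozenset((rel, obj) for subj, rel, obj in observations
                     if subj == entity)

# Count how often each shared shape recurs across pairs of entities.
entities = {subj for subj, _, _ in observations}
common = Counter()
for a in entities:
    for b in entities:
        if a < b:  # each unordered pair once
            common[shape(a) & shape(b)] += 1

# Prefer the largest shared shape: a candidate "kind of thing".
best = max(common, key=lambda s: (len(s), common[s]))
```

Here `best` comes out as the wheel-and-window shape: the learner has discovered that some things in its world are "car-like", without ever being told so.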

Learning does not mean "training
<https://en.wikipedia.org/wiki/Training,_validation,_and_test_sets>" -- of
course, one can train, but AGI cannot depend on some pre-existing dataset,
gathered by humans, annotated by humans. Learning really means that,
starting from nothing at all, except one's memories, one's sensory inputs,
and one's wits and cleverness, one discovers something new, and remembers
it.

OK, fine, the above is obvious to all. The novelty begins here: The best
way to represent a graph with recurring elements in it is with "jigsaw
puzzle <https://en.wikipedia.org/wiki/Jigsaw_puzzle> pieces" (and NOT with
vertexes and edges!). The pieces represent the recurring elements, and the
"connectors" on the piece indicate how the pieces are allowed to join
together. For example, the legbone has a jigsaw-puzzle-piece connector on
it that says it can only attach to a hipbone. This is true not only
metaphorically, but (oddly enough) literally! So when I say "everything is
a network" and "the network is a composition of jigsaw puzzle pieces", the
deduction is "everything can be described with these (abstract) jigsaw
pieces."
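The connector idea fits in a few lines of Python. Following the Link Grammar convention, a connector has a type and a direction, and a "+" connector can mate only with a "-" connector of the same type, just as a tab fits only the matching slot on a neighboring piece. The "lexicon" below is a made-up illustration:

```python
def mates(conn_a, conn_b):
    """Two connectors join iff their types match and their
    directions are opposite (one '+', one '-')."""
    type_a, dir_a = conn_a[:-1], conn_a[-1]
    type_b, dir_b = conn_b[:-1], conn_b[-1]
    return type_a == type_b and {dir_a, dir_b} == {"+", "-"}

# Illustrative pieces: the thigh bone offers a hip-joint tab; the hip
# bone offers the matching slot (plus a tab of its own for the spine).
pieces = {
    "thigh-bone": ["HIP+"],
    "hip-bone":   ["HIP-", "SPINE+"],
}
```

The same two-line `mates` rule works whether the pieces are bones, words, or atoms; only the type vocabulary changes.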

That this is the case in linguistics has been repeatedly rediscovered by
more than a few linguists. It is explained perhaps most clearly and
directly in the original Link Grammar papers
<https://www.cs.cmu.edu/afs/cs.cmu.edu/project/link/pub/www/papers/ps/tr91-196.pdf>
<https://www.cs.cmu.edu/afs/cs.cmu.edu/project/link/pub/www/papers/ps/LG-IWPT93.pdf>,
although I can point at some other writings as well: one from
a "classical"
(non-mathematical) humanities-department linguist
<https://www.academia.edu/36534355/The_Molecular_Level_of_Lexical_Semantics_by_EA_Nida>;
another from a hard-core mathematician - a category theorist - who
rediscovered this from thin air
<http://www.cs.ox.ac.uk/people/bob.coecke/NewScientist.pdf>. Once you know
what to look for, it's freakin' everywhere. Say, in biology, the Krebs cycle
<https://en.wikipedia.org/wiki/Citric_acid_cycle> (citric acid cycle) -
some sugar molecules come in, some ATP goes out, and these chemicals relate
to each other not only abstractly as jigsaw-pieces, but also literally, in
that they must have the right shapes
<https://en.wikipedia.org/wiki/Molecular_recognition>! The carbon atom
itself is of this very form: it can connect, by bonds, in very specific
ways. Those bonds, or rather, the possibility of those bonds, can be
imagined as the connecting tabs on jigsaw-puzzle pieces.  This is not just
a metaphor, it can also be stated in a very precise mathematical sense. (My
lament: the mathematical abstraction to make this precise puts it out of
reach of most.)

The key learning task is now transformed into one of discerning the shapes
of these pieces
<https://github.com/opencog/atomspace/blob/master/opencog/sheaf/docs/sheaves.pdf>,
given a mixture of "what is known already" plus "sensory data". The
scientific endeavor is then: "How to do this?" and "How to do this quickly,
efficiently, effectively?" and "How does this relate to other theories,
e.g. neural networks
<https://en.wikipedia.org/wiki/Artificial_neural_network>?" I believe the
answer to the last question is "yes, it's related", and I can kind-of
explain how
<https://github.com/opencog/learn/blob/master/learn-lang-diary/skippy.pdf>.
The answer to the first question is "I have a provisional way of doing this
<https://github.com/opencog/learn>, and it seems to work
<https://github.com/opencog/learn/blob/master/learn-lang-diary/connector-sets-revised.pdf>".
The middle question - efficiency? Ooooof. This part is ... unknown.
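To give a flavor of the provisional approach (heavily simplified; the link types and words below are invented), one can extract a "connector set" for each item from a pile of observed links, and then notice when two items end up with the same shape:

```python
# Observed links between items, e.g. word pairs drawn from many
# sentences. Each triple is (left item, right item, link type).
observed_links = [
    ("the", "cat", "DET"), ("cat", "ran", "SUBJ"),
    ("the", "dog", "DET"), ("dog", "ran", "SUBJ"),
]

def connector_sets(links):
    """Each item's shape: its link types, with '+' when the item sits
    on the left of the link and '-' when it sits on the right."""
    sets = {}
    for left, right, ltype in links:
        sets.setdefault(left, []).append(ltype + "+")
        sets.setdefault(right, []).append(ltype + "-")
    return {w: tuple(sorted(cs)) for w, cs in sets.items()}

shapes = connector_sets(observed_links)
```

In this toy run, "cat" and "dog" come out with identical shapes: evidence, gathered from raw observation alone, that they belong to the same class. Clustering such shapes is where the real work (and the real statistics) lives.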

There is an adjoint task to learning, and that is expressing and
communicating. Given some knowledge, represented in terms of such jigsaw
pieces, how can it be converted from its abstract form (sitting in RAM, on
the computer disk), into communications: a sequence of words and sentences,
or a drawing or painting?

That's it. That's the meta-background. At this point, I imagine that you,
dear reader, probably feel no wiser than you did before you started
reading. So what can I say to impart actual wisdom? Well, let's try an argument
from authority <https://en.wikipedia.org/wiki/Argument_from_authority>: a
jigsaw-puzzle piece is an object in an (asymmetric) monoidal category
<https://en.wikipedia.org/wiki/Monoidal_category>. The internal language of
that category is ... a language ... a formal language
<https://en.wikipedia.org/wiki/Formal_language> having a syntax
<https://en.wikipedia.org/wiki/Syntax>. Did that make an impression?
Obviously, languages (the set of all syntactically valid expressions)
and model-theoretic
theories <https://en.wikipedia.org/wiki/Model_theory> are dual to
one another (this is obvious only if you know model theory). The learning
task is to discover the structure
<https://en.wikipedia.org/wiki/Model_(model_theory)>, the collection of
types <https://en.wikipedia.org/wiki/Type_(model_theory)>, given the
language <https://en.wikipedia.org/wiki/Text_corpus>. There is an
abundance of machine-learning software that can do this in narrow, specific
domains. There is no machine learning software that can do this in the
fully generic, fully abstract setting of ... jigsaw puzzle pieces.
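For readers who want one concrete instance of the category-theoretic claim: in Lambek's pregroup formalism (close in spirit to what Coecke's group works with), each connector is a left or right adjoint of a basic type, and a jigsaw piece is a tensor product of such types. A sentence parses exactly when the adjoints cancel:

```latex
% Basic types: n (noun phrase), s (sentence).
% Adjoints cancel: x^l \, x \to 1 and x \, x^r \to 1.
\text{the} : n \, n^l \qquad \text{cat} : n \qquad \text{ran} : n^r s
% Concatenating the pieces and cancelling adjoints:
(n \, n^l)(n)(n^r s) \;\to\; n \, (n^r s) \;\to\; s
```

The left adjoint $n^l$ is a "+" connector looking rightward for a noun, and $n^r$ is the matching "-" connector; the reduction to the single type $s$ is the completed jigsaw puzzle.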

Don't laugh. Reread this blog post from the beginning, and everywhere that
you see "jigsaw piece", think "syntactic, lexical element of a monoidal
category", and everywhere you see "network of everything", think "model
theoretic language".  Chew on this for a while, and now think: "Is this
doable? Can this be encoded as software? Is it worthwhile? Might this
actually work?". I hope that you will see the answer to all of these
questions is yes.

And now for the promised bibliography. The topic is both deep and broad. There's
a lot to comprehend, a lot to master, a lot to do. And, ah, I'm exhausted
from writing this; you might be exhausted from reading.  A provisional
bibliography can be obtained from two papers I wrote on this topic:

   - Sheaves: A Topological Approach to Big Data
     <https://github.com/opencog/atomspace/blob/master/opencog/sheaf/docs/sheaves.pdf>
   - Neural-Net vs. Symbolic Machine Learning
     <https://github.com/opencog/learn/blob/master/learn-lang-diary/skippy.pdf>

The first paper is rather informal. The second invokes a fair bit of math.
Both have bibliographies. There are additional PDFs in each of the
directories that fill in more details.

This is the level I am currently trying to work at. I invite all interested
parties to come have a science party, and play around and see how far this
stuff can be made to go.
*Linas Vepstas <https://blog.opencog.org/?author=5>* | June 10, 2021 |
URL: https://wp.me/p9hhnI-cl







-- 
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.
