On Fri, Apr 14, 2023 at 8:10 PM Félix <[email protected]> wrote:

> Perhaps the years ahead will lead to other neuronal system architectures
> that will complement the "textual-generation-prediction" ones that are
> currently in vogue.

A new architecture is already here! I was going to write this up in another
thread, but I might as well reply here :-)

- Starting point: this article
<https://www.quantamagazine.org/a-new-approach-to-computation-reimagines-artificial-intelligence-20230413/>
in Quanta magazine.
- The "solve a classic problem"
<https://www.nature.com/articles/s42256-023-00630-8> link in the article
leads to a paywalled paper in Nature. However, googling turns up a free
arXiv preprint <https://arxiv.org/abs/2203.04571>.

Important: the "classic problem" is *not* a math problem, but a psych test
called Raven's Progressive Matrices
<https://en.wikipedia.org/wiki/Raven%27s_Progressive_Matrices>.

I have been studying this preprint closely. It describes NVSA, the
Neuro-Vector-Symbolic Architecture. Googling this term led me to the home
page for HD/VSA (Hyperdimensional Computing / Vector Symbolic Architectures)
<https://www.hd-computing.com/#h.zgreogawc8qc>. This must be one of the
best home pages ever written!

The preprint proposes an architecture whose front end involves *perception*
and whose back end involves *reasoning*. Here is an excerpt from the summary:

"The efficacy of NVSA is demonstrated by solving the Raven’s progressive
matrices datasets...end-to-end training of NVSA achieves a new record of
87.7% average accuracy in RAVEN, and 88.1% in I-RAVEN datasets. Moreover,
...[our method] is two orders of magnitude faster [than existing
state-of-the art]. Our code is available at
https://github.com/IBM/neuro-vector-symbolic-architectures.";

This GitHub repo contains nothing but Python code. Naturally, I imported
the files into Leo. leo-editor-contrib now contains nvsa.leo
<https://github.com/leo-editor/leo-editor-contrib/blob/master/StudyOutlines/nvsa.leo>.

As expected, the code's math consists *entirely* of matrix operations using
the torch and numpy libs. The only "if" statements involve handling user
options. There are a few "for" loops; I don't know how those loops affect
performance.

*Summary of the ideas*

Here is a summary of my study so far, taken mainly from the HD/VSA page:

The math is unbelievably simple. Anybody who knows high school algebra will
likely understand it. *All* data are vectors in a high-dimensional space;
n = 10,000 dimensions is typical. The natural measure of distance between
two vectors is cosine similarity
<https://en.wikipedia.org/wiki/Cosine_similarity>: the cosine of the angle
between them. In other words, the *distance* between vectors is taken to be
the *angle* between the vectors.
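
Here is a minimal numpy sketch of the idea (my own illustration, not code
from the nvsa repo):

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine of the angle between vectors a and b."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(0)
    n = 10_000  # typical HD/VSA dimensionality
    a = rng.standard_normal(n)
    b = rng.standard_normal(n)

    print(cosine_similarity(a, a))  # 1.0: a vector has zero angle with itself
    print(cosine_similarity(a, b))  # ~0.0: random vectors are nearly orthogonal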

Here's the kicker. Almost all pairs of vectors in this high-dimensional
space are *almost orthogonal*. The flip side is extremely useful: vectors
that are *not* nearly orthogonal must be related. Queries are *very simple*
compositions of vectors. These queries contain "cruft", but this cruft
doesn't matter: the query is close to equal to the desired answer! These
vectors remind me of git's hashes. "Vector collisions" essentially never
happen in a high-dimensional space!
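
To make that concrete, here is another sketch of mine (using random bipolar
vectors and plain addition as the composition, one common VSA convention;
NVSA's exact scheme may differ). The "cruft" from bundling three symbols
together barely disturbs their similarity to the query:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    codebook = rng.choice([-1, 1], size=(5, n))  # five random bipolar symbol vectors

    # A "very simple composition": bundle three symbols by plain addition.
    query = codebook[0] + codebook[1] + codebook[2]

    # Cosine similarity of the query against every codebook entry.
    sims = codebook @ query / (np.linalg.norm(codebook, axis=1) * np.linalg.norm(query))
    print(np.round(sims, 2))  # ~[0.58 0.58 0.58 0.0 0.0]

The three bundled symbols each score about 1/sqrt(3) ≈ 0.58, while the two
distractors score roughly zero, so recovering the answer is just a
nearest-neighbor lookup against the codebook.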

Furthermore, with a clever design of the *contents* of the vectors, *queries
are bi-directional*!! Given the result of a query, it's trivial to recover
which vectors were involved in the original query.
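
A final sketch (again mine, showing one common binding scheme: elementwise
multiplication of bipolar vectors, as in MAP-style VSAs; the preprint's
exact operations may differ). Each bipolar vector is its own multiplicative
inverse, so unbinding uses the very same operation as binding:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 10_000
    role = rng.choice([-1, 1], size=n)    # e.g. the "color" slot
    filler = rng.choice([-1, 1], size=n)  # e.g. the value "red"

    bound = role * filler  # bind: elementwise multiply

    # Unbind with the same operation: role * role is all ones, so the
    # other factor pops right back out.
    print(np.array_equal(bound * role, filler))  # True: recovers "red"
    print(np.array_equal(bound * filler, role))  # True: recovers "color"

In a real system the unbound vector is usually noisy (because several bound
pairs were bundled together), so the last step is a cleanup: a
nearest-neighbor search against the codebook, exactly as in the previous
sketch.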

*Summary*

The advantages of NVSA:

- Integration of perception (front end) with reasoning (back end).
  I don't yet understand the details.
- No searching required!
- No back propagation needed for training!
- Full transparency of reasoning in the back end.
- Dead simple math, supported by far-from-dead-simple theory.

Edward
