Hi Linas,

Please share those PDFs.  Pretty please!  🙏

I’ve been searching for a unifying theory that can encompass both formal 
reasoning systems and neural nets for some time, and I suspect you might have 
it.  Or at the very least you’re much closer than I am.

My project (Hippocampus) was/is a value-flow network that can represent 
programs, not unlike the AtomSpace.  I opted for connectivity rules that I call 
“flux attenuation”.  That is to say, each linkage expresses a value between 
0 and 1, and collectively the “conductance” of any path through the graph can 
be evaluated using Ohm’s law.  For example, a CondLink (in Atomese parlance) 
can be thought of as a semiconductor whose conductance changes depending 
on the value passing through it.
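To make the Ohm’s-law idea concrete, here is a minimal sketch (the function 
names are mine for illustration, not HC’s actual API), treating each linkage 
as a conductance in (0, 1] and combining them the way series and parallel 
resistors combine:

```python
def path_conductance(attenuations):
    """Conductance of one path whose linkages carry attenuation values in (0, 1].

    Treating each linkage as a conductance g_i, a series path obeys Ohm's law
    for series elements: 1/G = sum(1/g_i).
    """
    if any(g <= 0 for g in attenuations):
        return 0.0  # a fully blocked linkage kills the whole path
    return 1.0 / sum(1.0 / g for g in attenuations)

def parallel_conductance(path_conductances):
    """Alternative paths between the same endpoints add their conductances."""
    return sum(path_conductances)
```

So two linkages of attenuation 1.0 in series give a path conductance of 0.5, 
and two such paths in parallel restore a conductance of 1.0.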

I had two reasons for this design choice: (1) I wanted to be able to mix and 
match subgraphs with predictable results (first and foremost, HC aims to be a 
programming language), and (2) I wanted to be able to apply 
quasi-Monte-Carlo methods (low-discrepancy sequence sampling, e.g. Halton 
sequences) to construct a probability distribution function that solves an 
entire graph.
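For the sampling side, the low-discrepancy sequence itself is easy to sketch. 
Below is the standard radical-inverse construction of a Halton sequence (not 
HC code, just the textbook algorithm one would feed into a quasi-Monte-Carlo 
estimate of the distribution):

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in `base` --
    one coordinate of a Halton low-discrepancy sequence."""
    result, f = 0.0, 1.0 / base
    i = index
    while i > 0:
        result += f * (i % base)  # reflect the base-`base` digits about the point
        i //= base
        f /= base
    return result

def halton_points(n, bases=(2, 3)):
    """First n points of a multi-dimensional Halton sequence (indices start at 1)."""
    return [tuple(halton(i, b) for b in bases) for i in range(1, n + 1)]
```

Each point lands in the unit hypercube, filling it far more evenly than 
pseudo-random samples would, which is what makes the QMC convergence rate 
attractive for evaluating a whole graph at once.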

But without the ability to tweak biases, HC networks are pretty much 
untrainable compared with neural networks.  At least I haven’t been able to 
train them to do much beyond some very simple toy problems.  Maybe I’m a bad 
teacher.

HC does allow NNs to be embedded inside HC nodes (e.g. a classifier can use a 
softmax to normalize its outputs into attenuation values), but it feels like 
I’m missing something important.
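For concreteness, the normalization step I have in mind looks roughly like 
this (the function name is illustrative, not HC’s actual interface):

```python
import math

def softmax_attenuations(logits):
    """Map raw classifier outputs (logits) to attenuation values in (0, 1)
    that sum to 1, suitable as weights on the outgoing linkages of an HC node."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

The embedded NN stays a black box; only its normalized outputs enter the 
value-flow graph as linkage attenuations.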

Thank you.

-Luke


> On Jan 17, 2021, at 3:21 PM, Linas Vepstas <[email protected]> wrote:
> ...
> So I've attempted to build the AtomSpace as a place to store and connect-up 
> axioms/sequents/assertions/rules with connections that are 
> probabilities/weights/fuzzy-logic values/etc. -- that is, numbers, or 
> number-like things: qubits/homogeneous spaces/etc. 
> 
> If you study neural networks, you can see that they are densely connected 
> networks, with nodes, and almost all weights between almost all nodes being 
> non-zero. If you study formal mathematical proofs, you can see that they are 
> extremely sparse networks, where every node is connected to only 1 to 3 or 4 
> others, where the weights are exactly true/false/0/1.  If you study natural 
> language, and biochemistry and many other natural phenomena, you find a 
> scale-free network that is neither dense, nor is it sparse, but somewhere in 
> the middle.
> 
> I am deeply interested in converting time-ordered expressions of that network 
> into the underlying structure. (and back). So, by analogy: a seismologist, 
> all they have are some time-series recordings of Earth's vibrations; from 
> that they try to reconstruct the structure of Earth. I have a time series of 
> words, I want to reconstruct the structure of the brain that wrote those 
> words.  And, once reconstructed, what else might that "brain" have said? Just 
> like the Earth model: what other kinds of earthquakes might it produce?
> 
> I've got half-a-dozen PDFs, all 20 to 100 pages long, that spin out each of 
> the above paragraphs into great detail. I think they're important, but I 
> can't get anyone to read them :-) So it goes...
