Here’s my fluffy perspective of this issue so far:

 

The algebraic structures are represented in graphs temporally and
neutrosophically. The atoms dynamically change, splitting and joining, based
on input-complexity flux, with compression of input into local situational
symbol groups. Symbols are generated and reflected dynamically from a
complexity indexing of the input, whether discrete or continuous/analog. The
graph dynamically generates and degenerates atoms, and learning is based on
the algebraic-structure complexity derived from the situational
algebraic-structure complexity of the input. The reaction/interaction of the
graph structure with the input structure is where it gets interesting. I see
everything as symbols: infinite symbols indexed universally, but they
degenerate to finite/discrete symbols locally, which can then make languages
or do whatever. I'm trying to focus the operational complexity in order to
minimize the number of connections. Whether or not it works in reality is
another thing.

 

I think that’s what we are talking about... ? Maybe.. 

 

Those are good, interesting papers you referenced. Ben's is quite an awesome
paper too.

 

I sort of have to slowly arrive at my own view/invention for stuff like this
so that I can understand it better, reinventing the wheel sometimes, but I
look at others' work for inspiration. Even though I don't fully understand
the method in Ben's paper yet, I randomly sample it visually; I read it back
and forth just a few words at a time, never more than one or two sentences
in a row, along with other papers and books. It's aesthetics, I think? Whole
papers are really symbols: one paper is a symbol on a universal index of
symbols, and it can be compressed or decompressed, and you can also view it
initially in a compressed state, I guess... like an FFT or something? And
then they can be cross-correlated, so our whole species, as a multi-agent
intelligence, is populating regions of the universal index...

 

John

 

From: YKY (Yan King Yin, 甄景贤) [mailto:[email protected]] 
Sent: Wednesday, February 5, 2014 2:15 PM
To: AGI
Subject: Re: [agi] Ben's geometry of mind paper

 

On Fri, Jan 31, 2014 at 6:59 PM, John Rose <[email protected]> wrote:

Not sure if this is what you are asking, but maybe you could use NCMs
(Neutrosophic Cognitive Maps) with a neutrosophic adjacency matrix? That
might eliminate discrete "jumps"...

 

John

 

 

Thanks, I will have a look at the NCM thesis.

 

What I'm trying to do is similar to neural-symbolic integration, but my
scope is broader, in the sense that I would consider any spatial technique,
not just neural.

 

I have looked at a number of neural-symbolic proposals, but they don't seem
to be particularly efficient.  So they proved that it is feasible, but
they're still far from practical.

 

However, I am particularly impressed with the following:

 

1.  Paul Smolensky's "Tensor product variable binding and the representation
of symbolic structures in connectionist systems" (1990).  (I think Ben
recommended this one to me...)

It's capable of representing Lisp-like trees using neural networks, via
vector sums and tensor products.  This is very close to my idea of using
algebraic sums and products to represent logic formula trees.  I'm still
trying to understand Smolensky's use of tensor products.
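For what it's worth, the core of Smolensky's binding scheme can be sketched in a few lines of NumPy. This is only an illustrative toy (the role and filler vectors below are made-up stand-ins, not anything from the paper): each filler is bound to a role by an outer (tensor) product, the bindings are superposed by vector addition, and with orthonormal role vectors the fillers can be recovered exactly by unbinding.

```python
import numpy as np

# Orthonormal role vectors (e.g., "left child" and "right child" of a tree node)
r_left  = np.array([1.0, 0.0])
r_right = np.array([0.0, 1.0])

# Filler vectors standing in for two symbols A and B (hypothetical encodings)
f_A = np.array([1.0, 2.0, 3.0])
f_B = np.array([4.0, 5.0, 6.0])

# Bind each filler to its role with a tensor (outer) product,
# then superpose the two bindings with a vector sum
T = np.outer(f_A, r_left) + np.outer(f_B, r_right)

# Because the roles are orthonormal, unbinding is just a
# matrix-vector product with the role vector
recovered_A = T @ r_left   # recovers f_A exactly
recovered_B = T @ r_right  # recovers f_B exactly
```

Nesting works the same way: a bound pair like T can itself serve as a filler bound to a higher role, which is how the Lisp-like trees arise.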

 

His book "The harmonic mind" (2006) may be easier to read.

 

2.  "Parsing Natural Scenes and Natural Language with Recursive Neural
Networks" by Socher, Lin, Ng, and Manning (2011) is also very impressive.
They're able to use a hybrid neural tree structure to learn to parse
natural-language sentences and visual scenes.  Note: their ANN is
"recursive" but not "recurrent"; it's actually feed-forward.
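The recursive-but-feed-forward point is easy to see in code. Below is a minimal sketch of the composition step only (random weights, made-up leaf embeddings, no training or scoring): one shared weight matrix is applied at every merge of the parse tree, so the same function is reused recursively, yet each application is an ordinary feed-forward step.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # embedding dimension (illustrative)

# One composition matrix shared across every node of the tree
W = rng.standard_normal((d, 2 * d)) * 0.1
b = np.zeros(d)

def compose(c1, c2):
    """One recursive-NN step: parent = tanh(W [c1; c2] + b)."""
    return np.tanh(W @ np.concatenate([c1, c2]) + b)

# Random stand-ins for the leaf embeddings of "the cat sat"
the, cat, sat = (rng.standard_normal(d) for _ in range(3))

# Parse tree ((the cat) sat): the same compose() is applied at each
# merge, so the computation is recursive over the tree structure
# but purely feed-forward in its dataflow
noun_phrase = compose(the, cat)
root = compose(noun_phrase, sat)
```

In the actual paper the tree structure itself is also predicted greedily with a scoring layer; here the bracketing is just assumed.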

 

It's very inspiring because parsing is a process that can require a logic
engine, and yet they're able to use a neural network to perform the same
function...  I'm trying to see where exactly the 'cheating' is taking
place.... =)

 

Logic is slow; my purpose is to replace the logic engine with something
faster (but approximate), without losing the universal expressive power
of logic.

 



 




-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
