YKY,
Although I don't have a good understanding of what you are doing and
thinking about, I appreciate that you find terms known to the broader
AI/AGI audience to explain your ideas.

Will the branches (or leaf branches), or other parts of the logic formula
trees, represent categories other than the de facto categories that might
be inferred from the terms represented at those points?
Jim Bromer


On Wed, Feb 5, 2014 at 2:14 PM, YKY (Yan King Yin, 甄景贤) <
[email protected]> wrote:

> On Fri, Jan 31, 2014 at 6:59 PM, John Rose <[email protected]> wrote:
>
>> Not sure if this is what you are asking, but maybe you could use NCMs
>> (Neutrosophic Cognitive Maps) with a neutrosophic adjacency matrix? That
>> might eliminate discrete “jumps”….
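>>
>> For concreteness, a minimal Python/NumPy sketch (my own toy encoding,
>> keeping the indeterminate "I" edges in a separate mask; this is
>> illustrative, not the formalism from the NCM literature):
>>
>>     import numpy as np
>>
>>     # Toy neutrosophic adjacency matrix over three concepts, split into
>>     # a determinate part W (weights in {-1, 0, 1}) and a mask I that
>>     # marks the indeterminate edges
>>     W = np.array([[0.,  1.,  0.],
>>                   [0.,  0., -1.],
>>                   [1.,  0.,  0.]])
>>     I = np.array([[0., 0., 1.],
>>                   [0., 0., 0.],
>>                   [0., 0., 0.]])
>>
>>     def step(state, ind):
>>         """One synchronous update: activation spreads through W, and a
>>         concept reached through an I-edge is flagged indeterminate."""
>>         new_state = np.sign(state @ W)            # thresholded update
>>         new_ind = np.clip(state @ I + ind, 0, 1)  # spread indeterminacy
>>         return new_state, new_ind
>>
>>     s, ind = step(np.array([1., 0., 0.]), np.zeros(3))
>>     print(s, ind)   # activations, and which concepts are indeterminate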
>>
>>
>>
>> John
>>
>
>
> Thanks, I will have a look at the NCM thesis.
>
> What I'm trying to do is similar to neural-symbolic integration, but my
> scope is broader, in the sense that I would consider any spatial technique,
> not just neural.
>
> I have looked at a number of neural-symbolic proposals, but they don't
> seem to be particularly efficient.  They prove that the integration is
> feasible, but they're still far from practical.
>
> However, I am particularly impressed with the following:
>
> 1.  Paul Smolensky's "Tensor product variable binding and the
> representation of symbolic structures in connectionist systems" (1990).  (I
> think Ben recommended this one to me...)
>
> It's capable of representing Lisp-like trees using neural networks, via
> vector sums and tensor products.  This is very close to my idea of using
> algebraic sums and products to represent logic formula trees.  I'm still
> trying to understand Smolensky's use of tensor products.
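>
> Roughly, the binding works like this (a minimal NumPy sketch; the
> variable names are mine, not Smolensky's notation): each filler vector
> is bound to a role vector by an outer product, the bindings are
> superposed by summation, and a filler is recovered by multiplying the
> sum with its role vector.
>
>     import numpy as np
>
>     d = 8
>     rng = np.random.default_rng(0)
>
>     # Orthonormal role vectors ("left child", "right child"), taken as
>     # rows of a random orthogonal matrix
>     Q = np.linalg.qr(rng.standard_normal((d, d)))[0]
>     r_left, r_right = Q[0], Q[1]
>
>     # Filler vectors for the symbols at the two leaves
>     f_a = rng.standard_normal(d)
>     f_b = rng.standard_normal(d)
>
>     # Bind each filler to its role with an outer (tensor) product,
>     # then superpose the two bindings by summation
>     T = np.outer(f_a, r_left) + np.outer(f_b, r_right)
>
>     # Unbind: with orthonormal roles, T @ r_left recovers f_a
>     print(np.allclose(T @ r_left, f_a))    # True
>     print(np.allclose(T @ r_right, f_b))   # True
>
> Nesting deeper trees binds a whole subtree representation to a role
> again, which is where the higher-order tensors come in.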
>
> His book "The harmonic mind" (2006) may be easier to read.
>
> 2.  "Parsing Natural Scenes and Natural  Language with Recursive Neural
> Networks" Socher, Lin, Ng, Manning (2011) is also very impressive.  They're
> able to use a hybrid neural-tree structure to learn to parse natural
> language sentences and visual scenes.  Note: their ANN is "recursive" but
> not "recurrent", it's actually feed-forward.
>
> It's very inspiring because parsing is a process that can require a logic
> engine, and yet they're able to use a neural network to perform the same
> function...  I'm trying to see where exactly the 'cheating' is taking
> place.... =)
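>
> The recursive part is simple to sketch: a single weight matrix maps two
> child vectors to a parent vector of the same dimension, so it can be
> applied bottom-up over any binary tree (a toy NumPy version with
> random, untrained weights; the names are illustrative):
>
>     import numpy as np
>
>     d = 4
>     rng = np.random.default_rng(0)
>     W = 0.1 * rng.standard_normal((d, 2 * d))  # shared composition weights
>     b = np.zeros(d)
>
>     def compose(left, right):
>         """Merge two child vectors into one parent vector of size d."""
>         return np.tanh(W @ np.concatenate([left, right]) + b)
>
>     # Build "((the cat) sat)" bottom-up, reusing the same weights at
>     # every node of the tree
>     the, cat, sat = rng.standard_normal((3, d))
>     noun_phrase = compose(the, cat)
>     sentence = compose(noun_phrase, sat)
>     print(sentence.shape)   # (4,) -- same dimension at every level
>
> Because parent and child vectors live in the same space, this one
> compose step (plus a learned score for which pair to merge next) is
> essentially the whole "parser".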
>
> Logic is slow; my purpose is to replace the logic engine with something
> faster (but approximate), without losing the universal expressive power
> of logic.
>


