Also, a more recent summary of why I think their view of the *brain* is
oversimplified is here:

http://hplusmagazine.com/2012/07/20/how-the-brain-works/

-- Ben G

On Mon, Nov 19, 2012 at 4:48 PM, Ben Goertzel <[email protected]> wrote:

>
> I believe my response to Hawkins' book, written in 2004, still holds as a
> response to Kurzweil's book (with its similar theme) as well:
>
>
> http://www.goertzel.org/dynapsyc/2004/OnBiologicalAndDigitalIntelligence.htm
>
> -- Ben Goertzel
>
>
> On Mon, Nov 19, 2012 at 4:42 PM, Micah Blumberg <[email protected]> wrote:
>
>> YKY, your approach does not personally make sense to me. Naturally, I hope
>> you are successful. Have you read either "On Intelligence" by Jeff Hawkins
>> or "How to Create a Mind" by Ray Kurzweil? The latter book was recently
>> published. I suspect that many people trying to build AGI will be able to
>> consider a new approach after reading it. Please let me know if you are
>> aware of that kind of AI and what advantages your planned system may have
>> over it.
>>
>>
>> On Sun, Nov 18, 2012 at 9:20 PM, Aaron Hosford <[email protected]> wrote:
>>
>>> My matrix representation is based on the observation that AB != BA in
>>>> the composition of concepts, for example "john loves mary != mary loves
>>>> john".  I'm still trying to determine if the matrix representation is sound
>>>> and consistent with our common-sense view of concepts, but it looks
>>>> promising...
>>>
>>>
>>> I may be jumping to conclusions, but it sounds like what you're doing is
>>> placing A/"john", B/"mary", etc. as labels for both rows and columns, where
>>> the row label represents the first value and the column label the second
>>> (or maybe vice versa), so that you can represent "john loves mary" by
>>> placing a nonzero value in the entry indexed by row "john" and column
>>> "mary". The most obvious interpretation for this nonzero value would be the
>>> probability that "john loves mary" is true, or it could possibly be
>>> certainty or some other similar measure.
>>>
>>> As you pointed out, this is a nice generalization of a basic adjacency
>>> matrix, with nonzero entries marking adjacencies (directed links or edges)
>>> in the conceptual graph. Matrices of this form could be mapped to directed
>>> graphs with links not only labeled "loves", but having a secondary label
>>> indicating the associated probability or certainty value of the link. In my
>>> own system, I keep my internal representation directly in terms of nodes &
>>> links rather than such a generalized adjacency matrix, but the specific
>>> data structure is really only an implementation detail, not intrinsic to
>>> the structure of the information being stored. We are basically doing the
>>> same thing, in other words, provided my understanding of your
>>> representational method is correct.
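If I've read the representation right, a minimal sketch might look like the
following (all names and probability values are made up for illustration):

```python
import numpy as np

# Hypothetical sketch of a generalized adjacency matrix for the "loves"
# relation: rows are subjects, columns are direct objects, and each
# entry holds the probability/certainty of the directed link.
people = ["john", "mary", "sue"]
idx = {name: i for i, name in enumerate(people)}

loves = np.zeros((len(people), len(people)))
loves[idx["john"], idx["mary"]] = 0.9  # "john loves mary", certainty 0.9

# Recover the directed links: edges labeled "loves" with a secondary
# probability/certainty label, as described above.
links = [(people[r], "loves", people[c], loves[r, c])
         for r, c in zip(*np.nonzero(loves))]
# links == [("john", "loves", "mary", 0.9)]
```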
>>>
>>> I'm still trying to determine if the matrix representation is sound and
>>>> consistent with our common-sense view of concepts, but it looks 
>>>> promising...
>>>
>>>
>>> I have spent a lot of time thinking about this problem, albeit in terms
>>> of semantic nets rather than the matrices that correspond to them. I think
>>> your approach is consistent with how we internally work with concepts, but
>>> it is probably incomplete. For example, what happens when you want to
>>> represent the probability or certainty that "john" is the one participating
>>> as the subject in this "loves" relationship independently from the
>>> probability/certainty that "mary" is the direct object? Or what if you have
>>> lower probability/certainty for the classification of the relationship as
>>> "loves" (as opposed to say, "hates") than you do for the individuals
>>> filling the subject and direct object roles? It seems, then, that you need
>>> three probability/certainty numbers for each position in the matrix: one for
>>> the subject, one for the direct object, and one for the relationship type.
>>>
>>> Instead of multiple values in a single cell of the matrix -- or rather 3
>>> parallel matrices with the same row and column labels -- it seems more
>>> natural to me to make the clause/predicate/act/relationship a separate
>>> value in itself, so the original matrix is decomposed into (1) a "subject"
>>> matrix, mapping the probability/certainty of "john" (among other people)
>>> participating in the clause as the subject, (2) a "direct object" matrix,
>>> mapping the probability/certainty of "mary" (among other people)
>>> participating in the clause as the direct object, and (3) a "predicate"
>>> matrix, mapping the probability/certainty that the clause's predicate is
>>> "loves" (among other possible predicates). Each of these different
>>> matrices, "subject", "direct object", and "predicate", maps to a link label
>>> in the corresponding directed graph, and each link receives its own
>>> probability/certainty value independent of the others. The representational
>>> advantage of making the clause itself a row/column label only grows when
>>> prepositional phrases, adverbs, indirect objects, and other clausal
>>> modifiers start to come into play, and the utility of your matrix trick is
>>> preserved by simply adding a new matrix for each constituent role in the
>>> clause.
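A minimal sketch of that decomposition (clause IDs, role names, and
probability values are all hypothetical):

```python
import numpy as np

# The clause "c1" ~ "john loves mary" gets its own label, and each
# constituent role gets its own matrix, i.e. its own link label in the
# corresponding directed graph, with an independent probability each.
clauses = ["c1"]
fillers = ["john", "mary", "loves"]
ci = {c: i for i, c in enumerate(clauses)}
fi = {f: i for i, f in enumerate(fillers)}

subject = np.zeros((len(clauses), len(fillers)))
direct_object = np.zeros((len(clauses), len(fillers)))
predicate = np.zeros((len(clauses), len(fillers)))

subject[ci["c1"], fi["john"]] = 0.95        # certainty the subject is "john"
direct_object[ci["c1"], fi["mary"]] = 0.90  # certainty the direct object is "mary"
predicate[ci["c1"], fi["loves"]] = 0.60     # certainty the predicate is "loves" (vs. "hates")
```

Adding a prepositional phrase or indirect object then just means adding one
more matrix of the same shape for that role.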
>>>
>>>
>>>
>>> On Sun, Nov 18, 2012 at 8:10 PM, YKY (Yan King Yin, 甄景贤) <
>>> [email protected]> wrote:
>>>
>>>> On Sat, Nov 10, 2012 at 3:44 AM, Aaron Hosford <[email protected]> wrote:
>>>>
>>>>> My "matrix trick" may be able to map propositions to a
>>>>>> high-dimensional vector space, but my assumption that concepts are 
>>>>>> matrices
>>>>>> (that are also rotations in space) may be unjustified.  I need to find a
>>>>>> set of criteria for matrices to represent concepts faithfully.  This
>>>>>> direction is still hopeful.
>>>>>
>>>>>
>>>>> I'm curious what your "matrix trick" is. Are you familiar with
>>>>> adjacency matrices? They're the simplest and most common way of
>>>>> representing directed graphs as matrices.
>>>>> http://en.wikipedia.org/wiki/Adjacency_matrix Semantic nets typically
>>>>> have sparse adjacency matrices, and there are a lot of good algorithms and
>>>>> libraries out there for efficiently representing and manipulating sparse
>>>>> matrices.
>>>>>
>>>>
>>>>
>>>> Sorry, almost missed your post.  My method is to *represent* logical
>>>> atoms as square matrices with real entries.  This allows me to convert the
>>>> matrices to vectors (by simply flattening the matrices) and calculate the
>>>> distances (i.e., similarities) between them via the Euclidean norm.
>>>>
>>>> My matrices are different from adjacency matrices, which have integer
>>>> entries.  The continuous values potentially allow learning via propagation
>>>> of errors, similar to back-propagation for neural nets.
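As a sanity check on my reading of this, the flatten-and-compare step might
look like the following (the random matrices are placeholders for learned
concept matrices):

```python
import numpy as np

# Sketch of the method as described: logical atoms as square real
# matrices, flattened to vectors, compared via the Euclidean norm.
rng = np.random.default_rng(0)
john = rng.normal(size=(4, 4))   # square real matrix for one atom
mary = rng.normal(size=(4, 4))   # square real matrix for another

def distance(a, b):
    # Flatten each matrix into a vector and take the Euclidean norm
    # of the difference as the (dis)similarity measure.
    return np.linalg.norm(a.flatten() - b.flatten())
```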
>>>>
>>>> My matrix representation is based on the observation that AB != BA in
>>>> the composition of concepts, for example "john loves mary != mary loves
>>>> john".  I'm still trying to determine if the matrix representation is sound
>>>> and consistent with our common-sense view of concepts, but it looks
>>>> promising...
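The non-commutativity motivating the representation is easy to confirm with
two arbitrary illustrative matrices:

```python
import numpy as np

# Composing the two matrices in different orders gives different
# results, mirroring the asymmetry of "john loves mary" vs.
# "mary loves john".  The particular matrices are arbitrary examples.
john = np.array([[0.0, 1.0],
                 [0.0, 0.0]])
mary = np.array([[0.0, 0.0],
                 [1.0, 0.0]])

assert not np.array_equal(john @ mary, mary @ john)
```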
>>>>
>>>> YKY
>>>>    *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
>>>> <https://www.listbox.com/member/archive/rss/303/23050605-bcb45fb4> |
>>>> Modify <https://www.listbox.com/member/?&;> Your Subscription
>>>> <http://www.listbox.com>
>>>>
>>>
>>>
>>
>>
>>
>> --
>>
>> ~~
>> Warmly,
>>
>>
>> Micah
>> 7 1 4 ) 6 9 9 - 4 2 1 3 (voicemail and texting same digits)
>>
>
>
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> "My humanity is a constant self-overcoming" -- Friedrich Nietzsche
>
>


-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche


