>
> My matrix representation is based on the observation that AB != BA in the
> composition of concepts, for example "john loves mary != mary loves john".
>  I'm still trying to determine if the matrix representation is sound and
> consistent with our common-sense view of concepts, but it looks promising...


I may be jumping to conclusions, but it sounds like what you're doing is
placing A/"john", B/"mary", etc. as labels for both rows and columns, where
the row label represents the first value and the column represents the
second (or maybe vice versa), so that you can represent "john loves mary"
by placing a nonzero value in the entry indexed by row "john" and column
"mary". The most obvious interpretation of this nonzero value would be the
probability that "john loves mary" is true, or perhaps a certainty or some
similar measure.
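To make my reading of your scheme concrete, here is a tiny sketch of what I have in mind (the names and the particular certainty values are made up for illustration):

```python
# Hypothetical sketch: one "loves" matrix whose rows index the first
# participant and whose columns index the second, with each entry holding
# the probability/certainty that the relation holds.
people = ["john", "mary", "sue"]
idx = {name: i for i, name in enumerate(people)}

# loves[i][j] = certainty that people[i] loves people[j] (values made up)
loves = [[0.0, 0.9, 0.0],   # john loves mary: 0.9
         [0.1, 0.0, 0.0],   # mary loves john: 0.1
         [0.0, 0.0, 0.0]]

def certainty(relation, subj, obj):
    """Look up the certainty that `subj relation obj` is true."""
    return relation[idx[subj]][idx[obj]]

print(certainty(loves, "john", "mary"))  # 0.9
print(certainty(loves, "mary", "john"))  # 0.1
```

Note that the asymmetry you mention ("john loves mary" != "mary loves john") falls out directly, since the matrix need not be symmetric.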

As you pointed out, this is a nice generalization of a basic adjacency
matrix, with nonzero entries marking adjacencies (directed links or edges)
in the conceptual graph. Matrices of this form could be mapped to directed
graphs with links not only labeled "loves", but having a secondary label
indicating the associated probability or certainty value of the link. In my
own system, I keep my internal representation directly in terms of nodes &
links rather than such a generalized adjacency matrix, but the specific
data structure is really only an implementation detail, not intrinsic to
the structure of the information being stored. We are basically doing the
same thing, in other words, provided my understanding of your
representational method is correct.
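The matrix-to-graph mapping I'm describing is mechanical; a sketch, with an illustrative matrix (not your actual data):

```python
# Sketch: a probability-valued adjacency matrix maps to a directed graph
# whose links carry both the relation label and a certainty value.
people = ["john", "mary"]
loves = [[0.0, 0.9],
         [0.1, 0.0]]

def matrix_to_links(label, matrix, names):
    """Return (source, label, target, certainty) for each nonzero entry."""
    links = []
    for i, row in enumerate(matrix):
        for j, p in enumerate(row):
            if p != 0.0:
                links.append((names[i], label, names[j], p))
    return links

print(matrix_to_links("loves", loves, people))
# [('john', 'loves', 'mary', 0.9), ('mary', 'loves', 'john', 0.1)]
```

Whether you store the matrix or the link list is, as I said, an implementation detail; the two carry the same information.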

> I'm still trying to determine if the matrix representation is sound and
> consistent with our common-sense view of concepts, but it looks promising...


I have spent a lot of time thinking about this problem, albeit in terms of
semantic nets rather than the matrices that correspond to them. I think
your approach is consistent with how we internally work with concepts, but
it is probably incomplete. For example, what happens when you want to
represent the probability or certainty that "john" is the one participating
as the subject in this "loves" relationship independently from the
probability/certainty that "mary" is the direct object? Or what if you have
lower probability/certainty for the classification of the relationship as
"loves" (as opposed to say, "hates") than you do for the individuals
filling the subject and direct object roles? It seems, then, that you need
three probability/certainty numbers for each position in the matrix: one
for the subject, one for the direct object, and one for the relationship type.

Instead of multiple values in a single cell of the matrix -- or rather three
parallel matrices with the same row and column labels -- it seems more
natural to me to make the clause/predicate/act/relationship a separate
value in itself, so the original matrix is decomposed into (1) a "subject"
matrix, mapping the probability/certainty of "john" (among other people)
participating in the clause as the subject, (2) a "direct object" matrix,
mapping the probability/certainty of "mary" (among other people)
participating in the clause as the direct object, and (3) a "predicate"
matrix, mapping the probability/certainty that the clause's predicate is
"loves" (among other possible predicates). Each of these different
matrices, "subject", "direct object", and "predicate", maps to a link label
in the corresponding directed graph, and each link receives its own
probability/certainty value independent of the others. The representational
advantage of making the clause itself a row/column label only grows when
prepositional phrases, adverbs, indirect objects, and other clausal
modifiers start to come into play, and the utility of your matrix trick is
preserved by simply adding a new matrix for each constituent role in the
clause.
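The decomposition I'm proposing might look like this in miniature (clause identifiers, role tables, and certainty values are all hypothetical; I use one table per constituent role, indexed by clause):

```python
# Sketch: reify the clause itself ("c1" ~ "john loves mary") and keep one
# certainty table per constituent role, each independent of the others.
# subject[c][e]: certainty that entity e fills the subject role of clause c
subject = {"c1": {"john": 0.95, "mary": 0.05}}
# direct_object[c][e]: certainty that e fills the direct-object role
direct_object = {"c1": {"john": 0.1, "mary": 0.9}}
# predicate[c][p]: certainty that the clause's predicate is p
predicate = {"c1": {"loves": 0.6, "hates": 0.3}}

def clause_reading(c):
    """Return the most certain (subject, predicate, object) reading of c."""
    best = lambda table: max(table[c], key=table[c].get)
    return best(subject), best(predicate), best(direct_object)

print(clause_reading("c1"))  # ('john', 'loves', 'mary')
```

Adding an "indirect object" table, an "adverb" table, and so on extends the scheme without disturbing what is already stored, which is the advantage I was pointing at.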



On Sun, Nov 18, 2012 at 8:10 PM, YKY (Yan King Yin, 甄景贤) <
[email protected]> wrote:

> On Sat, Nov 10, 2012 at 3:44 AM, Aaron Hosford <[email protected]> wrote:
>
>> My "matrix trick" may be able to map propositions to a high-dimensional
>>> vector space, but my assumption that concepts are matrices (that are also
>>> rotations in space) may be unjustified.  I need to find a set of criteria
>>> for matrices to represent concepts faithfully.  This direction is still
>>> hopeful.
>>
>>
>> I'm curious what your "matrix trick" is. Are you familiar with adjacency
>> matrices? They're the simplest and most common way of representing directed
>> graphs as matrices. http://en.wikipedia.org/wiki/Adjacency_matrix Semantic
>> nets typically have sparse adjacency matrices, and there are a lot of good
>> algorithms and libraries out there for efficiently representing and
>> manipulating sparse matrices.
>>
>
>
> Sorry, almost missed your post.  My method is to *represent* logical
> atoms as square matrices with real entries.  This allows me to convert the
> matrices to vectors (by simply flattening the matrices) and calculate the
> distances (i.e., similarities) between them via the Euclidean norm.
>
> My matrices are different from adjacency matrices, which have integer
> entries.  The continuous values potentially allow learning via propagation
> of errors similar to back-propagation for neural nets.
>
> My matrix representation is based on the observation that AB != BA in the
> composition of concepts, for example "john loves mary != mary loves john".
>  I'm still trying to determine if the matrix representation is sound and
> consistent with our common-sense view of concepts, but it looks promising...
>
> YKY
>
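For what it's worth, the flattening-and-distance trick quoted above can be sketched in a few lines (the matrices standing in for "john" and "mary" are arbitrary examples, not your actual representation):

```python
# Sketch of the quoted idea: represent each concept as a real-valued
# square matrix, flatten it to a vector, and measure similarity by
# Euclidean distance. Composition is matrix product, which does not
# commute, so AB != BA in general.
import math

def flatten(matrix):
    """Row-major flattening of a square matrix into a vector."""
    return [x for row in matrix for x in row]

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def compose(a, b):
    """Matrix product of two square matrices of the same size."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Arbitrary example matrices standing in for two concepts:
john = [[1.0, 0.2], [0.0, 0.5]]
mary = [[0.9, 0.3], [0.1, 0.5]]

# Non-commutativity: "john loves mary" != "mary loves john"
assert compose(john, mary) != compose(mary, john)
print(euclidean(flatten(john), flatten(mary)))
```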



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
