Zeroing in on the topic:
I have:
1) a set of raw input vectors of a given length, one for each item.
Each value in the vectors is geometric, not bag-of-words or similar.
The matrix is [# items x # dimensions].
2) An SVD of that matrix:
a left matrix of [# items x # features], times a diagonal matrix of
singular values [# features x # features], times a right matrix of
[# features x # dimensions].
3) The first few columns of the left matrix are the interesting left
singular vectors.
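To keep the shapes straight, here is a minimal numpy sketch of the
setup above (the sizes and variable names are made up for
illustration; this is numpy, not Mahout):

import numpy as np

n_items, n_dims = 1000, 50               # hypothetical sizes
A = np.random.rand(n_items, n_dims)      # raw item matrix, [# items x # dimensions]

# Economy-size SVD: A == U @ np.diag(s) @ Vt
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert U.shape == (n_items, n_dims)      # left singular vectors; one row per item
assert s.shape == (n_dims,)              # singular values, in descending order
assert Vt.shape == (n_dims, n_dims)      # right singular vectors, one per row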
I would like to:
1) relate the singular vectors to the item vectors, so that each
singular vector picks out points in the "hot spots" of the item space.
2) find the opposites: each singular vector defines an axis with two
endpoints, and both endpoints represent "hot spots" in the item space.
Given the first 3 singular vectors, there are 6 "hot spots" in the
item vectors, one for each end of each axis. What transforms are
needed to get the item vectors and the singular vector endpoints into
the same space? I'm not finding the exact sequence; my best guess so
far is sketched below.
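A minimal numpy sketch of that guess (self-contained; the toy sizes
and names are mine). The key step: projecting the item vectors onto
right singular vector j gives exactly U[:, j] * s[j], so the items and
the axis endpoints land in the same k-dimensional space, and the "hot
spots" are the items with the most extreme coordinates:

import numpy as np

A = np.random.rand(1000, 50)             # item matrix, [# items x # dimensions]
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 3
coords = A @ Vt[:k].T                    # item coordinates on the top-k singular axes
# The projection is redundant if you kept U: coords == U[:, :k] * s[:k]
assert np.allclose(coords, U[:, :k] * s[:k])

for j in range(k):
    order = np.argsort(coords[:, j])     # ascending along axis j
    print("axis", j, "negative end:", order[:5])   # hot spot at the -v_j endpoint
    print("axis", j, "positive end:", order[-5:])  # hot spot at the +v_j endpoint

If that is right, the six hot spots for the first three singular
vectors are just the items with the largest and smallest entries in
the first three columns of U.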
A use case for this is onboarding a new user: it gives a quick
assessment by asking where the user falls on the few dominant axes of
items: "Transformers 3: The Stupiding" vs. "Crazy Bride Wedding Love
Planner"?
On Mon, Jul 11, 2011 at 8:56 PM, Lance Norskog <[email protected]> wrote:
> SVDRecommender is intriguing, thanks for the pointer.
>
> On Sun, Jul 10, 2011 at 12:15 PM, Ted Dunning <[email protected]> wrote:
>> Also, item-item similarity is often (nearly) the result of a matrix product.
>> If yours is, then you can decompose the user x item matrix and the desired
>> eigenvalues are the singular values squared and the eigenvectors are the
>> right singular vectors of the decomposition.
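
[A quick numpy check of the identity Ted describes; the toy user x
item matrix and the names below are mine:]

import numpy as np

A = np.random.rand(200, 30)               # hypothetical user x item matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Item-item product: A.T @ A == V @ diag(s**2) @ V.T
evals, evecs = np.linalg.eigh(A.T @ A)    # eigh = symmetric eigensolver, ascending order
assert np.allclose(evals[::-1], s**2)     # eigenvalues are the singular values squared
assert np.allclose(np.abs(evecs[:, ::-1]), np.abs(Vt.T))  # eigenvectors match V up to sign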
>>
>> On Sun, Jul 10, 2011 at 2:51 AM, Sean Owen <[email protected]> wrote:
>>
>>> So it sounds like you want the SVD of the item-item similarity matrix?
>>> Sure, you can use Mahout for that. If you are not in Hadoop land then look
>>> at SVDRecommender to crib some related code. It is decomposing the user-item
>>> matrix though.
>>>
>>> But for this special case of a symmetric matrix your singular vectors are
>>> the eigenvectors, which you may find much easier to compute.
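
[And a check of Sean's shortcut, again on a toy matrix of my own: for
a symmetric positive semi-definite matrix such as an item-item
product, the SVD and the eigendecomposition coincide, and eigh is the
cheaper call:]

import numpy as np

A = np.random.rand(200, 30)
S = A.T @ A                               # symmetric PSD item-item matrix
evals, evecs = np.linalg.eigh(S)          # cheaper: symmetric eigensolver
U2, s2, Vt2 = np.linalg.svd(S)            # pricier: general SVD of the same matrix
assert np.allclose(evals[::-1], s2)       # PSD: eigenvalues == singular values
assert np.allclose(np.abs(evecs[:, ::-1]), np.abs(Vt2.T))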
>>>
>>> I might restate the interpretation.
>>> The 'size' of these vectors is not what matters to your question. It is
>>> which elements (items) have the smallest vs. largest values.
>>> On Jul 10, 2011 3:08 AM, "Lance Norskog" <[email protected]> wrote:
--
Lance Norskog
[email protected]