On Tue, Jul 19, 2011 at 1:07 AM, Sebastian Schelter <[email protected]> wrote:

> On 19.07.2011 01:32, Ted Dunning wrote:
>
>> Well... it makes it uncomputable in explicit form.  Sometimes there are
>> implicit forms for the matrix that keep the size in bounds.  For instance,
>> limited-rank decompositions (aka truncated singular value decompositions)
>> can be represented by storing two skinny matrices and a diagonal that
>> don't take much more memory/disk than the original data.
>>
>
> Aren't the user/item feature matrices already a form of this?
>

Yes.
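
For concreteness, here is a minimal sketch of that implicit form in
Python/numpy (not the Mahout API; the ratings matrix is random stand-in
data, and k is an arbitrary rank):

import numpy as np

A = np.random.rand(1000, 500)   # stand-in for a ratings matrix
k = 20                          # rank of the truncation

U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]   # two skinny matrices + diagonal

# Any single entry of the rank-k approximation can be recovered on demand
# without materializing the full dense product:
approx_00 = (U_k[0] * s_k) @ Vt_k[:, 0]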


> I think the basic question was how to compute recommendations for a
> particular user. You could predict his preference for all items he has not
> yet seen by multiplying the item features matrix with his user feature
> vector to get the estimated preferences, but you cannot do this for all
> users at once, right?
>

If you trim the recommendations to the top N, you probably can do this
computation.
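
Roughly like this (a hedged sketch, not Mahout code: user_vec and
item_features are assumed to come from the factorization above, and
seen is the set of item indices the user has already rated):

def top_n_for_user(user_vec, item_features, seen, n=10):
    scores = item_features @ user_vec      # one dense vector of estimates
    scores[list(seen)] = -np.inf           # mask already-seen items
    top = np.argpartition(-scores, n)[:n]  # candidate top n, unordered
    return top[np.argsort(-scores[top])]   # ordered by estimated preference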


> So the question would be how to find the initial "candidate items" from the
> item feature matrix.
>

Precisely put.  You can do this ahead of time and save vats of work, or just
do the multiplication, which will take a while, but you only need to keep one
dense vector at a time.
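
That loop would look something like this (reusing top_n_for_user from the
sketch above; seen_by_user and emit are hypothetical stand-ins for however
you track rated items and write results out):

for u in range(user_features.shape[0]):
    # only one dense score vector is live per iteration, never the
    # full users x items matrix
    recs = top_n_for_user(user_features[u], item_features, seen_by_user[u])
    emit(u, recs)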
