Yes, I would still say so. You could still easily find this too slow
if you're using user-user similarities and there are a lot of users
and relatively few items behind those 100M data points, or vice versa
with item-item similarities and many items. Past this point it's almost
certainly too slow; before this point it could still be slow. You would
tend to choose user-based if you have relatively fewer users than items.
I don't know that there's a hard-and-fast guideline there.
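For reference, a minimal user-based setup with the Taste API looks
roughly like the sketch below. The data file name, the neighborhood
size of 10, and the user ID are just placeholder assumptions, not
anything from this thread.

import java.io.File;
import java.util.List;

import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class UserBasedExample {
  public static void main(String[] args) throws Exception {
    // "ratings.csv" is a placeholder: one "userID,itemID,preference" line per data point
    DataModel model = new FileDataModel(new File("ratings.csv"));
    // user-user similarity; this is the part that gets expensive as user count grows
    UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
    // neighborhood of the 10 most similar users (10 is an arbitrary example value)
    UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, model);
    Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);
    // top 5 recommendations for user 1234 (placeholder ID)
    List<RecommendedItem> recommendations = recommender.recommend(1234L, 5);
    for (RecommendedItem item : recommendations) {
      System.out.println(item);
    }
  }
}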

On Wed, Oct 26, 2011 at 2:50 PM, Grant Ingersoll <[email protected]> wrote:
> Sorry, should have been more clear.  I was referring to if one is using a
> user-based recommender (e.g. GenericUserBasedRecommender) vs. an item-based
> recommender.  Our general recommendation is that user-based approaches won't
> scale, and I was wondering what the general cutoff is on a single machine, more
> or less.  Is it still 100M data points, roughly speaking?
>