No, it takes a fixed "at" value. You can modify it to do whatever you
want. You will see that it doesn't bother with users who have little
data -- fewer than 2*at data points.
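
Concretely, the per-user loop skips such users up front. Here is a
simplified sketch of that guard (not the exact source -- walkUsers is
a hypothetical name -- but DataModel, PreferenceArray and
LongPrimitiveIterator are the real Taste types):

  import org.apache.mahout.cf.taste.common.TasteException;
  import org.apache.mahout.cf.taste.impl.common.LongPrimitiveIterator;
  import org.apache.mahout.cf.taste.model.DataModel;
  import org.apache.mahout.cf.taste.model.PreferenceArray;

  static void walkUsers(DataModel dataModel, int at) throws TasteException {
    LongPrimitiveIterator it = dataModel.getUserIDs();
    while (it.hasNext()) {
      long userID = it.nextLong();
      PreferenceArray prefs = dataModel.getPreferencesFromUser(userID);
      if (prefs.length() < 2 * at) {
        continue;  // fewer than 2*at prefs: skip this user entirely
      }
      // ... hold out the user's top "at" prefs as the relevant set,
      // train on the rest, then ask for "at" recommendations ...
    }
  }

Roughly, the top "at" items become the held-out relevant set, so at
least that many again must remain as training data -- hence the 2*at
floor.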

On Fri, Jan 25, 2013 at 6:23 PM, Zia mel <[email protected]> wrote:
> Interesting. Using
>
>   IRStatistics stats = evaluator.evaluate(recommenderBuilder,
>       null, model, null, 5,
>       GenericRecommenderIRStatsEvaluator.CHOOSE_THRESHOLD,
>       1.0);
>
> Can it be adjusted for each user? In other words, is there a way to
> select a threshold instead of using 5? Hmm, something like selecting
> y sets, where each set has a minimum of z users?
>
>
>
> On Fri, Jan 25, 2013 at 12:09 PM, Sean Owen <[email protected]> wrote:
>> The way I do it is to set x differently for each user -- to the
>> number of items in the user's test set -- and then ask for x
>> recommendations. Note that this makes precision == recall (see the
>> toy example after the quoted thread), but it dodges this problem.
>>
>> Otherwise, if you fix x, the condition you need is really stronger:
>> each user needs >= x *test set* items in addition to their training
>> set items for the test to be fair.
>>
>>
>> On Fri, Jan 25, 2013 at 4:10 PM, Zia mel <[email protected]> wrote:
>>> When selecting precision at x, let's say 5, should I check that
>>> all users have 5 items or more? For example, if a user has 3 items
>>> and they were removed as the top items, then how can the
>>> recommender suggest anything, since there are no items left to
>>> learn from?
>>> Thanks!
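
P.S. To see why setting x to the size of each user's test set makes
precision == recall: both metrics then divide the same hit count by
the same number. A toy, Mahout-independent illustration (the item IDs
are made up):

  import java.util.Arrays;
  import java.util.HashSet;
  import java.util.List;
  import java.util.Set;

  public class PrecisionEqualsRecall {
    public static void main(String[] args) {
      // Held-out "relevant" test items for one user.
      Set<Long> relevant = new HashSet<>(Arrays.asList(10L, 20L, 30L));
      // Ask for exactly relevant.size() recommendations (x = 3 here).
      List<Long> recommended = Arrays.asList(10L, 99L, 30L);

      long hits = recommended.stream().filter(relevant::contains).count();
      double precision = (double) hits / recommended.size(); // hits / x
      double recall    = (double) hits / relevant.size();    // hits / |test|

      // Same denominator, so the two are identical.
      System.out.println("precision = " + precision + ", recall = " + recall);
    }
  }

With 2 of the 3 recommendations hitting the 3-item test set, both come
out to 2/3.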
